Microelectronic assemblies, related devices and methods, are disclosed herein. In some embodiments, a microelectronic assembly may include a first die, having a first surface and an opposing second surface, in a first layer, and including a first metallization stack at the first surface; a device layer on the first metallization stack; a second metallization stack on the device layer; and an interconnect on the first surface of the die electrically coupled to the first metallization stack; a conductive pillar in the first layer; and a second die, having a first surface and an opposing second surface, in a second layer on the first layer, wherein the first surface of the second die is coupled to the conductive pillar and to the second surface of the first die by a hybrid bonding region.
Claims:

1. A microelectronic assembly, comprising: a first die, having a first surface and an opposing second surface, in a first layer, the first die including: a first metallization stack at the first surface; a device layer, including a device, on the first metallization stack; a second metallization stack on the device layer; and an interconnect at the first surface electrically coupled to the first metallization stack; a conductive pillar in the first layer; and a second die, having a first surface and an opposing second surface, in a second layer on the first layer, wherein the first surface of the second die is coupled to the second surface of the first die by a hybrid bonding region and is coupled to the conductive pillar.

2. The microelectronic assembly of claim 1, wherein the hybrid bonding region is a first hybrid bonding region, and further comprising: a third die, having a first surface and an opposing second surface, in the second layer, wherein the first surface of the third die is coupled to the second surface of the first die by a second hybrid bonding region.

3. The microelectronic assembly of claim 1, wherein the interconnect is part of a power delivery network.

4. The microelectronic assembly of claim 1, further comprising: a substrate layer, including a micro-through silicon via (µTSV), between the first metallization stack and the device layer.

5. The microelectronic assembly of claim 4, wherein the µTSV electrically couples the device in the device layer to the first metallization stack.

6. The microelectronic assembly of claim 1, further comprising: a package substrate electrically coupled to the first surface of the first die by the interconnect and electrically coupled to the first surface of the second die by the conductive pillar.

7. The microelectronic assembly of claim 1, wherein conductive structures of the first metallization stack are thicker than conductive structures of the second metallization stack.

8. A microelectronic assembly, comprising: a first die, having a first surface and an opposing second surface, in a first layer, the first die including: a substrate including a through-substrate via (TSV) at the first surface; a first metallization stack on the substrate; a device layer, including a device, on the first metallization stack; a second metallization stack on the device layer; and an interconnect at the first surface electrically coupled to the first metallization stack by the TSV in the substrate; a conductive pillar in the first layer; and a second die, having a first surface and an opposing second surface, in a second layer on the first layer, wherein the first surface of the second die is coupled to the conductive pillar and to the second surface of the first die by a hybrid bonding region.

9. The microelectronic assembly of claim 8, wherein the interconnect is part of a power delivery network.

10. The microelectronic assembly of claim 8, wherein the substrate is a first substrate, and further comprising: a second substrate, including a micro-through silicon via (µTSV), between the first metallization stack and the device layer.

11. The microelectronic assembly of claim 10, wherein the µTSV electrically couples the device in the device layer to the first metallization stack.

12. The microelectronic assembly of claim 8, further comprising: a package substrate electrically coupled to the first surface of the first die by the interconnect and electrically coupled to the first surface of the second die by the conductive pillar.

13. The microelectronic assembly of claim 8, further comprising: a third die, having a first surface and an opposing second surface, wherein the second surface of the third die is electrically coupled to the first surface of the first die by the interconnect and electrically coupled to the first surface of the second die by the conductive pillar.

14. The microelectronic assembly of claim 13, wherein the first surface of the third die further includes a second interconnect, and the microelectronic assembly further comprises: a package substrate electrically coupled to the first surface of the third die by the second interconnect.

15. The microelectronic assembly of claim 14, wherein the second interconnect is part of a power delivery network.

16. The microelectronic assembly of claim 8, wherein conductive structures of the first metallization stack are thicker than conductive structures of the second metallization stack.

17. A microelectronic assembly, comprising: a first die in a first dielectric layer, the first dielectric layer having a first surface and an opposing second surface, and the first die including: a substrate including a through-substrate via (TSV) at the first surface; a first metallization stack on the substrate; a device layer, including a device, on the first metallization stack; a second metallization stack on the device layer; and first interconnects at the first surface electrically coupled to the first metallization stack by the TSV in the substrate; a second die in the first dielectric layer, the second die including: a substrate including a through-substrate via (TSV) at the first surface; a first metallization stack on the substrate; a device layer, including a device, on the first metallization stack; a second metallization stack on the device layer; and second interconnects at the first surface electrically coupled to the first metallization stack by the TSV in the substrate; a conductive pillar in the first dielectric layer; a third die, in a second dielectric layer on the second surface of the first dielectric layer, electrically coupled to the conductive pillar, electrically coupled to the first die by a first hybrid bonding region at the second surface of the first dielectric layer, and electrically coupled to the second die by a second hybrid bonding region at the second surface of the first dielectric layer; and a redistribution layer (RDL), having a first surface and an opposing second surface, at the first surface of the first dielectric layer, wherein the second surface of the RDL is electrically coupled to the first surface of the first dielectric layer, and wherein the first surface of the RDL includes third interconnects electrically coupled to the conductive pillar, the first interconnects, and the second interconnects by conductive pathways in the RDL.

18. The microelectronic assembly of claim 17, wherein the first interconnects, the second interconnects, and the third interconnects are part of a power delivery network.

19. The microelectronic assembly of claim 17, wherein the substrate of the first die is a first substrate, and further comprising: a second substrate, including a micro-through silicon via (µTSV), between the first metallization stack and the device layer, wherein the µTSV electrically couples the device in the device layer to the first metallization stack.

20. The microelectronic assembly of claim 17, further comprising: a package substrate electrically coupled to the first surface of the RDL by the third interconnects.
MICROELECTRONIC ASSEMBLIES HAVING BACKSIDE DIE-TO-PACKAGE INTERCONNECTS

Cross-Reference to Related Application(s)

[0001] This application claims the benefit of and hereby incorporates by reference, for all purposes, the entirety of the contents of U.S. Nonprovisional Application No. 17/470,189, filed September 9, 2021, and entitled, "MICROELECTRONIC ASSEMBLIES HAVING BACKSIDE DIE-TO-PACKAGE INTERCONNECTS."

Background

[0002] For reliable operation of integrated circuit (IC) packages, as well as increased manufacturing assembly yields and reduced costs, IC dies and subassemblies may be tested prior to coupling to a package substrate or to each other so that only known good dies and subassemblies are used.

Brief Description of the Drawings

[0003] Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, not by way of limitation, in the figures of the accompanying drawings.

[0004] FIG. 1 is a side, cross-sectional view of an example microelectronic assembly, in accordance with various embodiments.

[0005] FIG. 2 is a side, cross-sectional exploded view of a portion of the microelectronic assembly of FIG. 1, in accordance with various embodiments.

[0006] FIG. 3 is a side, cross-sectional view of an example microelectronic assembly, in accordance with various embodiments.

[0007] FIG. 4 is a side, cross-sectional view of an example microelectronic assembly, in accordance with various embodiments.

[0008] FIG. 5 is a side, cross-sectional view of an example microelectronic assembly, in accordance with various embodiments.

[0009] FIG. 6 is a side, cross-sectional view of an example microelectronic assembly, in accordance with various embodiments.

[0010] FIG. 7 is a side, cross-sectional view of an example microelectronic assembly, in accordance with various embodiments.

[0011] FIGS. 8A-8J are side, cross-sectional views of various stages in an example process for manufacturing the microelectronic assembly of FIG. 3, in accordance with various embodiments.

[0012] FIGS. 9A-9D are side, cross-sectional views of various stages in an example process for manufacturing the microelectronic assembly of FIG. 1, in accordance with various embodiments.

[0013] FIGS. 10A-10G are side, cross-sectional views of various stages in an example process for manufacturing the microelectronic assembly of FIG. 5, in accordance with various embodiments.
[0014] FIGS. 11A-11D are side, cross-sectional views of various stages in an example process for manufacturing the microelectronic assembly of FIG. 7, in accordance with various embodiments.[0015] FIG. 12 is a top view of a wafer and dies that may be included in a microelectronic assembly, in accordance with any of the embodiments disclosed herein.[0016] FIG. 13 is a cross-sectional side view of an IC device that may be included in a microelectronic assembly, in accordance with any of the embodiments disclosed herein.[0017] FIG. 14 is a cross-sectional side view of an IC device assembly that may include a microelectronic assembly, in accordance with any of the embodiments disclosed herein.[0018] FIG. 15 is a block diagram of an example electrical device that may include a microelectronic assembly, in accordance with any of the embodiments disclosed herein.Detailed Description[0019] Microelectronic assemblies, related devices and methods, are disclosed herein. For example, in some embodiments, a microelectronic assembly may include a first die, having a first surface and an opposing second surface, in a first layer, and including a first metallization stack at the first surface; a device layer on the first metallization stack; a second metallization stack on the device layer; and a die- to-package interconnect on the first surface of the die electrically coupled to the first metallization stack; a conductive pillar in the first layer; and a second die, having a first surface and an opposing second surface, in a second layer on the first layer, wherein the first surface of the second die is coupled to the conductive pillar and to the second surface of the first die by a hybrid bonding region.[0020] Coupling two or more components by direct bonding in a multi-die IC package is challenging due to the increasingly small size and thickness of such components, the finer pitch of interconnects, and the reduced thickness of the bonding interface between components (e.g., z-height of die-to-die spacing), among others. Conventional methods for testing die functionality (e.g., to identify known good dies (KGD) during manufacturing) include using standard probing techniques to land on die pads. However, once dies are integrated into subassemblies, die pads may not be available for testing until after the subassembly is integrated into an IC package and thick metal layers (e.g., backside connections) are formed for connecting to a circuit board. In one example, a top wafer and a bottom wafer may be manufactured up to the fine pitch bonding layers, the fine pitch bonding layers of the top and bottom wafers may be attached using wafer-to-wafer bonding techniques, then the backside of the bottom wafer may be thinned to reveal TSVs in the bottom die, and thick metal layers may be formed and electrically coupled to the TSVs, which enable testing for functionality. In another example, the fine pitch bonding layers of a top die and a bottom wafer may be attached using die-to-wafer bonding techniques, then the backside of the bottom wafer may be thinned to reveal TSVs in the bottom die, and thick metal layers may be formed and electrically coupled to the TSVs, which enable testing for functionality. In many such cases, when a bad or non-functioning subassembly is attached to a package, the manufacturing defect units and costs are compounded. Various ones of the microelectronic
assemblies disclosed herein may exhibit better assembly yields during manufacturing, and improved performance and reliability during use, relative to conventional approaches by providing integrated interconnects to a backside of a die that may be used for testing die functionality in subassemblies. For example, the microelectronic assemblies disclosed herein may enable matching high-performance base dies to high performance top dies to achieve the best possible performance, customizing the backside power grid for each base die to reduce the cost or improve the performance, and separating the requirements on the base die signaling vias (e.g., backside-to-frontside vias) from the base die power delivery vias to optimize performance.[0021] In the following detailed description, reference is made to the accompanying drawings that form a part hereof wherein like numerals designate like parts throughout, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized, and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense. [0022] Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order from the described embodiment. Various additional operations may be performed, and/or described operations may be omitted in additional embodiments. [0023] For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). The drawings are not necessarily to scale. Although many of the drawings illustrate rectilinear structures with flat walls and right-angle corners, this is simply for ease of illustration, and actual devices made using these techniques will exhibit rounded corners, surface roughness, and other features.[0024] The description uses the phrases "in an embodiment" or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. As used herein, a "package" and an "IC package" are synonymous, as are a "die" and an "IC die." The terms "top" and "bottom" may be used herein to explain various features of the drawings, but these terms are simply for ease of discussion, and do not imply a desired or required orientation. As used herein, the term "insulating" means "electrically insulating," unless otherwise specified. Throughout the specification, and in the claims, the term "coupled" means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between the things that are connected or an indirect connection, through one or more passive or active intermediary devices. The meaning of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on."
[0025] When used to describe a range of dimensions, the phrase "between X and Y" represents a range that includes X and Y. For convenience, the phrase "FIG. 8" may be used to refer to the collection of drawings of FIGS. 8A-8J, the phrase "FIG. 9" may be used to refer to the collection of drawings of FIGS. 9A-9D, etc. Although certain elements may be referred to in the singular herein, such elements may include multiple sub-elements. For example, "an insulating material" may include one or more insulating materials.[0026] FIG. 1 is a side, cross-sectional view of an example microelectronic assembly, in accordance with various embodiments. The microelectronic assembly 100 may include a multi-layer die subassembly 104 having integrated backside die-to-package (DTP) interconnects 150. As used herein, the term a "multi-layer die subassembly" 104 may refer to a composite die having two or more stacked dielectric layers with one or more dies in each layer, and conductive interconnects and/or conductive pathways connecting the one or more dies, including dies in non-adjacent layers. As used herein, the terms a "multi-layer die subassembly" and a "composite die" may be used interchangeably. As shown in FIG. 1, the multi-layer die subassembly 104 may include a first layer 104-1 having a die 114-1 and conductive pillars 152, and a second layer 104-2 having a die 114-2 and a die 114-3. The first layer 104-1 may include a first surface 170-1 and an opposing second surface 170-2. In particular, the multi-layer die subassembly 104 may include a first die 114-1 in a first dielectric layer 104-1, a second die 114-2 in a second dielectric layer 104-2 coupled to the first die 114-1 by a first hybrid bonding region 130-1, and a third die 114-3 in the second dielectric layer 104-2 coupled to the first die 114-1 by a second hybrid bonding region 130-2. The die 114-1 may include a first metallization stack 126 at a first surface 170-1, a substrate layer 120 on the first metallization stack 126, a device layer 124 having devices 125 on the substrate layer 120, a second metallization stack 122 on the device layer (e.g., at a second surface 170- 2), and DTP interconnects 150 at the first surface 170-1 of the die 114-1 coupled to the first metallization stack 126. The first and second metallization stacks 126, 122 may include a plurality of layers that include an insulating material formed in multiple layers and multiple conductive pathways formed through the insulating material. The conductive pathways in the first and second metallization stacks 126, 122 may include conductive traces and/or conductive vias. The first metallization stack 126 may be referred to herein as "backside metal layers," "thick metallization layers," or other like terms, and the second metallization stack 122 maybe be referred to herein as "active side metal layers" or "thin metallization layers," or other like terms, where the conductive structures of the first metallization stack 126 may be thicker than the conductive structures of the second metallization stack 122. The device layer 124 may include active and passive devices (e.g., transistors, diodes, resistors, inductors, and capacitors, among others). In some embodiments, the device layer 124 may include one or more device layers including transistors (e.g., as discussed below with reference to FIG. 13). 
For example, the device layer 124 may include first and second transistors, where the first transistor may be a p-type metal oxide semiconductor (PMOS) and the second transistor may be an n-type metal oxide
semiconductor (NMOS). The substrate layer 120 may include micro-through silicon vias (pTSVs) 123. The pTSVs 123 may connect the first metallization stack 126 to devices 125 in the device layer 124 through the second metallization stack 122. In some embodiments, the uTSVs 123 have a pitch between 0.01 microns and 0.5 microns. In some embodiments, the substrate layer 120 may be omitted. [0027] The die 114-1 may be coupled to the package substrate 102 by the backside DTP interconnects 150. As used herein, the term "backside DTP interconnects" or "DTP interconnects" may include conductive contacts 132 on the multi-layer die subassembly 104 at the first surface 170-1 of the die 114-1 that are coupled to a first metallization stack 126 (e.g., backside metallization layers) of a die 114-1, and may further include solder 134, or other interconnect structures, and may further include conductive contacts 136 on a surface of a substrate (e.g., a silicon or glass interposer, a package substrate 102 or a circuit board (not shown) in the absence of a package substrate 102 between the multi-layer die subassembly 104 and the circuit board). As used herein, a "conductive contact" may refer to a portion of conductive material (e.g., metal) serving as an electrical interface between different components; conductive interconnects may be recessed in, flush with, or extending away from a surface of a component, and may take any suitable form (e.g., a conductive pad or socket, or portion of a conductive line or via). The conductive contacts 132 on the surface of the die 114-1 may further be coupled to conductive pathways in the die 114-1 (e.g., coupled to devices 125 in the device layer 124 through the pTSVs 123 in the substrate layer 120 and/or through the second metallization stack 122). The DTP interconnects 150 may be configured to route power or signals to and from the dies 114 in the multi-layer die subassembly 104 through the conductive contacts 132, the conductive pillars 152, and/or the first metallization stack 126 in die 114-1.[0028] The dies 114-2, 114-3 in the second layer 104-2 may be coupled to the package substrate 102 through the conductive pillars 152 to form multi-level (ML) interconnects. In particular, the dies 114-2, 114-3 may be coupled to the package substrate 102 through the conductive pillars 152, the conductive contacts 132 on the multi-layer die subassembly 104 (e.g., at the first surface 170-1), the solder 134, and the conductive contacts 136 on the package substrate 102. The ML interconnects may be power delivery interconnects or high speed signal interconnects. As used herein, the term "ML interconnect" may refer to an interconnect that includes a conductive pillar between a first component and a second component where the first component and the second component are not in adjacent layers, or may refer to an interconnect that spans one or more layers (e.g., an interconnect between a first die in a first layer and a second die in a third layer, or an interconnect between a package substrate and a die in a second layer). The dies 114 may include other conductive pathways (e.g., including lines and vias) and/or to other circuitry (not shown) coupled to the respective conductive contacts (e.g., conductive contacts 132 on die 114-1 and/or conductive contacts 110 on dies 114-1, 114-2, 114-3).[0029] The microelectronic assembly 100 may include a second die 114-2 coupled to a first die 114-1 by a hybrid bonding (HB) region 130-1. In particular, as illustrated in FIG. 
2, the HB region 130-1 may
include a HB interface 180-1A at the top surface of the first die 114-1, with the HB interface 180-1A including a set of conductive HB contacts 110 and a HB dielectric 108 around the HB contacts 110 of the HB interface 180-1A. The HB region 130-1 may also include a HB interface 180-1B at the bottom surface of the die 114-2, with the HB interface 180-1B including a set of HB contacts 110 and a HB dielectric 108 around the HB contacts 110 of the HB interface 180-1B. The HB contacts 110 of the HB interface 180-1A of the die 114-1 may align with the HB contacts 110 of the HB interface 180-1B of the die 114-2 so that, in the microelectronic assembly 100, the HB contacts 110 of the die 114-2 are in contact with the HB contacts 110 of the die 114-1. In the microelectronic assembly 100 of FIG. 1, the HB interface 180-1A of the die 114-1 may be bonded (e.g., electrically and mechanically) with the HB interface 180-1B of the die 114-2 to form the HB region 130-1 coupling the die 114-1 and the die 114-2. The second die 114-2 may be further coupled to a conductive pillar 152 by the HB region 130-1.

[0030] The microelectronic assembly 100 may further include a third die 114-3 coupled to a first die 114-1 by a hybrid bonding (HB) region 130-2. In particular, as illustrated in FIG. 2, the HB region 130-2 may include a HB interface 180-2A at the top surface of the first die 114-1, with the HB interface 180-2A including a set of conductive HB contacts 110 and a HB dielectric 108 around the HB contacts 110 of the HB interface 180-2A. The HB region 130-2 may also include a HB interface 180-2B at the bottom surface of the die 114-3, with the HB interface 180-2B including a set of HB contacts 110 and a HB dielectric 108 around the HB contacts 110 of the HB interface 180-2B. The HB contacts 110 of the HB interface 180-2A of the die 114-1 may align with the HB contacts 110 of the HB interface 180-2B of the die 114-3 so that, in the microelectronic assembly 100, the HB contacts 110 of the die 114-3 are in contact with the HB contacts 110 of the die 114-1. In the microelectronic assembly 100 of FIG. 1, the HB interface 180-2A of the die 114-1 may be bonded (e.g., electrically and mechanically) with the HB interface 180-2B of the die 114-3 to form the HB region 130-2 coupling the die 114-1 and the die 114-3. More generally, the HB regions 130 disclosed herein may include two complementary HB interfaces 180 bonded together; for ease of illustration, many of the subsequent drawings may omit the identification of the HB interfaces 180 to improve the clarity of the drawings. The third die 114-3 may be further coupled to a conductive pillar 152 by the HB region 130-2. In some embodiments, the second die 114-2 and/or third die 114-3 may not be coupled to a conductive pillar by a HB region. In such instances, the second die 114-2 and/or third die 114-3 may be coupled to a conductive pillar by other interconnects, such as metal-to-metal interconnects.

[0031] As used herein, the term "hybrid bonding" is used to include techniques in which the HB dielectric 108 of opposing HB interfaces 180 are brought into contact first, then subject to heat and sometimes compression, or techniques in which the HB contacts 110 and the HB dielectric 108 of opposing HB interfaces 180 are brought into contact substantially simultaneously, then subject to heat and compression.
In such techniques, the HB contacts 110 and the HB dielectric 108 at one HB interface 180 are brought into contact with the HB contacts 110 and the HB dielectric 108 at another HB interface
180, respectively, and elevated pressures and/or temperatures may be applied to cause the contacting HB contacts 110 and/or the contacting HB dielectrics 108 to bond. HB interconnects may be capable of reliably conducting a higher current than other types of interconnects; for example, some conventional solder interconnects may form large volumes of brittle IMCs when current flows, and the maximum current provided through such interconnects may be constrained to mitigate mechanical failure. Although FIGS. 1 and 2 show the HB dielectric 108 as extending fully along the entire top surface of the first dielectric layer 104-1, the HB dielectric 108 may extend along only a portion of the bottom surfaces of the second and third dies 114-2, 114-3 where the second and third dies 114-2, 114-3 overlap the first die 114-1.[0032] A HB dielectric 108 may include one or more dielectric materials, such as one or more inorganic dielectric materials. For example, a HB dielectric 108 may include silicon and nitrogen (e.g., in the form of silicon nitride); silicon and oxygen (e.g., in the form of silicon oxide); silicon, carbon, and nitrogen (e.g., in the form of silicon carbon nitride); silicon, carbon and oxygen (e.g., in the form of a carbon- doped silicon oxide); silicon, oxygen, and nitrogen (e.g., in the form of silicon oxynitride); aluminum and oxygen (e.g., in the form of aluminum oxide); titanium and oxygen (e.g., in the form of titanium oxide); hafnium and oxygen (e.g., in the form of hafnium oxide); silicon, oxygen, carbon, and hydrogen (e.g., in the form of tetraethyl orthosilicate (TEOS)); zirconium and oxygen (e.g., in the form of zirconium oxide); niobium and oxygen (e.g., in the form of niobium oxide); tantalum and oxygen (e.g., in the form of tantalum oxide); and combinations thereof.[0033] A HB contact 110 may include a pillar, a pad, or other structure. The HB contacts 110, although depicted in the accompanying drawings in the same manner at both HB interfaces 180 of a HB region 130, may have a same structure at both HB interfaces 180, or the HB contacts 110 at different HB interfaces 180 may have different structures. For example, in some embodiments, a HB contact 110 in one HB interface 180 may include a metal pillar (e.g., a copper pillar), and a complementary HB contact 110 in a complementary HB interface 180 may include a metal pad (e.g., a copper pad) recessed in a dielectric. The HB pads also may have different shapes (e.g., a larger regular polygon on a HB interface and a small regular polygon on a complementary HB interface). A HB contact 110 may include any one or more conductive materials, such as copper, manganese, titanium, gold, silver, palladium, nickel, copper and aluminum (e.g., in the form of a copper aluminum alloy), tantalum (e.g., tantalum metal, or tantalum and nitrogen in the form of tantalum nitride), cobalt, cobalt and iron (e.g., in the form of a cobalt iron alloy), or any alloys of any of the foregoing (e.g., copper, manganese, and nickel in the form of manganin). The pad structure also may include a plurality of metals (e.g., may include a high conductivity metal, such as copper or aluminum, capped with a corrosion resistant metal, such as titanium or gold, or a corrosion resistant alloy, such as manganin). In some embodiments, the HB dielectric 108 and the HB contacts 110 of a HB interface 180 may be manufactured using low- temperature deposition techniques (e.g., techniques in which deposition occurs at temperatures below
250 degrees Celsius, or below 200 degrees Celsius), such as low-temperature plasma-enhanced chemical vapor deposition (PECVD).[0034] FIG. 1 also illustrates the die 114-1 coupled to the package substrate 102 by backside DTP interconnects 150. Although FIG. 1 depicts a particular number of dies 114 coupled to the package substrate 102 and to other dies 114 by HB regions 130, this number and arrangement are simply illustrative, and a microelectronic assembly 100 may include any desired number and arrangement of dies 114 coupled to a package substrate 102 and to other dies 114 by HB regions 130. Although a single reference numeral "108" is used to refer to the HB dielectrics of multiple different HB interfaces 180 (and different HB regions 130), this is simply for ease of illustration, and the HB dielectric 108 of different HB interfaces 180 (even within a single HB region 130) may have different materials and/or structures. Similarly, although a single reference numeral "110" is used to refer to the HB contacts of multiple different HB interfaces 180 (and different HB regions 130), this is simply for ease of illustration, and the HB contacts 110 of different HB interfaces 180 (even within a single HB region 130) may have different materials and/or structures.[0035] The die 114 disclosed herein may include an insulating material (e.g., a dielectric material formed in multiple layers, as known in the art) and multiple conductive pathways formed through the insulating material. In some embodiments, the insulating material of a die 114 may include a dielectric material, such as silicon dioxide, silicon nitride, oxynitride, polyimide materials, glass reinforced epoxy matrix materials, or a low-k or ultra low-k dielectric (e.g., carbon-doped dielectrics, fluorine-doped dielectrics, porous dielectrics, organic polymeric dielectrics, photo-imageable dielectrics, and/or benzocyclobutene-based polymers). In some embodiments, the insulating material of a die 114 may include a semiconductor material, such as silicon, germanium, or a lll-V material (e.g., gallium nitride), and one or more additional materials. For example, an insulating material may include silicon oxide or silicon nitride. The conductive pathways in a die 114 may include conductive traces and/or conductive vias, and may connect any of the conductive contacts in the die 114 in any suitable manner (e.g., connecting multiple conductive contacts on a same surface or on different surfaces of the die 114). Example structures that may be included in the dies 114 disclosed herein are discussed below with reference to FIG. 13. The conductive pathways in the dies 114 may be bordered by liner materials, such as adhesion liners and/or barrier liners, as suitable. In some embodiments, the die 114 is a wafer. In some embodiments, the die 114 is a monolithic silicon, a fan-out or fan-in package die, or a die stack (e.g., wafer stacked, die stacked, or multi-layer die stacked).[0036] In some embodiments, the die 114 may include conductive pathways to route power, ground, and/or signals to/from other dies 114 included in the microelectronic assembly 100. For example, the die 114-1 may include TSVs, including a conductive material via, such as a metal via, isolated from the surrounding silicon or other semiconductor material by a barrier oxide), or other conductive pathways through which power, ground, and/or signals may be transmitted between the package substrate 102
and one or more dies 114 "on top" of the die 114-1 (e.g., in the embodiment of FIG. 1, the dies 114-2 and/or 114-3). In some embodiments, the die 114-1 may not route power and/or ground to the dies 114-2 and 114-3; instead, the dies 114-2, 114-3 may couple directly to power and/or ground lines in the package substrate 102 by ML interconnects (e.g., through conductive contacts 132 and conductive pillars 152). In some embodiments, the die 114-1 in the first layer 104-1, also referred to herein as "base die," "interposer die," or bridge die," may be thicker than the dies 114-2, 114-3 in the second layer 104-2. In some embodiments, a die 114 may span multiple layers of the multi-layer die subassembly 104 (e.g., may span first and second layers 104-1, 104-2). The die 114-1 of the microelectronic assembly 100 may be a single-sided die (in the sense that the die 114-1 only has conductive contacts on a single surface), or, as shown, may be a double-sided die (in the sense that the die 114-1 has conductive contacts on two surfaces (e.g., a top surface and a bottom surface)), and may be a mixed-pitch die (in the sense that the die 114-1 has sets of conductive contacts with different pitches). In some embodiments, the dies 114-2 and/or 114-3 may not include active devices or routing, and may simply provide thermal and/or mechanical support. In such an embodiment, the HB regions 130-1 and/or 130-2 may not include HB contacts 110. In some embodiments, the dies 114-2, 114-3 may include the elements of the die 114-1 (e.g., a first metallization stack 126, a device layer 124 having devices 125, and a second metallization stack 122). In some embodiments, the die 114-1 may be a memory device (e.g., as described below with reference to the die 1502 of FIG. 12), a high frequency serializer and deserializer (SerDes), such as a Peripheral Component Interconnect (PCI) express. In some embodiments, the die 114-1 may be a processing die, a radio frequency chip, a power converter, a network processor, a workload accelerator, or a security encryptor. In some embodiments, the die 114-2 and/or the die 114-3 may be a processing die.[0037] The multi-layer die subassembly 104 may include an insulating material 133 (e.g., a dielectric material formed in multiple layers, as known in the art) to form the multiple layers and to embed one or more dies in a layer. In some embodiments, the insulating material 133 of the multi-layer die subassembly 104 may be a dielectric material, such as an organic dielectric material, a fire retardant grade 4 material ( FR-4), bismaleimide triazine (BT) resin, polyimide materials, glass reinforced epoxy matrix materials, or low-k and ultra low-k dielectric (e.g., carbon-doped dielectrics, fluorine-doped dielectrics, porous dielectrics, and organic polymeric dielectrics). In some embodiments, the die 114 may be embedded in an inhomogeneous dielectric, such as stacked dielectric layers (e.g., alternating layers of different inorganic dielectrics). In some embodiments, the insulating material 133 of the multilayer die subassembly 104 may be a mold material, such as an organic polymer with inorganic silica particles. The multi-layer die subassembly 104 may include one or more ML interconnects through the dielectric material (e.g., including conductive vias and/or conductive pillars, as shown). The multi-layer die subassembly 104 may have any suitable dimensions. For example, in some embodiments, a thickness of the multi-layer die subassembly 104 may be between 100 um and 2000 um. In some
embodiments, the multi-layer die subassembly 104 may be a composite die, such as stacked dies. The multi-layer die subassembly 104 may have any suitable number of layers, any suitable number of dies, and any suitable die arrangement. For example, in some embodiments, the multi-layer die subassembly 104 may have between 3 and 20 layers of dies. In some embodiments, the multi-layer die subassembly 104 may include a layer having between 2 and 50 dies.[0038] The package substrate 102 may include an insulating material (e.g., a dielectric material formed in multiple layers, as known in the art) and one or more conductive pathways to route power, ground, and signals through the dielectric material (e.g., including conductive traces and/or conductive vias, as shown). In some embodiments, the insulating material of the package substrate 102 may be a dielectric material, such as an organic dielectric material, a fire retardant grade 4 material ( FR-4), BT resin, polyimide materials, glass reinforced epoxy matrix materials, organic dielectrics with inorganic fillers or low-k and ultra low-k dielectric (e.g., carbon-doped dielectrics, fluorine-doped dielectrics, porous dielectrics, and organic polymeric dielectrics). In particular, when the package substrate 102 is formed using standard printed circuit board (PCB) processes, the package substrate 102 may include FR-4, and the conductive pathways in the package substrate 102 may be formed by patterned sheets of copper separated by build-up layers of the FR-4. The conductive pathways in the package substrate 102 may be bordered by liner materials, such as adhesion liners and/or barrier liners, as suitable. In some embodiments, the package substrate 102 may be formed using a lithographically defined via packaging process. In some embodiments, the package substrate 102 may be manufactured using standard organic package manufacturing processes, and thus the package substrate 102 may take the form of an organic package. In some embodiments, the package substrate 102 may be a set of redistribution layers formed on a panel carrier by laminating or spinning on a dielectric material, and creating conductive vias and lines by laser drilling and plating. In some embodiments, the package substrate 102 may be formed on a removable carrier using any suitable technique, such as a redistribution layer technique. Any method known in the art for fabrication of the package substrate 102 may be used, and for the sake of brevity, such methods will not be discussed in further detail herein.[0039] In some embodiments, the package substrate 102 may be a lower density medium and the die 114 may be a higher density medium or have an area with a higher density medium. As used herein, the term "lower density" and "higher density" are relative terms indicating that the conductive pathways (e.g., including conductive interconnects, conductive lines, and conductive vias) in a lower density medium are larger and/or have a greater pitch than the conductive pathways in a higher density medium. In some embodiments, a higher density medium may be manufactured using a modified semiadditive process or a semi-additive build-up process with advanced lithography (with small vertical interconnect features formed by advanced laser or lithography processes), while a lower density medium may be a PCB manufactured using a standard PCB process (e.g., a standard subtractive process using etch chemistry to remove areas of unwanted copper, and with coarse vertical interconnect
features formed by a standard laser process). In other embodiments, the higher density medium may be manufactured using semiconductor fabrication process, such as a single damascene process or a dual damascene process. In some embodiments, additional dies may be disposed on the top surface of the dies 114-2, 114-3. In some embodiments, additional components may be disposed on the top surface of the dies 114-2, 114-3. Additional passive components, such as surface-mount resistors, capacitors, and/or inductors, may be disposed on the top surface or the bottom surface of the package substrate 102, or embedded in the package substrate 102.[0040] The backside DTP interconnects 150 disclosed herein may take any suitable form. In some embodiments, the backside DTP interconnects 150 may include solder 134 (e.g., solder bumps or balls that are subject to a thermal reflow to form the interconnects), as shown. In some embodiments, the backside DTP interconnects 150 may include an anisotropic conductive material, such as an anisotropic conductive film or an anisotropic conductive paste. An anisotropic conductive material may include conductive materials dispersed in a non-conductive material. The DTP interconnects 150 may be direct metal-to-metal bonding, such as copper-to-copper bonding. In some embodiments, for example, when the package is a silicon interposer, the DTP interconnects 150 may include hybrid bonding.[0041] The microelectronic assembly 100 of FIG. 1 may also include an underfill material 127. In some embodiments, the underfill material 127 may extend between the die 114-1 and the package substrate 102 around the associated backside DTP interconnects 150. The underfill material 127 may be an insulating material, such as an appropriate epoxy material. In some embodiments, the underfill material 127 may include a capillary underfill, non-conductive film (NCF), or molded underfill. In some embodiments, the underfill material 127 may include an epoxy flux that assists with soldering the die 114-1 to the package substrate 102 when forming the backside DTP interconnects 150, and then polymerizes and encapsulates the backside DTP interconnects 150. The underfill material 127 may be selected to have a coefficient of thermal expansion (CTE) that may mitigate or minimize the stress between the multi-layer die subassembly 104 and the package substrate 102 arising from uneven thermal expansion in the microelectronic assembly 100. In some embodiments, the CTE of the underfill material 127 may have a value that is intermediate to the CTE of the package substrate 102 (e.g., the CTE of the dielectric material of the package substrate 102) and a CTE of the multi-layer die subassembly 104.[0042] The microelectronic assembly 100 of FIG. 1 may also include a circuit board (not shown). The package substrate 102 may be coupled to the circuit board by second-level interconnects at the bottom surface of the package substrate 102. The second-level interconnects may be any suitable second-level interconnects, including solder balls for a ball grid array arrangement, pins in a pin grid array arrangement or lands in a land grid array arrangement. The circuit board may be a motherboard, for example, and may have other components attached to it. The circuit board may include conductive pathways and other conductive contacts for routing power, ground, and signals through the circuit
board, as known in the art. In some embodiments, the second-level interconnects may not couple the package substrate 102 to a circuit board, but may instead couple the package substrate 102 to another IC package, an interposer, or any other suitable component. In some embodiments, the multi-layer die subassembly 104 may not be coupled to a package substrate 102, but may instead be coupled to a circuit board, such as a PCB.[0043] Many of the elements of the microelectronic assembly 100 of FIG. 1 are included in other ones of the accompanying drawings; the discussion of these elements is not repeated when discussing these drawings, and any of these elements may take any of the forms disclosed herein. Further, a number of elements are illustrated in FIG. 1 as included in the microelectronic assembly 100, but a number of these elements may not be present in a microelectronic assembly 100. For example, in various embodiments, the underfill material 127 and the package substrate 102 may not be included. In some embodiments, individual ones of the microelectronic assemblies 100 disclosed herein may serve as a system-in- package (SiP) in which multiple dies 114 having different functionality are included. In such embodiments, the microelectronic assembly 100 may be referred to as an SiP.[0044] FIG. 3 is a side, cross-sectional view of another example microelectronic assembly, in accordance with various embodiments. The microelectronic assembly 100 may include a multi-layer die subassembly 104 having integrated backside DTP interconnects 150. As shown in FIG. 3, the multi-layer die subassembly 104 may include a first layer 104-1 having a die 114-1 and conductive pillars 152, and a second layer 104-2 having a die 114-2. In particular, the multi-layer die subassembly 104 may include a first die 114-1 in a first dielectric layer 104-1 and a second die 114-2 in a second dielectric layer 104-2 coupled to the first die 114-1 by a first hybrid bonding region 130. The die 114-1 may include a first metallization stack 126 at a first surface 170-1, a substrate layer 120 on the first metallization stack 126, a device layer 124 having devices 125 on the substrate layer 120, a second metallization stack 122 on the device layer (e.g., at a second surface 170-2), and DTP interconnects 150 at the first surface 170-1 of the die 114-1 coupled to the first metallization stack 126. In some embodiments, the substrate layer 120 may be omitted. The die 114-1 may be coupled to the package substrate 102 through the backside DTP interconnects 150 and the die 114-2 in the second layer 104-2 may be coupled to the package substrate 102 by ML interconnects.[0045] FIG. 4 is a side, cross-sectional view of another example microelectronic assembly, in accordance with various embodiments. The microelectronic assembly 100 may include a multi-layer die subassembly 104 couple to a die 114-3 at a first surface 170-1 and having integrated backside DTP interconnects 150. As shown in FIG. 4, the multi-layer die subassembly 104 may include a first layer 104-1 having a die 114-1 and conductive pillars 152, and a second layer 104-2 having a die 114-2. In particular, the multi-layer die subassembly 104 may include a first die 114-1 in a first dielectric layer 104-1, a second die 114-2 in a second dielectric layer 104-2 coupled to the first die 114-1 by a first hybrid bonding region 130-1, and a third die 114-3 coupled to a first surface 170-1 of the first dielectric
layer 104-1 by a second hybrid bonding region 130-2 with DTP interconnects 150 at a bottom surface of the die 114-3. The die 114-1 may include a first metallization stack 126 at a first surface 170-1, a substrate layer 120 on the first metallization stack 126, a device layer 124 having devices 125 on the substrate layer 120, a second metallization stack 122 on the device layer (e.g., at a second surface 170-2). In some embodiments, the substrate layer 120 may be omitted. The die 114-3 may be a double-sided die and may include TSVs 121 and/or other conductive pathways (not shown) for coupling to the package substrate 102 and the multi-layer die subassembly 104. The DTP interconnects 150 at the bottom surface of the die 114-3 may be coupled to the first metallization stack 126 in the die 114-1 by conductive pathways (e.g., TSVs 121) in the die 114-3. The die 114-1 may be coupled to the package substrate 102 through the die 114-3 and the backside DTP interconnects 150 and the die 114-2 in the second layer 104-2 may be coupled to the package substrate 102 through the conductive pillars 152 to form ML interconnects and conductive pathways in the die 114-3. In some embodiments, the die 114-2 may be non-functional and may provide mechanical and/or thermal support. In such an embodiment, the first hybrid bonding region 130-1 may not include HB contacts 110. Further, the die 114-3 may be a passive die including pass-through and redistribution routing.[0046] FIG. 5 is a side, cross-sectional view of an example microelectronic assembly, in accordance with various embodiments. The microelectronic assembly 100 may include a multi-layer die subassembly 104 having integrated backside DTP interconnects 150. As shown in FIG. 5, the multi-layer die subassembly 104 may include a first layer 104-1 having a die 114-1 and conductive pillars 152, and a second layer 104-2 having a die 114-2 and a die 114-3. The first layer 104-1 may include a first surface 170-1 and an opposing second surface 170-2. In particular, the multi-layer die subassembly 104 may include a first die 114-1 in a first dielectric layer 104-1, a second die 114-2 in a second dielectric layer 104-2 coupled to the first die 114-1 by a first hybrid bonding region 130-1, and a third die 114-3 in the second dielectric layer 104-2 coupled to the first die 114-1 by a second hybrid bonding region 130-2. The die 114-1 may include a first substrate layer 128 at a first surface 170-1 having TSVs 118, a first metallization stack 126 on the first substrate layer 128, a second substrate layer 120 on the first metallization stack 126, a device layer 124 having devices 125 on the substrate layer 120, a second metallization stack 122 on the device layer (e.g., at a second surface 170-2), and DTP interconnects 150 at the first surface 170-1 coupled to the first metallization stack 126 in the first die 114-1 through the TSVs 118 in the first substrate layer 128. In some embodiments, the TSVs 118 in the first substrate layer 128 may have a pitch between 5 microns and 100 microns. In some embodiments, the second substrate layer 120 may be omitted. The die 114-1 may be coupled to the package substrate 102 through the backside DTP interconnects 150 and the dies 114-2, 114-3 in the second layer 104-2 may be coupled to the package substrate 102 through the ML interconnects.[0047] FIG. 6 is a side, cross-sectional view of an example microelectronic assembly, in accordance with various embodiments. The microelectronic assembly 100 may include a multi-layer die subassembly
104 having integrated backside DTP interconnects 150. As shown in FIG. 6, the multi-layer die subassembly 104 may include a first layer 104-1 having a die 114-1 and conductive pillars 152, and a second layer 104-2 having a die 114-2. The first layer 104-1 may include a first surface 170-1 and an opposing second surface 170-2. In particular, the multi-layer die subassembly 104 may include a first die 114-1 in a first dielectric layer 104-1 and a second die 114-2 in a second dielectric layer 104-2 coupled to the first die 114-1 by a hybrid bonding region 130. The die 114-1 may include a first substrate layer 128 at a first surface 170-1 having TSVs 118, a first metallization stack 126 on the first substrate layer 128, a second substrate layer 120 on the first metallization stack 126, a device layer 124 having devices 125 on the substrate layer 120, a second metallization stack 122 on the device layer (e.g., at a second surface 170-2), and DTP interconnects 150 at the first surface 170-1 coupled to the first metallization stack 126 in the first die 114-1 through the TSVs 118 in the first substrate layer 128. In some embodiments, the second substrate layer 120 may be omitted. The die 114-1 may be coupled to the package substrate 102 through the backside DTP interconnects 150 and the die 114-2 in the second layer 104-2 may be coupled to the package substrate 102 through the ML interconnects.[0048] FIG. 7 is a side, cross-sectional view of an example microelectronic assembly, in accordance with various embodiments. The microelectronic assembly 100 may include a multi-layer die subassembly 104 having integrated backside DTP interconnects 150. As shown in FIG. 7, the multi-layer die subassembly 104 may include a redistribution layer (RDL) 148 having DTP interconnects 150 on a bottom surface, a first layer 104-1 on the top surface of the RDL 148, and a second layer 104-2 on the first layer 104-1. The first layer 104-1 may include a first surface 170-1 and an opposing second surface 170-2. In particular, the multi-layer die subassembly 104 may include a first dielectric layer 104-1, the RDL 148 coupled to the first surface 170-1 of the first layer 104-1, and the second dielectric layer 104-2 coupled to the second surface 170-2 of the first dielectric layer 104-1. The first dielectric layer 104-1 may include a first die 114-1, a second die 114-2, and conductive pillars 152 embedded therein and the second dielectric layer 104-2 may include a third die 114-3 embedded therein coupled to the first die 114-1 and the second die 114-2 by a hybrid bonding region 130. The dies 114-1, 114-2 may include a first substrate layer 128 at a first surface 170-1 having TSVs 118, a first metallization stack 126 on the first substrate layer 128, a second substrate layer 120 on the first metallization stack 126, a device layer 124 having devices 125 on the substrate layer 120, a second metallization stack 122 on the device layer (e.g., at a second surface 170-2). In some embodiments, the second substrate layer 120 may be omitted. The first metallization stacks 126 in the respective dies 114-1, 114-2 may be coupled to the DTP interconnects 150 on the bottom surface of the RDL 148 through the TSVs 118 in the first substrate layer 128 and conductive pathways in the RDL 148. 
The dies 114-1, 114-2 may be coupled to the package substrate 102 through the backside DTP interconnects 150 and the die 114-3 in the second layer 104-2 may be coupled to the package substrate 102 by the DTP interconnects 150 through the conductive pillars 152 of the ML interconnects. Although FIG. 7 shows a particular number and
arrangement of a microelectronic assembly 100 including a plurality of embedded first, second, and third dies 114, and a single RDL 148, a microelectronic assembly 100 may include any number and arrangement of dies 114 and RDLs 148, including two or more RDLs 148 and including an RDL 148 at the second surface 170-2 of the first dielectric layer 104-1.

[0049] Any suitable techniques may be used to manufacture the microelectronic assemblies 100 disclosed herein. For example, FIGS. 8A-8J are side, cross-sectional views of various stages in an example process for manufacturing the microelectronic assembly 100 of FIG. 3, in accordance with various embodiments. Although the operations discussed below with reference to FIGS. 8A-8J (and others of the accompanying drawings representing manufacturing processes) are illustrated in a particular order, these operations may be performed in any suitable order.

[0050] FIG. 8A illustrates an assembly subsequent to placing a first die 114-1 on a first carrier 105-1 with an active surface (e.g., metallization stack 122) facing towards the first carrier 105-1. The first die 114-1 may include an active side metallization stack 122, a device layer 124 having a device 125, and a substrate 120 having a µTSV 123 (e.g., at a backside surface opposing the active surface), where the substrate 120 includes a non-electrical material on and over the µTSV 123. The non-electrical material, which is an inactive portion of the die 114-1, may include silicon, germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, gallium antimonide, further materials classified as group III-V, or an insulating material, such as silicon dioxide (glass), ceramic, or quartz, among other materials. A carrier 105 may include any suitable material, and in some embodiments, may include a semiconductor wafer (e.g., a silicon wafer) or glass (e.g., a glass panel). The first die 114-1 may be attached to the first carrier 105-1 using any suitable technique, including a temporary adhesive layer or a die attach film (DAF).

[0051] FIG. 8B illustrates an assembly subsequent to removing the non-electrical material from the top surface of the substrate 120 and revealing the top surface of the µTSV 123. The non-electrical material may be removed using any suitable technique, including, for example, grinding, or etching, such as reactive ion etching (RIE) or chemical etching. In some embodiments, the top surface of the substrate 120 may be polished to reveal the top surface of the µTSV 123. In some embodiments, when the µTSV 123 is not included in the assembly of FIG. 8A, the µTSV 123 may be formed in the substrate material 120 subsequent to thinning the non-electrical material at the top surface of the substrate 120. In some embodiments, the first die 114-1 may be processed at the wafer level and subsequently singulated.

[0052] FIG. 8C illustrates an assembly subsequent to forming a backside metallization stack 126 on the top surface of the assembly of FIG. 8B, electrically coupling the backside metallization stack 126 and the active side metallization stack 122 by the µTSV 123 in the substrate 120, and forming conductive pads 142 on the top surface of the backside metallization stack 126. The die 114-1 may be functionally tested using the conductive pads 142 or the top metal layer in the stack 126 to determine that the die 114-1 is a KGD before further processing is performed.
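For readers who find it useful to see the FIG. 8A-8C sequence condensed in one place, the following Python sketch models paragraphs [0050]-[0052] as an ordered checklist gated by the known-good-die (KGD) probe of paragraph [0052]. It is purely illustrative and not part of the disclosed embodiments; the class name, step strings, and the probe_known_good_die stand-in are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class BacksideRevealFlow:
    """Illustrative model of the FIG. 8A-8C sequence (names are hypothetical)."""
    steps: list = field(default_factory=lambda: [
        "Place die 114-1 active-side down on carrier 105-1 (FIG. 8A)",
        "Thin non-electrical substrate material to reveal µTSVs 123 (FIG. 8B)",
        "Form backside metallization stack 126 coupled to the µTSVs 123 (FIG. 8C)",
        "Form conductive pads 142 on the metallization stack 126 (FIG. 8C)",
    ])

    def run(self, probe_known_good_die) -> bool:
        # Walk the steps in order, then gate further processing on a KGD probe
        # of the pads 142 / top metal of stack 126, mirroring paragraph [0052].
        for step in self.steps:
            print(step)
        return probe_known_good_die()


if __name__ == "__main__":
    flow = BacksideRevealFlow()
    # Stand-in probe result; a real flow would use actual functional test data.
    proceed = flow.run(probe_known_good_die=lambda: True)
    print("Proceed to FIG. 8D carrier mount:", proceed)
```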
[0053] FIG. 8D illustrates an assembly subsequent to mounting a second carrier 105-2 to the top surface of the assembly of FIG. 8C.[0054] FIG. 8E illustrates an assembly subsequent to inverting the assembly of FIG. 8D and removing the first carrier 105-1.[0055] FIG. 8F illustrates an assembly subsequent to forming an exposed HB interface 180 on a top surface of the assembly of FIG. 8E (e.g., on the active side metallization stack 122), where the HB interface 180 includes HB contacts 110 surrounded by HB dielectric 108.[0056] FIG. 8G illustrates an assembly subsequent to hybrid bonding a second die 114-2 to the top surface of the assembly of FIG. 8F. In particular, HB interface 180 (not labeled) of the second die 114-2 may be brought into contact with the HB interface of the first die 114-1, and heat and/or pressure may be applied to bond the contacting HB interfaces 180 to form HB region 130.[0057] FIG. 8H illustrates an assembly subsequent to inverting the assembly of FIG. 8G and removing the second carrier 105-2.[0058] FIG. 8I illustrates an assembly subsequent to forming conductive pillars 152, depositing an insulating material 133 on and around the first die 114-1 and the conductive pillars 152, and forming conductive contacts 132 for DTP interconnects on the top surfaces of the first die 114-1 and the conductive pillars 152. The conductive pillars 152 may be formed using any suitable technique, for example, a lithographic process or an additive process, such as cold spray or 3-dimensional printing. For example, the conductive pillars 152 may be formed by depositing, exposing, and developing a photoresist layer on the top surface of the die 114-2. The photoresist layer may be patterned to form cavities in the shape of the conductive pillars. Conductive material, such as copper, may be deposited in the openings in the patterned photoresist layer to form the conductive pillars 152. The conductive material may be deposited using any suitable process, such as electroplating, sputtering, or electroless plating. The photoresist may be removed to expose the conductive pillars 152. In another example, a photo-imageable dielectric may be used to form the conductive pillars 152. In some embodiments, the insulating material 133 may be initially deposited on and over the top surfaces of the first die 114-1 and the conductive pillars 152, then polished back to expose the top surface of the first die 114-1 and the conductive pillars 152. The insulating material 133 may be formed using any suitable process, including lamination, or slit coating and curing. If the insulating material 133 is formed to completely cover the first die 114-1 and conductive pillars 152, the insulating material 133 may be removed using any suitable technique, including grinding, or etching, such as a wet etch, a dry etch (e.g., a plasma etch), a wet blast, or a laser ablation (e.g., using excimer laser). In some embodiments, the thickness of the insulating material 133 may be minimized to reduce the etching time required. The die 114-1 and/or the die 114-2 may be functionally tested using the conductive contacts 132 to determine that the dies 114-1, 114-2 are KGDs before further processing is performed.
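Because the pillar plating step above proceeds by electrodeposition of copper, a rough plating-time bound can be obtained from Faraday's law; the sketch below is a back-of-the-envelope estimate only, and the pillar height, current density, and efficiency used are assumed values rather than parameters of this disclosure.

# Back-of-the-envelope estimate (assumed values, not parameters of this disclosure) of the
# time to electroplate copper conductive pillars of a given height, using Faraday's law.
CU_MOLAR_MASS_G_MOL = 63.55
CU_DENSITY_G_CM3 = 8.96
FARADAY_C_MOL = 96485.0
ELECTRONS_PER_CU_ION = 2  # Cu2+ + 2e- -> Cu

def plating_time_hours(height_um, current_density_ma_cm2, efficiency=1.0):
    """Hours to plate `height_um` of copper at `current_density_ma_cm2` (ideal kinetics assumed)."""
    height_cm = height_um * 1e-4
    j_a_cm2 = current_density_ma_cm2 * 1e-3
    # thickness = J * t * M / (n * F * rho)  ->  t = thickness * n * F * rho / (J * M)
    seconds = (height_cm * ELECTRONS_PER_CU_ION * FARADAY_C_MOL * CU_DENSITY_G_CM3
               / (j_a_cm2 * CU_MOLAR_MASS_G_MOL * efficiency))
    return seconds / 3600.0

print(round(plating_time_hours(50, 10), 1))  # ~3.8 hours for an assumed 50 um pillar at 10 mA/cm^2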
[0059] FIG. 8J illustrates an assembly subsequent to inverting the assembly of FIG. 8I. The assembly of FIG. 8J may be a microelectronic assembly 100, as shown, or further manufacturing operations may be performed on the microelectronic assembly 100 of FIG. 8J to form other microelectronic assemblies 100, for example, as shown in FIG. 3. For example, the assembly of FIG. 8J may be electrically coupled to a package substrate through DTP interconnects by printing a solder paste on the conductive contacts 132, placing the assembly of FIG. 8J on a package substrate using a pick-n-place tool, subjecting the solder paste to thermal reflow, and cleaning.[0060] FIGS. 9A-9D are side, cross-sectional views of various stages in an example process for manufacturing the microelectronic assembly of FIG. 1, in accordance with various embodiments. FIG. 9A illustrates the assembly of FIG. 8E subsequent to performing the processes as described above with reference to FIGS. 8A-8E.[0061] FIG. 9B illustrates an assembly subsequent to forming conductive pillars 152 on the second carrier 105-2, depositing an insulating material 133 on and around the first die 114-1 and the conductive pillars 152, and forming an exposed HB interface 180 on a top surface of the insulating material 133, the conductive pillars 152, and the die 114-1 (e.g., on the active side metallization stack 122), where the HB interface 180 includes HB contacts 110 surrounded by HB dielectric 108. The conductive pillars 152 and the insulating material 133 may be formed as described above with reference to FIG. 8I.[0062] FIG. 9C illustrates an assembly subsequent to hybrid bonding a second die 114-2 and a third die 114-3 to the top surface of the assembly of FIG. 9B, and depositing an insulating material 133 on and around the second and third dies 114-2, 114-3. In particular, HB interface 180 (not labeled) of the second die 114-2 and the third die 114-3 may be brought into contact with the HB interface of the first die 114-1 and heat and/or pressure may be applied to bond the contacting HB interfaces 180 to form HB regions 130-1 and 130-2, respectively. The insulating material 133 may be deposited as described above with reference to FIG. 8I. In some embodiments, the insulating material 133 on and around the second and third dies 114-2, 114-3 may be omitted. In such an embodiment, the second and third dies 114-2, 114-3 may be supported by the underlying structure (e.g., the assembly of FIG. 9B). In some embodiments, a mechanical support substrate, such as a permanent carrier (not shown), may be attached to the top surface of the assembly of FIG. 9C (e.g., the top surfaces of the second and third dies 114-2, 114-3) to provide further mechanical support.[0063] FIG. 9D illustrates an assembly subsequent to removing the second carrier 105-2 and forming conductive contacts 132 for DTP interconnects on a bottom surface of the assembly of FIG. 9C. The dies 114-1, 114-2, 114-3 may be functionally tested using the conductive contacts 132 to determine that the dies 114-1, 114-2, 114-3 are KGDs before further processing is performed. The assembly of FIG. 9D may be a microelectronic assembly 100, as shown, or further manufacturing operations may be performed on the microelectronic assembly 100 of FIG. 9D to form other microelectronic assemblies 100, for example, as shown in FIG. 1. For example, the assembly of FIG. 9D may be electrically coupled to a
package substrate through DTP interconnects by printing a solder paste on the conductive contacts 132, placing the assembly of FIG. 9D on a package substrate using a pick-n-place tool, subjecting the solder paste to thermal reflow, and cleaning.[0064] FIGS. 10A-10G are side, cross-sectional views of various stages in an example process for manufacturing the microelectronic assembly of FIG. 5, in accordance with various embodiments. FIG. 10A illustrates the assembly of FIG. 8D subsequent to performing the processes as described above with reference to FIGS. 8A-8D, where the second carrier 105-2 mounted to the top surface includes a substrate 128 and TSVs 118 (e.g., the second carrier 105-2 becomes a permanent part of the microelectronic assembly 100 of FIG. 5).[0065] FIG. 10B illustrates an assembly subsequent to inverting the assembly of FIG. 10A, removing the first carrier 105-1, and forming an exposed HB interface 180 on a top surface of the die 114-1 (e.g., on the active side metallization stack 122), where the HB interface 180 includes HB contacts 110 surrounded by HB dielectric 108.[0066] FIG. 10C illustrates an assembly subsequent to placing a second die 114-2 and a third die 114-3 on a third carrier 105-3 with a backside (e.g., non-active side) facing towards the third carrier 105-3. The top surfaces of the second and third dies 114-2, 114-3 may include an exposed HB interface 180-1, 180-2, respectively, where the HB interface 180 includes HB contacts 110 surrounded by HB dielectric 108. In some embodiments, an insulating material 133 (not shown) may be deposited on and around the second and third dies 114-2, 114-3, as described above with reference to FIG. 8I.[0067] FIG. 10D illustrates an assembly subsequent to hybrid bonding the first die 114-1 (e.g., inverting the assembly of FIG. 10B) to the second die 114-2 and the third die 114-3 (e.g., to the top surface of the assembly of FIG. 10C). In particular, HB interface 180 (not labeled) of the first die 114-1 may be brought into contact with the HB interface of the second die 114-2 and the third die 114-3, and heat and/or pressure may be applied to bond the contacting HB interfaces 180 to form HB regions 130-1 and 130-2, respectively.[0068] FIG. 10E illustrates an assembly subsequent to removing non-electrical material from the backside (e.g., the top surface) of the substrate 128 and revealing the top surface of the TSVs 118. The non-electrical material may be removed using any suitable technique, including, for example, as described above with reference to FIG. 8B.[0069] FIG. 10F illustrates an assembly subsequent to forming conductive pillars 152 on the second and third dies 114-2, 114-3, depositing an insulating material 133 on and around the first die 114-1 and the conductive pillars 152, and forming conductive contacts 132 for DTP interconnects on the top surface of the assembly. The dies 114-1, 114-2, 114-3 may be functionally tested using the conductive contacts 132 to determine that the dies 114-1, 114-2, 114-3 are KGDs before further processing is performed. In some embodiments, the insulating material 133 may be deposited on and around the second and third dies 114-2, 114-3.
[0070] FIG. 10G illustrates an assembly subsequent to inverting the assembly of FIG. 10F and removing the third carrier 105-3. The assembly of FIG. 10G may be a microelectronic assembly 100, as shown, or further manufacturing operations may be performed on the microelectronic assembly 100 of FIG. 10G to form other microelectronic assemblies 100, for example, as shown in FIG. 5. For example, the assembly of FIG. 10G may be electrically coupled to a package substrate through DTP interconnects by printing a solder paste on the conductive contacts 132, placing the assembly of FIG. 10G on a package substrate using a pick-n-place tool, subjecting the solder paste to thermal reflow, and cleaning.[0071] FIGS. 11A-11D are side, cross-sectional views of various stages in an example process for manufacturing the microelectronic assembly of FIG. 7, in accordance with various embodiments.[0072] FIG. 11A is an assembly subsequent to performing the processes described above with reference to FIGS. 10A-10B on first and second dies 114-1, 114-2 and hybrid bonding the first and second dies 114-1, 114-2 to a third die 114-3. In particular, HB interface 180 (not labeled) of the first die 114-1 and the second die 114-2 may be brought into contact with the HB interface of the third die 114-3, and heat and/or pressure may be applied to bond the contacting HB interfaces 180 to form HB regions 130-1 and 130-2, respectively.[0073] FIG. 11B illustrates an assembly subsequent to removing the non-electrical material from the top surface (e.g., backside) of the substrate 128 of the first and second dies 114-1, 114-2 to reveal the top surfaces of the TSVs 118, forming conductive pillars 152 on the top surface of the third die 114-3, and depositing an insulating material 133 on and around the first die 114-1, the second die 114-2, and the conductive pillars 152. In some embodiments, the non-electrical material from the top surface of the substrate 128 of the first and second dies 114-1, 114-2 may be removed with the insulating material 133. The conductive pillars 152 and the insulating material 133 may be formed using any suitable technique, including as described above with reference to FIG. 8I. The insulating material 133 may be removed using any suitable technique, including as described above with reference to FIG. 8I. The non-electrical material of the substrate 128 may be removed using any suitable technique, including as described above with reference to FIG. 8B.[0074] FIG. 11C illustrates an assembly subsequent to forming an RDL 148 on the top surface of the assembly of FIG. 11B. The RDL 148 may include conductive contacts on a bottom surface coupled to the first and second dies 114-1, 114-2 by the TSVs 118 in the substrate 128, and conductive contacts 132 on the top surface for coupling to a package substrate by DTP interconnects. The RDL 148 may be manufactured using any suitable technique, such as a PCB technique, a redistribution layer technique, or damascene processing.[0075] FIG. 11D illustrates an assembly subsequent to inverting the assembly of FIG. 11C. The assembly of FIG. 11D may itself be a microelectronic assembly 100, as shown. Further manufacturing operations may be performed on the microelectronic assembly 100 of FIG. 11D to form other microelectronic assemblies 100, such as shown in FIG. 7. For example, further processing may include
depositing a solder resist layer, attaching solder balls, and electrically coupling a package substrate 102 to the bottom surface of the assembly of FIG. 11D through DTP interconnects 150. The dies 114-1, 114-2, 114-3 may be functionally tested using the conductive contacts 132 to determine that the dies 114-1, 114-2, 114-3 are KGDs before further processing is performed.[0076] The microelectronic assemblies 100 disclosed herein may be used for any suitable application. For example, in some embodiments, a microelectronic assembly 100 may be used to enable very small form factor voltage regulation for field programmable gate arrays (FPGAs) or processing units (e.g., a central processing unit, a graphics processing unit, an FPGA, a modem, an applications processor, etc.), especially in mobile devices and small form factor devices. In another example, the die 114 in a microelectronic assembly 100 may be a processing device (e.g., a central processing unit, a graphics processing unit, an FPGA, a modem, an applications processor, etc.).[0077] The microelectronic assemblies 100 disclosed herein may be included in any suitable electronic component. FIGS. 12-15 illustrate various examples of apparatuses that may include, or be included in, any of the microelectronic assemblies 100 disclosed herein.[0078] FIG. 12 is a top view of a wafer 1500 and dies 1502 that may be included in any of the microelectronic assemblies 100 disclosed herein (e.g., as any suitable ones of the dies 114). The wafer 1500 may be composed of semiconductor material and may include one or more dies 1502 having IC structures formed on a surface of the wafer 1500. Each of the dies 1502 may be a repeating unit of a semiconductor product that includes any suitable IC. After the fabrication of the semiconductor product is complete, the wafer 1500 may undergo a singulation process in which the dies 1502 are separated from one another to provide discrete "chips" of the semiconductor product. The die 1502 may be any of the dies 114 disclosed herein. The die 1502 may include one or more transistors (e.g., some of the transistors 1640 of FIG. 13, discussed below), supporting circuitry to route electrical signals to the transistors, passive components (e.g., signal traces, resistors, capacitors, or inductors), and/or any other IC components. In some embodiments, the wafer 1500 or the die 1502 may include a memory device (e.g., a random access memory (RAM) device, such as a static RAM (SRAM) device, a magnetic RAM (MRAM) device, a resistive RAM (RRAM) device, a conductive-bridging RAM (CBRAM) device, etc.), a logic device (e.g., an AND, OR, NAND, or NOR gate), or any other suitable circuit element. Multiple ones of these devices may be combined on a single die 1502. For example, a memory array formed by multiple memory devices may be formed on a same die 1502 as a processing device (e.g., the processing device 1802 of FIG. 15) or other logic that is configured to store information in the memory devices or execute instructions stored in the memory array. In some embodiments, a die 1502 (e.g., a die 114) may be a central processing unit, a radio frequency chip, a power converter, or a network processor. Various ones of the microelectronic assemblies 100 disclosed herein may be manufactured using a die-to-wafer assembly technique in which some dies 114 are attached to a wafer 1500 that includes others of the dies 114, and the wafer 1500 is subsequently singulated.
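For the singulation and die-to-wafer assembly described above, a commonly used first-order estimate of how many gross dies 1502 fit on a wafer 1500 is sketched below; the approximation is a general one rather than a figure from this disclosure, and the wafer diameter and die dimensions are assumed example values.

import math

# Common first-order gross-die-per-wafer approximation; the second (perimeter) term roughly
# accounts for partial edge dies. All numeric inputs are assumed example values.
def gross_dies_per_wafer(wafer_diameter_mm, die_width_mm, die_height_mm):
    area = die_width_mm * die_height_mm
    d = wafer_diameter_mm
    estimate = (math.pi * d * d) / (4.0 * area) - (math.pi * d) / math.sqrt(2.0 * area)
    return max(0, int(estimate))

print(gross_dies_per_wafer(300, 10, 10))  # roughly 640 gross dies for a 10 mm x 10 mm die on a 300 mm wafer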
[0079] FIG. 13 is a cross-sectional side view of an IC device 1600 that may be included in any of the microelectronic assemblies 100 disclosed herein (e.g., in any of the dies 114). One or more of the IC devices 1600 may be included in one or more dies 1502 (FIG. 12). The IC device 1600 may be formed on a die substrate 1602 (e.g., the wafer 1500 of FIG. 12) and may be included in a die (e.g., the die 1502 of FIG. 12). The die substrate 1602 may be a semiconductor substrate composed of semiconductor material systems including, for example, n-type or p-type materials systems (or a combination of both). The die substrate 1602 may include, for example, a crystalline substrate formed using a bulk silicon or a silicon-on-insulator (SOI) substructure. In some embodiments, the die substrate 1602 may be formed using alternative materials, which may or may not be combined with silicon, that include, but are not limited to, germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, or gallium antimonide. Further materials classified as group II-VI, III-V, or IV may also be used to form the die substrate 1602. Although a few examples of materials from which the die substrate 1602 may be formed are described here, any material that may serve as a foundation for an IC device 1600 may be used. The die substrate 1602 may be part of a singulated die (e.g., the dies 1502 of FIG. 12) or a wafer (e.g., the wafer 1500 of FIG. 12).[0080] The IC device 1600 may include one or more device layers 1604 disposed on the die substrate 1602. The device layer 1604 may include features of one or more transistors 1640 (e.g., metal oxide semiconductor field-effect transistors (MOSFETs)) formed on the die substrate 1602. The device layer 1604 may include, for example, one or more source and/or drain (S/D) regions 1620, a gate 1622 to control current flow in the transistors 1640 between the S/D regions 1620, and one or more S/D contacts 1624 to route electrical signals to/from the S/D regions 1620. The transistors 1640 may include additional features not depicted for the sake of clarity, such as device isolation regions, gate contacts, and the like. The transistors 1640 are not limited to the type and configuration depicted in FIG. 13 and may include a wide variety of other types and configurations such as, for example, planar transistors, non-planar transistors, or a combination of both. Non-planar transistors may include FinFET transistors, such as double-gate transistors or tri-gate transistors, and wrap-around or all-around gate transistors, such as nanoribbon and nanowire transistors.[0081] Each transistor 1640 may include a gate 1622 formed of at least two layers, a gate dielectric and a gate electrode. The gate dielectric may include one layer or a stack of layers. The one or more layers may include silicon oxide, silicon dioxide, silicon carbide, and/or a high-k dielectric material. The high-k dielectric material may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide,
and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric to improve its quality when a high-k material is used.[0082] The gate electrode may be formed on the gate dielectric and may include at least one p-type work function metal or n-type work function metal, depending on whether the transistor 1640 is to be a PMOS or a NMOS transistor. In some implementations, the gate electrode may consist of a stack of two or more metal layers, where one or more metal layers are work function metal layers and at least one metal layer is a fill metal layer. Further metal layers may be included for other purposes, such as a barrier layer. For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, conductive metal oxides (e.g., ruthenium oxide), and any of the metals discussed below with reference to an NMOS transistor (e.g., for work function tuning). For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, carbides of these metals (e.g., hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide), and any of the metals discussed above with reference to a PMOS transistor (e.g., for work function tuning).[0083] In some embodiments, when viewed as a cross-section of the transistor 1640 along the source- channel-drain direction, the gate electrode may consist of a U-shaped structure that includes a bottom portion substantially parallel to the surface of the die substrate 1602 and two sidewall portions that are substantially perpendicular to the top surface of the die substrate 1602. In other embodiments, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the die substrate 1602 and does not include sidewall portions substantially perpendicular to the top surface of the die substrate 1602. In other embodiments, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.[0084] In some embodiments, a pair of sidewall spacers may be formed on opposing sides of the gate stack to bracket the gate stack. The sidewall spacers may be formed from materials such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, and silicon oxynitride. Processes for forming sidewall spacers are well known in the art and generally include deposition and etching process steps. In some embodiments, a plurality of spacer pairs may be used; for instance, two pairs, three pairs, or four pairs of sidewall spacers may be formed on opposing sides of the gate stack.[0085] The S/D regions 1620 may be formed within the die substrate 1602 adjacent to the gate 1622 of each transistor 1640. The S/D regions 1620 may be formed using an implantation/diffusion process or an etching/deposition process, for example. In the former process, dopants such as boron, aluminum, antimony, phosphorous, or arsenic may be ion-implanted into the die substrate 1602 to form the S/D regions 1620. An annealing process that activates the dopants and causes them to diffuse farther into
the die substrate 1602 may follow the ion-implantation process. In the latter process, the die substrate 1602 may first be etched to form recesses at the locations of the S/D regions 1620. An epitaxial deposition process may then be carried out to fill the recesses with material that is used to fabricate the S/D regions 1620. In some implementations, the S/D regions 1620 may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some embodiments, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorous. In some embodiments, the S/D regions 1620 may be formed using one or more alternate semiconductor materials such as germanium or a group III-V material or alloy. In further embodiments, one or more layers of metal and/or metal alloys may be used to form the S/D regions 1620.[0086] Electrical signals, such as power and/or input/output (I/O) signals, may be routed to and/or from the devices (e.g., transistors 1640) of the device layer 1604 through one or more interconnect layers disposed on the device layer 1604 (illustrated in FIG. 13 as interconnect layers 1606-1610). For example, electrically conductive features of the device layer 1604 (e.g., the gate 1622 and the S/D contacts 1624) may be electrically coupled with the interconnect structures 1628 of the interconnect layers 1606-1610. The one or more interconnect layers 1606-1610 may form a metallization stack (also referred to as an "ILD stack") 1619 of the IC device 1600.[0087] The interconnect structures 1628 may be arranged within the interconnect layers 1606-1610 to route electrical signals according to a wide variety of designs; in particular, the arrangement is not limited to the particular configuration of interconnect structures 1628 depicted in FIG. 13. Although a particular number of interconnect layers 1606-1610 is depicted in FIG. 13, embodiments of the present disclosure include IC devices having more or fewer interconnect layers than depicted.[0088] In some embodiments, the interconnect structures 1628 may include lines 1628a and/or vias 1628b filled with an electrically conductive material such as a metal. The lines 1628a may be arranged to route electrical signals in a direction of a plane that is substantially parallel with a surface of the die substrate 1602 upon which the device layer 1604 is formed. For example, the lines 1628a may route electrical signals in a direction in and out of the page from the perspective of FIG. 13. The vias 1628b may be arranged to route electrical signals in a direction of a plane that is substantially perpendicular to the surface of the die substrate 1602 upon which the device layer 1604 is formed. In some embodiments, the vias 1628b may electrically couple lines 1628a of different interconnect layers 1606-1610 together.[0089] The interconnect layers 1606-1610 may include a dielectric material 1626 disposed between the interconnect structures 1628, as shown in FIG. 13. In some embodiments, the dielectric material 1626 disposed between the interconnect structures 1628 in different ones of the interconnect layers 1606-1610 may have different compositions; in other embodiments, the composition of the dielectric material 1626 between different interconnect layers 1606-1610 may be the same.
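The relationship among the interconnect layers 1606-1610, the lines 1628a, the vias 1628b, and the dielectric material 1626 described above can be captured in a small data model; the sketch below is illustrative only, and its field names and example values are assumptions rather than elements of the IC device 1600.

from dataclasses import dataclass, field
from typing import List

# Illustrative data model of a metallization stack such as the stack 1619 described above:
# each layer holds in-plane lines, vias down to the layer below, and a surrounding dielectric.
@dataclass
class InterconnectLayer:
    name: str          # e.g., the "Metal 1" through "Metal 3" layers described below
    dielectric: str    # dielectric material between the interconnect structures
    line_count: int    # lines routing parallel to the substrate surface
    via_count: int     # vias routing perpendicular to the substrate surface

@dataclass
class MetallizationStack:
    layers: List[InterconnectLayer] = field(default_factory=list)

    def add_layer(self, layer: InterconnectLayer) -> None:
        self.layers.append(layer)  # layers are stacked bottom-up, starting on the device layer

stack = MetallizationStack()
stack.add_layer(InterconnectLayer("Metal 1", "low-k oxide", line_count=128, via_count=64))
stack.add_layer(InterconnectLayer("Metal 2", "low-k oxide", line_count=96, via_count=96))
stack.add_layer(InterconnectLayer("Metal 3", "low-k oxide", line_count=64, via_count=64))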
[0090] A first interconnect layer 1606 (referred to as Metal 1 or "M1") may be formed directly on the device layer 1604. In some embodiments, the first interconnect layer 1606 may include lines 1628a and/or vias 1628b, as shown. The lines 1628a of the first interconnect layer 1606 may be coupled with contacts (e.g., the S/D contacts 1624) of the device layer 1604.[0091] A second interconnect layer 1608 (referred to as Metal 2 or "M2") may be formed directly on the first interconnect layer 1606. In some embodiments, the second interconnect layer 1608 may include vias 1628b to couple the lines 1628a of the second interconnect layer 1608 with the lines 1628a of the first interconnect layer 1606. Although the lines 1628a and the vias 1628b are structurally delineated with a line within each interconnect layer (e.g., within the second interconnect layer 1608) for the sake of clarity, the lines 1628a and the vias 1628b may be structurally and/or materially contiguous (e.g., simultaneously filled during a dual damascene process) in some embodiments.[0092] A third interconnect layer 1610 (referred to as Metal 3 or "M3") (and additional interconnect layers, as desired) may be formed in succession on the second interconnect layer 1608 according to similar techniques and configurations described in connection with the second interconnect layer 1608 or the first interconnect layer 1606. In some embodiments, the interconnect layers that are "higher up" in the metallization stack 1619 in the IC device 1600 (i.e., farther away from the device layer 1604) may be thicker.[0093] The IC device 1600 may include a solder resist material 1634 (e.g., polyimide or similar material) and one or more conductive contacts 1636 formed on the interconnect layers 1606-1610. In FIG. 13, the conductive contacts 1636 are illustrated as taking the form of bond pads. The conductive contacts 1636 may be electrically coupled with the interconnect structures 1628 and configured to route the electrical signals of the transistor(s) 1640 to other external devices. For example, solder bonds may be formed on the one or more conductive contacts 1636 to mechanically and/or electrically couple a chip including the IC device 1600 with another component (e.g., a circuit board). The IC device 1600 may include additional or alternate structures to route the electrical signals from the interconnect layers 1606-1610; for example, the conductive contacts 1636 may include other analogous features (e.g., posts) that route the electrical signals to external components.[0094] In some embodiments in which the IC device 1600 is a double-sided die (e.g., like the die 114-1), the IC device 1600 may include another metallization stack (not shown) on the opposite side of the device layer(s) 1604. This metallization stack may include multiple interconnect layers as discussed above with reference to the interconnect layers 1606-1610, to provide conductive pathways (e.g., including conductive lines and vias) between the device layer(s) 1604 and additional conductive contacts (not shown) on the opposite side of the IC device 1600 from the conductive contacts 1636.[0095] In other embodiments in which the IC device 1600 is a double-sided die (e.g., like the die 114-1), the IC device 1600 may include one or more TSVs through the die substrate 1602; these TSVs may make contact with the device layer(s) 1604, and may provide conductive pathways between the device
layer(s) 1604 and additional conductive contacts (not shown) on the opposite side of the IC device 1600 from the conductive contacts 1636.[0096] FIG. 14 is a cross-sectional side view of an IC device assembly 1700 that may include any of the microelectronic assemblies 100 disclosed herein. In some embodiments, the IC device assembly 1700 may be a microelectronic assembly 100. The IC device assembly 1700 includes a number of components disposed on a circuit board 1702 (which may be, e.g., a motherboard). The IC device assembly 1700 includes components disposed on a first face 1740 of the circuit board 1702 and an opposing second face 1742 of the circuit board 1702; generally, components may be disposed on one or both faces 1740 and 1742. Any of the IC packages discussed below with reference to the IC device assembly 1700 may take the form of any suitable ones of the embodiments of the microelectronic assemblies 100 disclosed herein.[0097] In some embodiments, the circuit board 1702 may be a PCB including multiple metal layers separated from one another by layers of dielectric material and interconnected by electrically conductive vias. Any one or more of the metal layers may be formed in a desired circuit pattern to route electrical signals (optionally in conjunction with other metal layers) between the components coupled to the circuit board 1702. In other embodiments, the circuit board 1702 may be a non-PCB substrate.[0098] The IC device assembly 1700 illustrated in FIG. 14 includes a package-on-interposer structure 1736 coupled to the first face 1740 of the circuit board 1702 by coupling components 1716. The coupling components 1716 may electrically and mechanically couple the package-on-interposer structure 1736 to the circuit board 1702, and may include solder balls (as shown in FIG. 14), male and female portions of a socket, an adhesive, an underfill material, and/or any other suitable electrical and/or mechanical coupling structure.[0099] The package-on-interposer structure 1736 may include an IC package 1720 coupled to an interposer 1704 by coupling components 1718. The coupling components 1718 may take any suitable form for the application, such as the forms discussed above with reference to the coupling components 1716. Although a single IC package 1720 is shown in FIG. 14, multiple IC packages may be coupled to the interposer 1704; indeed, additional interposers may be coupled to the interposer 1704. The interposer 1704 may provide an intervening substrate used to bridge the circuit board 1702 and the IC package 1720. The IC package 1720 may be or include, for example, a die (the die 1502 of FIG. 12), an IC device (e.g., the IC device 1600 of FIG. 13), or any other suitable component. Generally, the interposer 1704 may spread a connection to a wider pitch or reroute a connection to a different connection. For example, the interposer 1704 may couple the IC package 1720 (e.g., a die) to a set of ball grid array (BGA) conductive contacts of the coupling components 1716 for coupling to the circuit board 1702. In the embodiment illustrated in FIG. 14, the IC package 1720 and the circuit board 1702 are attached to opposing sides of the interposer 1704; in other embodiments, the IC package 1720 and the circuit board
1702 may be attached to a same side of the interposer 1704. In some embodiments, three or more components may be interconnected by way of the interposer 1704.[0100] In some embodiments, the interposer 1704 may be formed as a PCB, including multiple metal layers separated from one another by layers of dielectric material and interconnected by electrically conductive vias. In some embodiments, the interposer 1704 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, an epoxy resin with inorganic fillers, a ceramic material, or a polymer material such as polyimide. In some embodiments, the interposer 1704 may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials. The interposer 1704 may include metal interconnects 1708 and vias 1710, including but not limited to TSVs 1706. The interposer 1704 may further include embedded devices 1714, including both passive and active devices. Such devices may include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, electrostatic discharge (ESD) devices, and memory devices. More complex devices such as radio frequency devices, power amplifiers, power management devices, antennas, arrays, sensors, and microelectromechanical systems (MEMS) devices may also be formed on the interposer 1704. The package-on-interposer structure 1736 may take the form of any of the package-on-interposer structures known in the art.[0101] The IC device assembly 1700 may include an IC package 1724 coupled to the first face 1740 of the circuit board 1702 by coupling components 1722. The coupling components 1722 may take the form of any of the embodiments discussed above with reference to the coupling components 1716, and the IC package 1724 may take the form of any of the embodiments discussed above with reference to the IC package 1720.[0102] The IC device assembly 1700 illustrated in FIG. 14 includes a package-on-package structure 1734 coupled to the second face 1742 of the circuit board 1702 by coupling components 1728. The package-on-package structure 1734 may include an IC package 1726 and an IC package 1732 coupled together by coupling components 1730 such that the IC package 1726 is disposed between the circuit board 1702 and the IC package 1732. The coupling components 1728 and 1730 may take the form of any of the embodiments of the coupling components 1716 discussed above, and the IC packages 1726 and 1732 may take the form of any of the embodiments of the IC package 1720 discussed above. The package-on-package structure 1734 may be configured in accordance with any of the package-on-package structures known in the art.[0103] FIG. 15 is a block diagram of an example electrical device 1800 that may include one or more of the microelectronic assemblies 100 disclosed herein. For example, any suitable ones of the components of the electrical device 1800 may include one or more of the IC device assemblies 1700, IC devices 1600, or dies 1502 disclosed herein, and may be arranged in any of the microelectronic assemblies 100 disclosed herein. A number of components are illustrated in FIG. 15 as included in the electrical device
1800, but any one or more of these components may be omitted or duplicated, as suitable for the application. In some embodiments, some or all of the components included in the electrical device 1800 may be attached to one or more motherboards. In some embodiments, some or all of these components are fabricated onto a single system-on-a-chip (SoC) die.[0104] Additionally, in various embodiments, the electrical device 1800 may not include one or more of the components illustrated in FIG. 15, but the electrical device 1800 may include interface circuitry for coupling to the one or more components. For example, the electrical device 1800 may not include a display device 1806, but may include display device interface circuitry (e.g., a connector and driver circuitry) to which a display device 1806 may be coupled. In another set of examples, the electrical device 1800 may not include an audio input device 1824 or an audio output device 1808, but may include audio input or output device interface circuitry (e.g., connectors and supporting circuitry) to which an audio input device 1824 or audio output device 1808 may be coupled.[0105] The electrical device 1800 may include a processing device 1802 (e.g., one or more processing devices). As used herein, the term "processing device" or "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The processing device 1802 may include one or more digital signal processors (DSPs), application-specific ICs (ASICs), central processing units (CPUs), graphics processing units (GPUs), cryptoprocessors (specialized processors that execute cryptographic algorithms within hardware), server processors, or any other suitable processing devices. The electrical device 1800 may include a memory 1804, which may itself include one or more memory devices such as volatile memory (e.g., dynamic random access memory (DRAM)), nonvolatile memory (e.g., read-only memory (ROM)), flash memory, solid state memory, and/or a hard drive. In some embodiments, the memory 1804 may include memory that shares a die with the processing device 1802. This memory may be used as cache memory and may include embedded dynamic random access memory (eDRAM) or spin transfer torque magnetic random access memory (STT-MRAM).[0106] In some embodiments, the electrical device 1800 may include a communication chip 1812 (e.g., one or more communication chips). For example, the communication chip 1812 may be configured for managing wireless communications for the transfer of data to and from the electrical device 1800. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a nonsolid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.[0107] The communication chip 1812 may implement any of a number of wireless standards or protocols, including but not limited to Institute for Electrical and Electronic Engineers (IEEE) standards including Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., IEEE 802.16-2005 Amendment), Long-
Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., advanced LTE project, ultra mobile broadband (UMB) project (also referred to as "3GPP2"), etc.). IEEE 802.16 compatible Broadband Wireless Access (BWA) networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards. The communication chip 1812 may operate in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. The communication chip 1812 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 1812 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), and derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The communication chip 1812 may operate in accordance with other wireless protocols in other embodiments. The electrical device 1800 may include an antenna 1822 to facilitate wireless communications and/or to receive other wireless communications (such as AM or FM radio transmissions).[0108] In some embodiments, the communication chip 1812 may manage wired communications, such as electrical, optical, or any other suitable communication protocols (e.g., the Ethernet). As noted above, the communication chip 1812 may include multiple communication chips. For instance, a first communication chip 1812 may be dedicated to shorter-range wireless communications such as Wi-Fi or Bluetooth, and a second communication chip 1812 may be dedicated to longer-range wireless communications such as global positioning system (GPS), EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, or others. In some embodiments, a first communication chip 1812 may be dedicated to wireless communications, and a second communication chip 1812 may be dedicated to wired communications. [0109] The electrical device 1800 may include battery/power circuitry 1814. The battery/power circuitry 1814 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of the electrical device 1800 to an energy source separate from the electrical device 1800 (e.g., AC line power).[0110] The electrical device 1800 may include a display device 1806 (or corresponding interface circuitry, as discussed above). The display device 1806 may include any visual indicators, such as a heads-up display, a computer monitor, a projector, a touchscreen display, a liquid crystal display (LCD), a light-emitting diode display, or a flat panel display.[0111] The electrical device 1800 may include an audio output device 1808 (or corresponding interface circuitry, as discussed above). The audio output device 1808 may include any device that generates an audible indicator, such as speakers, headsets, or earbuds.
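The split described above, in which one communication chip 1812 handles shorter-range protocols and another handles longer-range ones, can be expressed as a small lookup; the sketch below is illustrative only, and its structure, function name, and return strings are assumptions rather than part of the electrical device 1800.

# Illustrative lookup reflecting the shorter-range/longer-range split described above;
# the protocol lists follow the description, while the helper itself is an assumption.
SHORTER_RANGE = {"Wi-Fi", "Bluetooth"}
LONGER_RANGE = {"GPS", "EDGE", "GPRS", "CDMA", "WiMAX", "LTE", "EV-DO"}

def chip_for_protocol(protocol: str) -> str:
    """Return which communication chip 1812 would manage the given protocol in this example split."""
    if protocol in SHORTER_RANGE:
        return "first communication chip 1812 (shorter-range)"
    if protocol in LONGER_RANGE:
        return "second communication chip 1812 (longer-range)"
    return "not handled in this example"

print(chip_for_protocol("LTE"))        # second communication chip 1812 (longer-range)
print(chip_for_protocol("Bluetooth"))  # first communication chip 1812 (shorter-range)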
[0112] The electrical device 1800 may include an audio input device 1824 (or corresponding interface circuitry, as discussed above). The audio input device 1824 may include any device that generates a signal representative of a sound, such as microphones, microphone arrays, or digital instruments (e.g., instruments having a musical instrument digital interface (MIDI) output).[0113] The electrical device 1800 may include a GPS device 1818 (or corresponding interface circuitry, as discussed above). The GPS device 1818 may be in communication with a satellite-based system and may receive a location of the electrical device 1800, as known in the art.[0114] The electrical device 1800 may include an other output device 1810 (or corresponding interface circuitry, as discussed above). Examples of the other output device 1810 may include an audio codec, a video codec, a printer, a wired or wireless transmitter for providing information to other devices, or an additional storage device.[0115] The electrical device 1800 may include an other input device 1820 (or corresponding interface circuitry, as discussed above). Examples of the other input device 1820 may include an accelerometer, a gyroscope, a compass, an image capture device, a keyboard, a cursor control device such as a mouse, a stylus, a touchpad, a bar code reader, a Quick Response (QR) code reader, any sensor, or a radio frequency identification (RFID) reader.[0116] The electrical device 1800 may have any desired form factor, such as a computing device or a hand-held, portable or mobile computing device (e.g., a cell phone, a smart phone, a mobile internet device, a music player, a tablet computer, a laptop computer, a netbook computer, an ultrabook computer, a personal digital assistant (PDA), an ultra mobile personal computer, etc.), a desktop electrical device, a server, or other networked computing component, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a vehicle control unit, a digital camera, a digital video recorder, or a wearable computing device. In some embodiments, the electrical device 1800 may be any other electronic device that processes data.[0117] The following paragraphs provide various examples of the embodiments disclosed herein. [0118] Example 1 is a microelectronic assembly, including a first die, having a first surface and an opposing second surface, in a first layer, the first die including a first metallization stack at the first surface; a device layer, including a device, on the first metallization stack; a second metallization stack on the device layer; and an interconnect at the first surface electrically coupled to the first metallization stack; a conductive pillar in the first layer; and a second die, having a first surface and an opposing second surface, in a second layer on the first layer, wherein the first surface of the second die is coupled to the second surface of the first die by a hybrid bonding region and is coupled to the conductive pillar.[0119] Example 2 may include the subject matter of Example 1, and may further specify that the hybrid bonding region is a first hybrid bonding region, and may further include a third die, having a first surface and an opposing second surface, in the second layer, wherein the first surface of the third die is coupled to the second surface of the first die by a second hybrid bonding region.
[0120] Example 3 may include the subject matter of Example 1 or 2, and may further specify that the interconnect is part of a power delivery network.[0121] Example 4 may include the subject matter of any of Examples 1-3, and may further include a substrate layer, including a micro-through silicon via (µTSV), between the first metallization stack and the device layer.[0122] Example 5 may include the subject matter of Example 4, and may further specify that the µTSV electrically couples the device in the device layer to the first metallization stack.[0123] Example 6 may include the subject matter of any of the Examples 1-5, and may further include a package substrate electrically coupled to the first surface of the first die by the interconnect and electrically coupled to the first surface of the second die by the conductive pillar.[0124] Example 7 may include the subject matter of any of Examples 1-6, and may further specify that the interconnect is a first interconnect, and may further include a third die, having a first surface with a second interconnect and an opposing second surface, electrically coupled to the first surface of the first die by the first interconnect and electrically coupled to the first surface of the second die by the conductive pillar.[0125] Example 8 may include the subject matter of Example 7, and may further specify that the second interconnect is part of a power delivery network.[0126] Example 9 may include the subject matter of Example 7, and may further include a package substrate electrically coupled to the first surface of the third die by the second interconnect.[0127] Example 10 may include the subject matter of any of Examples 1-9, and may further specify that conductive structures of the first metallization stack are thicker than conductive structures of the second metallization stack.[0128] Example 11 is a microelectronic assembly, including a first die, having a first surface and an opposing second surface, in a first layer, the first die including a substrate including a through-substrate via (TSV) at the first surface; a first metallization stack on the substrate; a device layer, including a device, on the first metallization stack; a second metallization stack on the device layer; and an interconnect at the first surface electrically coupled to the first metallization stack by the TSV in the substrate; a conductive pillar in the first layer; and a second die, having a first surface and an opposing second surface, in a second layer on the first layer, wherein the first surface of the second die is coupled to the conductive pillar and to the second surface of the first die by a hybrid bonding region.[0129] Example 12 may include the subject matter of Example 11, and may further specify that the hybrid bonding region is a first hybrid bonding region, and may further include a third die, having a first surface and an opposing second surface, in the second layer, wherein the first surface of the third die is coupled to the second surface of the first die by a second hybrid bonding region.[0130] Example 13 may include the subject matter of Example 11 or 12, and may further specify that the interconnect is part of a power delivery network.
[0131] Example 14 may include the subject matter of any of Examples 11-13, and may further specify that the substrate is a first substrate, and may further include a second substrate, including a micro-through silicon via (µTSV), between the first metallization stack and the device layer.[0132] Example 15 may include the subject matter of Example 14, and may further specify that the µTSV electrically couples the device in the device layer to the first metallization stack.[0133] Example 16 may include the subject matter of any of Examples 11-15, and may further include a package substrate electrically coupled to the first surface of the first die by the interconnect and electrically coupled to the first surface of the second die by the conductive pillar.[0134] Example 17 may include the subject matter of any of Examples 11-16, and may further specify that the interconnect is a first interconnect, and may further include a third die, having a first surface with a second interconnect and an opposing second surface, electrically coupled to the first surface of the first die by the first interconnect and electrically coupled to the first surface of the second die by the conductive pillar.[0135] Example 18 may include the subject matter of Example 17, and may further specify that the second interconnect is part of a power delivery network.[0136] Example 19 may include the subject matter of Example 17, and may further include a package substrate electrically coupled to the first surface of the third die by the second interconnect.[0137] Example 20 may include the subject matter of any of Examples 11-19, and may further specify that conductive structures of the first metallization stack are thicker than conductive structures of the second metallization stack.[0138] Example 21 is a microelectronic assembly, including a first die in a first dielectric layer, the first dielectric layer having a first surface and an opposing second surface, and the first die including a substrate including a through-substrate via (TSV) at the first surface; a first metallization stack on the substrate; a device layer, including a device, on the first metallization stack; a second metallization stack on the device layer; and first interconnects at the first surface electrically coupled to the first metallization stack by the TSV in the substrate; a second die in the first dielectric layer, the second die including a substrate including a through-substrate via (TSV) at the first surface; a first metallization stack on the substrate; a device layer, including a device, on the first metallization stack; a second metallization stack on the device layer; and second interconnects at the first surface electrically coupled to the first metallization stack by the TSV in the substrate; a conductive pillar in the first dielectric layer; a third die, in a second dielectric layer on the second surface of the first dielectric layer, electrically coupled to the conductive pillar, electrically coupled to the first die by a first hybrid bonding region at the second surface of the first dielectric layer, and electrically coupled to the second die by a second hybrid bonding region at the second surface of the first dielectric layer; and a redistribution layer (RDL), having a first surface and an opposing second surface, at the first surface of the first dielectric layer, wherein the second surface of the RDL is electrically coupled to the first surface of the first dielectric
layer, and wherein the first surface of the RDL includes third interconnects electrically coupled to the conductive pillar, the first interconnects, and the second interconnects by conductive pathways in the RDL.[0139] Example 22 may include the subject matter of Example 21, and may further specify that the first interconnects, the second interconnects, and the third interconnects are part of a power delivery network.[0140] Example 23 may include the subject matter of Example 21 or 22, and may further specify that the substrate of the first die is a first substrate, and may further include a second substrate, including a micro-through silicon via (µTSV), between the first metallization stack and the device layer, wherein the µTSV electrically couples the device in the device layer to the first metallization stack.[0141] Example 24 may include the subject matter of any of Examples 21-23, and may further specify that the substrate of the second die is a first substrate, and may further include a second substrate, including a µTSV, between the first metallization stack and the device layer, wherein the µTSV electrically couples the device in the device layer to the first metallization stack.[0142] Example 25 may include the subject matter of any of Examples 21-24, and may further include a package substrate electrically coupled to the first surface of the RDL by the third interconnects.
The invention relates to input circuitry for an inter-integrated circuit system. The inter-integrated circuit input circuitry includes a pull-up current circuit and an input circuit (400). The input circuit (400) includes an output inverter (407), an input inverter (405), and a pull-up circuit (411). The pull-up circuit (411) is coupled to an input (412G) of the input inverter (405), and includes a pull-up transistor (404) and a cascode transistor (402). The pull-up transistor (404) is coupled to the input (412G) of the input inverter (405). The cascode transistor (402) is coupled to the pull-up current circuit and the pull-up transistor (404), and is configured to isolate the pull-up transistor (404) from capacitance of a conductor coupled to the pull-up current circuit and the input circuit (400).
1. An inter-integrated circuit input circuit, comprising: a pull-up current circuit; and an input circuit including: a signal input terminal; a signal output terminal; an output transistor including: a first terminal coupled to the signal output terminal; and a second terminal coupled to a power rail; a first transistor including: a first terminal coupled to the signal input terminal; and a second terminal coupled to a first bias voltage source; a second transistor including: a first terminal coupled to a third terminal of the first transistor; and a second terminal coupled to a third terminal of the output transistor; and a cascode transistor including: a first terminal coupled to a third terminal of the second transistor; a second terminal coupled to a second bias voltage source; and a third terminal coupled to an output terminal of the pull-up current circuit.
2. The inter-integrated circuit input circuit of claim 1, further comprising: a first conductor provided between the output terminal of the pull-up current circuit and the third terminal of the cascode transistor; and a second conductor provided between the second terminal of the second transistor and the first terminal of the cascode transistor; wherein the first conductor is longer than the second conductor.
3. The inter-integrated circuit input circuit of claim 1, wherein: the output transistor is a first output transistor; and the input circuit includes a second output transistor including: a first terminal coupled to the signal output terminal; a second terminal coupled to a ground rail; and a third terminal coupled to the third terminal of the first output transistor.
4. The inter-integrated circuit input circuit of claim 1, wherein the input circuit comprises: a third transistor including: a first terminal coupled to the power rail; and a second terminal coupled to the third terminal of the first transistor; and a fourth transistor including: a first terminal coupled to a third terminal of the third transistor; a second terminal coupled to the second terminal of the third transistor; and a third terminal coupled to the third terminal of the output transistor.
5. The inter-integrated circuit input circuit of claim 4, wherein the input circuit comprises: a fifth transistor including: a first terminal coupled to the first terminal of the fourth transistor; a second terminal coupled to the third terminal of the fourth transistor; and a third terminal coupled to a ground rail.
6. The inter-integrated circuit input circuit of claim 1, wherein the input circuit comprises: a third transistor including: a first terminal coupled to the third terminal of the output transistor; and a second terminal coupled to the signal input terminal; and a fourth transistor including: a first terminal coupled to a third terminal of the third transistor; a second terminal coupled to the second terminal of the third transistor; and a third terminal coupled to a ground rail.
7. The inter-integrated circuit input circuit of claim 6, wherein the input circuit comprises: a fifth transistor including: a first terminal coupled to the first terminal of the fourth transistor; and a second terminal coupled to the third terminal of the output transistor; and a sixth transistor including: a first terminal coupled to a third terminal of the fifth transistor; a second terminal coupled to the second terminal of the first transistor; and a third terminal coupled to the power rail.
8. An inter-integrated circuit input circuit, comprising: a pull-up current circuit; and an input circuit including: an output inverter; an input inverter; and a pull-up circuit coupled to an input of the input inverter, the pull-up circuit including: a pull-up transistor coupled to the input of the input inverter; and a cascode transistor coupled to the pull-up current circuit and the pull-up transistor, the cascode transistor configured to isolate the pull-up transistor from capacitance of a conductor coupled to the pull-up current circuit and the input circuit.
9. The inter-integrated circuit input circuit of claim 8, wherein: the conductor is a first conductor; the input circuit includes a second conductor coupled to the pull-up transistor and the cascode transistor; and the first conductor is longer than the second conductor.
10. The inter-integrated circuit input circuit of claim 8, wherein the input circuit comprises: an input terminal; and a first transistor coupled to the input terminal and the input of the input inverter.
11. The inter-integrated circuit input circuit of claim 10, further comprising: a first bias voltage source coupled to the cascode transistor; and a second bias voltage source coupled to the first transistor.
12. The inter-integrated circuit input circuit of claim 10, wherein the input inverter includes a second input coupled to the input terminal.
13. The inter-integrated circuit input circuit of claim 8, wherein: the input inverter includes a high-side circuit; and the input circuit includes a feedback circuit configured to provide a feedback signal to the high-side circuit based on a signal at an input of the output inverter.
14. The inter-integrated circuit input circuit of claim 8, wherein: the input inverter includes a low-side circuit; and the input circuit includes a feedback circuit configured to provide a feedback signal to the low-side circuit based on a signal at an input of the output inverter.
15. An integrated circuit, comprising: a pull-up current circuit; a first input circuit including a pull-up input terminal coupled to a first output terminal of the pull-up current circuit; and a second input circuit including: a pull-up input terminal coupled to a second output terminal of the pull-up current circuit; a signal input terminal; a signal output terminal; an output transistor including: a first terminal coupled to the signal output terminal; and a second terminal coupled to a power rail; a first transistor including: a first terminal coupled to the signal input terminal; and a second terminal coupled to a first bias voltage source; a second transistor including: a first terminal coupled to a third terminal of the first transistor; and a second terminal coupled to a third terminal of the output transistor; and a cascode transistor including: a first terminal coupled to a third terminal of the second transistor; a second terminal coupled to a second bias voltage source; and a third terminal coupled to the second output terminal of the pull-up current circuit; wherein the first input circuit is arranged at a first distance from the pull-up current circuit, and the second input circuit is arranged at a second distance from the pull-up current circuit.
16. The integrated circuit of claim 15, further comprising: a first conductor connecting the second output terminal of the pull-up current circuit and the third terminal of the cascode transistor; and a second conductor connected to the third terminal of the second transistor and the third terminal of the cascode transistor; wherein the first conductor is longer than the second conductor.
17. The integrated circuit of claim 15, wherein: the output transistor is a first output transistor; and the second input circuit includes a second output transistor including: a first terminal coupled to the signal output terminal; a second terminal coupled to a ground rail; and a third terminal coupled to the third terminal of the first output transistor.
18. The integrated circuit of claim 15, wherein the second input circuit comprises: a third transistor including: a first terminal coupled to the power rail; and a second terminal coupled to the second terminal of the first transistor; a fourth transistor including: a first terminal coupled to a third terminal of the third transistor; and a second terminal coupled to the third terminal of the output transistor; and a fifth transistor including: a first terminal coupled to a third terminal of the fourth transistor; and a second terminal coupled to a ground rail.
19. The integrated circuit of claim 18, wherein the second input circuit comprises: a sixth transistor including: a first terminal coupled to the third terminal of the output transistor; a second terminal coupled to the signal input terminal; and a third terminal coupled to the third terminal of the fourth transistor; and a seventh transistor including: a first terminal coupled to a third terminal of the sixth transistor; a second terminal coupled to the second terminal of the sixth transistor; and a third terminal coupled to the ground rail.
20. The integrated circuit of claim 19, wherein the second input circuit comprises: an eighth transistor including: a first terminal coupled to the third terminal of the first transistor; a second terminal coupled to the third terminal of the output transistor; and a third terminal coupled to a third terminal of the fifth transistor; and a ninth transistor including: a first terminal coupled to a third terminal of the eighth transistor; a second terminal coupled to the third terminal of the first transistor; and a third terminal coupled to the power rail. |
Input circuit for an inter-integrated circuit system. Technical Field. The present disclosure relates to an integrated circuit, and more particularly to an input circuit for an inter-integrated circuit system. Background. Various serial communication buses have been developed to reduce the cost and complexity associated with communication using parallel buses. The Inter-Integrated Circuit (I2C) bus is such a serial bus. The I2C bus uses two conductors (with an assumed common ground) to provide communication between electronic devices (for example, communication between a microcontroller and one or more peripheral devices). The first conductor connects the clock output terminal of the master device to the clock input terminal of one or more slave devices. The second conductor connects the data input/output terminals of the interconnected devices. Summary. This disclosure describes an inter-integrated circuit (I2C) input circuit that reduces the timing variation caused by parasitic capacitance on the pull-up current route. In one example, the I2C input circuit includes a pull-up current circuit and an input circuit. The input circuit includes a signal input terminal, a signal output terminal, an output transistor, a first transistor, a second transistor, and a cascode transistor. The output transistor includes a first terminal coupled to the signal output terminal and a second terminal coupled to the power rail. The first transistor includes a first terminal coupled to the signal input terminal and a second terminal coupled to the first bias voltage source. The second transistor includes a first terminal coupled to the third terminal of the first transistor and a second terminal coupled to the third terminal of the output transistor. The cascode transistor includes a first terminal coupled to the third terminal of the second transistor, a second terminal coupled to the second bias voltage source, and a third terminal coupled to the output terminal of the pull-up current circuit. In another example, the I2C input circuit includes a pull-up current circuit and an input circuit. The input circuit includes an output inverter, an input inverter, and a pull-up circuit. The pull-up circuit is coupled to the input of the input inverter and includes a pull-up transistor and a cascode transistor. The pull-up transistor is coupled to the input of the input inverter. The cascode transistor is coupled to the pull-up current circuit and the pull-up transistor, and is configured to isolate the pull-up transistor from the capacitance of the conductor coupled to the pull-up current circuit and the input circuit. In another example, the integrated circuit includes a pull-up current circuit, a first input circuit, and a second input circuit. The first input circuit includes a pull-up input terminal coupled to the first output terminal of the pull-up current circuit. The second input circuit includes a pull-up input terminal, a signal input terminal, a signal output terminal, an output transistor, a first transistor, a second transistor, and a cascode transistor. The pull-up input terminal is coupled to the second output terminal of the pull-up current circuit. The output transistor includes a first terminal coupled to the signal output terminal and a second terminal coupled to the power rail. The first transistor includes a first terminal coupled to the signal input terminal and a second terminal coupled to the first bias voltage source. 
The second transistor includes a first terminal coupled to the third terminal of the first transistor and a second terminal coupled to the third terminal of the output transistor. The cascode transistor includes a first terminal coupled to the third terminal of the second transistor, a second terminal coupled to the second bias voltage source, and a third terminal coupled to the output terminal of the pull-up current circuit. The first input circuit is arranged at a first distance from the pull-up current circuit, and the second input circuit is arranged at a second distance from the pull-up current circuit. Description of the Drawings. For a detailed description of various examples, reference will now be made to the accompanying drawings, in which: Figure 1 shows a block diagram of an example inter-integrated circuit (I2C) input circuit according to the present description; Figure 2 shows an example arrangement of an I2C input circuit on an integrated circuit according to the present description; Figure 3 shows an example I2C pull-up current circuit according to this description; Figure 4 shows an example I2C input circuit according to this description; and Figure 5 shows an example signal received using an I2C input circuit according to this description. Detailed Description. Certain terms are used throughout the specification and claims to refer to specific system components. As those skilled in the art will understand, different parties may use different names to refer to components. This document does not intend to distinguish between components with different names but the same functions. In the present disclosure and claims, the terms "including" and "comprising" are used in an open-ended manner, and therefore should be interpreted as meaning "including, but not limited to..." In addition, the term "coupled" means an indirect or direct connection. Therefore, if a first device is coupled to a second device, the connection can be made by a direct connection or by an indirect connection via other devices and connectors. The expression "based on" means "based at least in part on". Therefore, if X is based on Y, then X may be a function of Y and any number of other factors. An inter-integrated circuit (I2C) input circuit provided on an integrated circuit uses a pull-up current circuit to provide a pull-up current to multiple input circuits. In some integrated circuits, the conductor connecting each input circuit to the pull-up current circuit is very long, and the long conductor accumulates a large parasitic capacitance. The parasitic capacitance affects the propagation delay of the associated input circuit, and in some embodiments causes excessive propagation delay in the input circuit. The I2C input circuit disclosed herein isolates the parasitic capacitance of the conductor connecting the pull-up current circuit and the input circuit from the pull-up circuit of the input circuit. Isolation is provided by adding a cascode transistor in each input circuit. The cascode transistor passes the pull-up current to the pull-up circuit of the input circuit. The isolation provided by the cascode transistor eliminates the timing dependence on the routing parasitic capacitance and improves the timing performance of the input circuit. The cascode transistor is placed at the input circuit, or as a part of the input circuit, rather than at the pull-up current circuit, and in some embodiments the cascode transistor is biased by a voltage generated by the pull-up current circuit. FIG. 
1 shows a block diagram of a portion of an example inter-integrated circuit (I2C) input circuit 100 according to this description. The I2C input circuit 100 includes a pull-up current circuit 102, an input circuit 104, an input circuit 106, and an input circuit 108. The input circuit 106 and the input circuit 108 are similar to or the same as the input circuit 104. Although the I2C input circuit 100 is shown as including an input circuit 104, an input circuit 106, and an input circuit 108, an implementation of the I2C input circuit 100 includes one or more input circuits. The pull-up current circuit 102 provides a pull-up current to each of the input circuit 104, the input circuit 106, and the input circuit 108. The pull-up current circuit 102 includes an output terminal 102A coupled to the pull-up input terminal 104E of the input circuit 104 via a conductor 110. Similarly, the pull-up current circuit 102 includes an output terminal 102D coupled to the pull-up input terminal 106E of the input circuit 106 via a conductor 112, and includes an output terminal 102E coupled to the pull-up input terminal 108E of the input circuit 108 via a conductor 114. Each of the input circuit 104, the input circuit 106, and the input circuit 108 includes a cascode transistor that isolates the pull-up circuit from the parasitic capacitance of the conductor 110, the conductor 112, and the conductor 114.The pull-up current circuit 102 also generates bias voltages for the operation of the input circuit 104, the input circuit 106, and the input circuit 108. The pull-up current circuit 102 includes a bias voltage source 120 coupled to the output terminal 102B, and the output terminal 102B is coupled to the input terminal 104C of the input circuit 104 for providing a bias voltage 116 to the input circuit 104. The pull-up current circuit 102 also includes a bias voltage source 122 coupled to the output terminal 102C, and the output terminal 102C is coupled to the input terminal 104D of the input circuit 104 for providing a bias voltage 118 to the input circuit 104. The bias voltage 116 and the bias voltage 118 are also provided to the input circuit 106 and the input circuit 108.The input circuit 104 includes a signal input terminal 104A for receiving an input signal (for example, a clock signal or a data signal) and a signal output terminal 104B for providing the received signal to an external circuit.FIG. 2 shows an example arrangement of the I2C input circuit on the integrated circuit 200 according to the present description. The integrated circuit 200 includes a pull-up current circuit 102, an input circuit 104, an input circuit 106, and an input circuit 108. The input circuit 104, the input circuit 106, and the input circuit 108 are separated from the pull-up current circuit 102 by a certain distance, and are coupled via conductors 110, 112, and 114, respectively. For example, the pull-up current circuit 102 is provided on one side of the integrated circuit 200, and the input circuit 104 is provided on the opposite side of the integrated circuit 200 and is connected to the pull-up current circuit 102 via a conductor 110 passing through the integrated circuit 200. In some embodiments of the integrated circuit 200, the input circuit 104 is disposed at a first distance from the current pull-up circuit 102, and the input circuit 106 is disposed at a second distance from the current pull-up circuit 102.FIG. 3 shows an example I2C pull-up current circuit 300 according to this description. 
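As context for the current mirror of Figure 3, the short Python sketch below illustrates the first-order relationship such a mirror provides: each output branch (for example, transistors 308, 310, and 312) carries the reference current scaled by the ratio of device sizes, which is how a single reference can supply a pull-up current to every input circuit. This sketch is illustrative only; the 10 microampere reference and the unit size ratios are assumptions and are not taken from the present description.

# Illustrative sketch (assumed values): first-order output current of a MOS current mirror.
def mirrored_current(i_ref_a, wl_ref, wl_out):
    # Saturation-region approximation; ignores channel-length modulation and device mismatch.
    return i_ref_a * (wl_out / wl_ref)

i_ref = 10e-6  # assumed reference current through the diode-connected transistor, in amperes
for branch in ("308", "310", "312"):
    # With equal device sizes, each output transistor sources a copy of the reference current.
    print(branch, mirrored_current(i_ref, wl_ref=1.0, wl_out=1.0))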
The I2C pull-up current circuit 300 is an implementation of the pull-up current circuit 102. The I2C pull-up current circuit 300 includes a pull-up current circuit 302 and a bias voltage circuit 304. The pull-up current circuit 302 includes a current mirror circuit formed by a diode-connected transistor 306 and output transistors 308, 310, and 312. Each of the output transistors 308, 310, and 312 provides a pull-up current used by one of the input circuits 104, 106, or 108. For example, transistor 308 is coupled to output terminal 102A, transistor 310 is coupled to output terminal 102D, and transistor 312 is coupled to output terminal 102E. The bias voltage circuit 304 is an embodiment of the bias voltage source 120 and generates a bias voltage 116 for the bias circuits of the input circuits 104, 106, and 108. In some embodiments of the I2C pull-up current circuit 300, the bias voltage circuit 304 also implements the bias voltage source 122 and generates the bias voltage 118, or the I2C pull-up current circuit 300 includes a circuit similar to the bias voltage circuit 304 to implement the bias voltage source 122 and generate the bias voltage 118. FIG. 4 shows an example I2C input circuit 400 according to this description. The I2C input circuit 400 is an implementation of the input circuit 104, the input circuit 106, or the input circuit 108. The I2C input circuit 400 includes an input inverter 405, an output inverter 407, a feedback circuit 409, a transistor 406, and a pull-up circuit 411. The input inverter 405 includes a high-side circuit including a transistor 412 and a transistor 414, and includes a low-side circuit including a transistor 408 and a transistor 410. In some embodiments of the I2C input circuit 400, the transistors 412 and 414 are P-channel metal oxide semiconductor field effect transistors (MOSFETs), and the transistors 408 and 410 are N-channel field effect transistors. The gate terminal 408G of the transistor 408 and the gate terminal 410G of the transistor 410 are coupled to the signal input terminal 104A for receiving the input signal 426. The source terminal 408S of the transistor 408 is coupled to the ground rail 403. The drain terminal 408D of the transistor 408 is coupled to the source terminal 410S of the transistor 410. The gate terminal 412G of the transistor 412 and the gate terminal 414G of the transistor 414 are coupled to the signal input terminal 104A via the transistor 406 for receiving an input signal. The drain terminal 412D of the transistor 412 is coupled to the drain terminal 410D of the transistor 410. The drain terminal 414D of the transistor 414 is coupled to the source terminal 412S of the transistor 412. The source terminal 414S of the transistor 414 is coupled to the power rail 401. Transistor 406 passes the input signal to the transistors 412 and 414. The transistor 406 includes a gate terminal 406G coupled to the input terminal 104C for biasing, a source terminal 406S coupled to the signal input terminal 104A for receiving the input signal, and a drain terminal 406D coupled to the gate terminal 412G of the transistor 412 and the gate terminal 414G of the transistor 414. In some embodiments of the I2C input circuit 400, the transistor 406 is an N-channel field effect transistor. The pull-up circuit 411 includes a pull-up transistor 404 and a cascode transistor 402. The pull-up transistor 404 provides a pull-up current at the drain terminal 406D of the transistor 406. 
The pull-up transistor 404 includes a gate terminal 404G coupled to the drain terminal 410D of the transistor 410 and the drain terminal 412D of the transistor 412, a drain terminal 404D coupled to the drain terminal 406D of the transistor 406, and a source terminal 404S coupled through the cascode transistor 402 to the pull-up input terminal 104E. The pull-up transistor 404 operates as a switch, and the switch is turned on or off by a signal provided at the drain terminal 410D of the transistor 410. In some embodiments of the I2C input circuit 400, the pull-up transistor 404 and the cascode transistor 402 are P-channel field effect transistors. The cascode transistor 402 isolates the pull-up transistor 404 from the parasitic capacitance of the conductor 110. The gate terminal 402G of the cascode transistor 402 is coupled to the input terminal 104D for biasing. The source terminal 402S of the cascode transistor 402 is coupled to the pull-up input terminal 104E for receiving the pull-up current. The drain terminal 402D of the cascode transistor 402 is coupled to the source terminal 404S of the pull-up transistor 404 via the conductor 430. The conductor 110 is longer than the conductor 430. The voltage at the source terminal 402S of the cascode transistor 402 is isolated from the voltage at the drain terminal 402D. As a result, when the input signal pulls down the voltage at the drain terminal 402D, the voltage at the source terminal 402S is not pulled down. The output inverter 407 includes a high-side transistor 422 and a low-side transistor 424. The gate terminal 422G of the high-side transistor 422 and the gate terminal 424G of the low-side transistor 424 are coupled to the drain terminal 410D of the transistor 410, the drain terminal 412D of the transistor 412, and the gate terminal 404G of the pull-up transistor 404. The source terminal 422S of the high-side transistor 422 is coupled to the power rail 401. The drain terminal 422D of the high-side transistor 422 is coupled to the signal output terminal 104B and the drain terminal 424D of the transistor 424. The source terminal 424S of the low-side transistor 424 is coupled to the ground rail 403. In some embodiments of the I2C input circuit 400, the high-side transistor 422 is a P-channel field effect transistor, and the low-side transistor 424 is an N-channel field effect transistor. The feedback circuit 409 provides feedback to the input inverter 405 based on the signal at the input of the output inverter 407 (for example, at the gate terminal 422G of the transistor 422), and includes a transistor 416, a transistor 418, and a transistor 420. The feedback provided by the feedback circuit 409 causes the input inverter 405 to operate as a Schmitt trigger. The transistor 416 includes a gate terminal 416G coupled to the gate terminal 422G of the high-side transistor 422 to provide feedback to the transistor 412 and the transistor 414 based on the signal at the input of the output inverter 407. The source terminal 416S of the transistor 416 is coupled to the drain terminal 414D of the transistor 414 and the source terminal 412S of the transistor 412. The drain terminal 416D of the transistor 416 is coupled to the ground rail 403. 
In some embodiments of the I2C input circuit 400, the transistor 416 is a P-channel field effect transistor. The transistor 420 includes a gate terminal 420G coupled to the gate terminal 422G of the high-side transistor 422 to provide feedback to the transistor 408 and the transistor 410 based on the signal at the input of the output inverter 407. The source terminal 420S of the transistor 420 is coupled to the drain terminal 408D of the transistor 408 and the source terminal 410S of the transistor 410. The drain terminal 420D of the transistor 420 is coupled to the power rail 401 via the transistor 418. The gate terminal 418G of the transistor 418 is coupled to the input terminal 104C for biasing. The source terminal 418S of the transistor 418 is coupled to the drain terminal 420D of the transistor 420, and the drain terminal 418D of the transistor 418 is coupled to the power rail 401. In some embodiments of the I2C input circuit 400, the transistors 418 and 420 are N-channel field effect transistors. Figure 5 shows an example signal received using an I2C input circuit 400 according to an implementation of the present description. The input signal 426 is received by the I2C input circuit 400, and the output signal 428 illustrates the propagation delay 502 produced by the I2C input circuit 400. For example, the I2C input circuit 400 produces a maximum propagation delay of less than 10 nanoseconds. The signal 504 is generated by an I2C input circuit lacking the cascode transistor 402 and exhibits a propagation delay 506 of about 185 nanoseconds. The foregoing discussion is intended to illustrate the principles and various embodiments of the present invention. Once the above disclosure is fully understood, many changes and modifications will become apparent to those skilled in the art. The following claims are intended to be interpreted as encompassing all such changes and modifications. |
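As a closing illustration of the propagation delay comparison of Figure 5 above, the Python sketch below estimates the low-to-high slewing delay of the pulled-up node as C*V/I, once with the long routing conductor charged directly by the pull-up current and once with the cascode transistor isolating that routing capacitance. The current, voltage swing, and capacitance values are assumptions chosen only to reproduce the orders of magnitude discussed with reference to Figure 5 (roughly 185 nanoseconds without isolation versus less than 10 nanoseconds with it); they are not values taken from the present description.

# Illustrative sketch (assumed values): first-order slewing delay of a node charged by a constant pull-up current.
def pullup_delay_ns(capacitance_f, pullup_current_a, voltage_swing_v):
    # Time in nanoseconds for the current to slew the capacitive node by voltage_swing_v.
    return capacitance_f * voltage_swing_v / pullup_current_a * 1e9

I_PULLUP = 10e-6        # assumed pull-up current from the current mirror (10 microamperes)
SWING = 1.8             # assumed voltage swing at the pulled-up node (volts)
C_LONG_ROUTE = 1.0e-12  # assumed parasitic capacitance of a long conductor such as conductor 110 (1 pF)
C_LOCAL_NODE = 50e-15   # assumed capacitance of a short local conductor such as conductor 430 (50 fF)

# Without the cascode transistor, the pull-up node must also recharge the long route:
print(pullup_delay_ns(C_LONG_ROUTE + C_LOCAL_NODE, I_PULLUP, SWING))  # about 189 ns
# With the cascode transistor isolating the route, only the short local node is slewed:
print(pullup_delay_ns(C_LOCAL_NODE, I_PULLUP, SWING))                 # about 9 ns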
A chip (100) for hybrid bridged fanout chiplet connectivity, the chip comprising: a central chiplet (106); one or more first chiplets (102a-n) each coupled to the central chiplet using a plurality of fanout traces (110); and one or more second chiplets (104a-m) each coupled to the central chiplet using one or more interconnect dies (ICDs) (108a-m). |
CLAIMS
What is claimed is:
1. A chip for hybrid bridged fanout chiplet connectivity, the chip comprising: a central chiplet; one or more first chiplets each coupled to the central chiplet using a plurality of fanout traces; and one or more second chiplets each coupled to the central chiplet using one or more interconnect dies (ICDs).
2. The chip of claim 1, wherein each of the one or more second chiplets are positioned nearer to the central chiplet relative to the one or more first chiplets.
3. The chip of claim 1, wherein the one or more first chiplets are positioned in a first column of chiplets and the one or more second chiplets are positioned in a second column of chiplets.
4. The chip of claim 1, wherein the one or more first chiplets are positioned in a first row of chiplets and the one or more second chiplets are positioned in a second row of chiplets.
5. The chip of claim 1, wherein the one or more first chiplets are coupled to the central chiplet by a plurality of fanout trace layers layered on a wafer comprising the central chiplet, the one or more first chiplets, and the one or more second chiplets.
6. The chip of claim 5, wherein the one or more interconnect dies are bonded to a layer of the chip layered on the plurality of fanout trace layers.
7. The chip of claim 1, further comprising one or more conductive pillars.
8. The chip of claim 7, further comprising a plurality of caps for the one or more conductive pillars and the one or more interconnect dies (ICDs).
9. The chip of claim 8, wherein the one or more second chiplets include a plurality of second chiplets, the one or more interconnecting dies include a plurality of interconnecting dies, and wherein each of the plurality of second chiplets is coupled to the central chiplet using a respective interconnecting die of the plurality of interconnecting dies.
10. An apparatus for hybrid bridged fanout chiplet connectivity, the apparatus comprising: one or more components, wherein at least one component is operatively coupled to a chip and the chip comprises: a central chiplet; one or more first chiplets each coupled to the central chiplet using a plurality of fanout traces; and one or more second chiplets each coupled to the central chiplet using one or more interconnect dies (ICDs).
11. The apparatus of claim 10, wherein each of the one or more second chiplets are positioned nearer to the central chiplet relative to the one or more first chiplets.
12. The apparatus of claim 10, wherein the one or more first chiplets are positioned in a first column of chiplets and the one or more second chiplets are positioned in a second column of chiplets.
13. The apparatus of claim 10, wherein the one or more first chiplets are positioned in a first row of chiplets and the one or more second chiplets are positioned in a second row of chiplets.
14. The apparatus of claim 10, wherein the one or more first chiplets are coupled to the central chiplet by a plurality of fanout trace layers layered on a wafer comprising the central chiplet, the one or more first chiplets, and the one or more second chiplets.
15. The apparatus of claim 14, wherein the one or more interconnect dies are bonded to a layer of the chip layered on the plurality of fanout trace layers.
16. The apparatus of claim 10, wherein the chip comprises one or more conductive pillars.
17. The apparatus of claim 16, further comprising a plurality of caps for the one or more conductive pillars and the one or more interconnect dies (ICDs).
18. The apparatus of claim 10, wherein the one or more second chiplets include a plurality of second chiplets, the one or more interconnecting dies include a plurality of interconnecting dies, and wherein each of the plurality of second chiplets is coupled to the central chiplet using a respective interconnecting die of the plurality of interconnecting dies.
19. A method of hybrid bridged fanout chiplet connectivity, the method comprising: coupling, to a central chiplet of a chip, one or more first chiplets using a plurality of fanout traces; and coupling, to the central chiplet, one or more second chiplets using one or more interconnect dies (ICDs).
20. The method of claim 19, wherein each of the one or more second chiplets are positioned nearer to the central chiplet relative to the one or more first chiplets.
21. The method of claim 19, wherein the one or more first chiplets are positioned in a first column of chiplets and the one or more second chiplets are positioned in a second column of chiplets.
22. The method of claim 19, wherein the one or more first chiplets are positioned in a first row of chiplets and the one or more second chiplets are positioned in a second row of chiplets.
23. The method of claim 19, wherein coupling, to the central chiplet, the one or more first chiplets comprises layering a plurality of fanout trace layers on a wafer comprising the central chiplet, the one or more first chiplets, and the one or more second chiplets.
24. The method of claim 19, wherein coupling, to the central chiplet, the one or more second chiplets comprises bonding the one or more interconnect dies to a layer of the chip.
25. The method of claim 19, further comprising forming one or more conductive pillars in a layer of the chip.
26. The method of claim 25, further comprising capping the one or more conductive pillars and the one or more interconnect dies.
27. The method of claim 19, wherein the one or more second chiplets include a plurality of second chiplets, the one or more interconnecting dies include a plurality of interconnecting dies, and wherein each of the plurality of second chiplets is coupled to the central chiplet using a respective interconnecting die of the plurality of interconnecting dies. |
HYBRID BRIDGED FANOUT CHIPLET CONNECTIVITY. BACKGROUND ART. [0001] A chip composed of multiple chiplets may require interconnections between a central chiplet and each of the remaining chiplets. For example, interconnecting dies (ICDs) or bridges can be used to connect a central chiplet to chiplets adjacent to the central chiplet. However, an active bridge die that covers multiple chiplets can impact the power and ground connections to the dies. Fanout traces can be used to connect the chiplets to the central chiplet. However, even with high density fanout routing layers, routing all the traces from a limited area of the central chiplet (e.g., a particular side or face of the chiplet) is not possible. BRIEF DESCRIPTION OF DRAWINGS. [0002] Figure 1A is a block diagram of an example chip for hybrid bridged fanout chiplet connectivity according to some embodiments. [0003] Figure 1B is a diagram of an example chip for hybrid bridged fanout chiplet connectivity according to some embodiments. [0004] Figure 2A is a diagram of a stage of a fabrication process of a chip for hybrid bridged fanout chiplet connectivity according to some embodiments. [0005] Figure 2B is a diagram of a stage of a fabrication process of a chip for hybrid bridged fanout chiplet connectivity according to some embodiments. [0006] Figure 2C is a diagram of a stage of a fabrication process of a chip for hybrid bridged fanout chiplet connectivity according to some embodiments. [0007] Figure 2D is a diagram of a stage of a fabrication process of a chip for hybrid bridged fanout chiplet connectivity according to some embodiments. [0008] Figure 3 is a flowchart of an example method for hybrid bridged fanout chiplet connectivity according to some embodiments. [0009] Figure 4 is a flowchart of an example method for hybrid bridged fanout chiplet connectivity according to some embodiments. [0010] Figure 5 is a flowchart of an example method for hybrid bridged fanout chiplet connectivity according to some embodiments. [0011] Figure 6 is a flowchart of an example method for hybrid bridged fanout chiplet connectivity according to some embodiments. [0012] Figure 7 is a flowchart of an example method for hybrid bridged fanout chiplet connectivity according to some embodiments.
DESCRIPTION OF EMBODIMENTS[0013] Hybrid bridged fanout chiplet connectivity, according to various embodiments of the present disclosure, includes: coupling, to a central chiplet of a chip, one or more first chiplets using a plurality of fanout traces. Such hybrid bridged fanout chiplet connectivity also includes coupling, to the central chiplet, one or more second chiplets using one or more interconnect dies (ICDs).[0014] In some embodiments, each of the one or more second chiplets are positioned nearer to the central chiplet relative to the one or more first chiplets. In some embodiments, the one or more first chiplets are positioned in a first column of chiplets and the one or more second chiplets are positioned in a second column of chiplets. In some embodiments, the one or more first chiplets are positioned in a first row of chiplets and the one or more second chiplets are positioned in a second row of chiplets. In some embodiments, coupling, to the central chiplet, the one or more first chiplets includes layering a plurality of fanout trace layers on a wafer comprising the central chiplet, the one or more first chiplets, and the one or more second chiplets. In some embodiments, coupling, to the central chiplet, the one or more second chiplets includes bonding the one or more interconnect dies to a layer of the chip. In some embodiments, the method further includes forming one or more conductive pillars in a layer of the chip. In some embodiments, the method further includes capping the one or more conductive pillars and the one or more interconnect dies. In some embodiments, the one or more second chiplets include a plurality of second chiplets, the one or more interconnecting dies include a plurality of interconnecting dies, and each of the plurality of second chiplets is coupled to the central chiplet using a respective interconnecting die of the plurality of interconnecting dies.[0015] In some embodiments, a chip for hybrid bridged fanout chiplet connectivity includes: a central chiplet; one or more first chiplets each coupled to the central chiplet using a plurality of fanout traces; and one or more second chiplets each coupled to the central chiplet using one or more interconnect dies (ICDs).[0016] In some embodiments, each of the one or more second chiplets are positioned nearer to the central chiplet relative to the one or more first chiplets. In some embodiments, the one or more first chiplets are positioned in a first column of chiplets and the one or more second chiplets are positioned in a second column of chiplets. In some embodiments, the one or more first chiplets are positioned in a first row of chiplets and the one or more second chiplets are positioned in a second row of chiplets. In some embodiments, the one or more first chiplets are coupled to the central chiplet by a plurality of fanout trace layers layered on
a wafer including the central chiplet, the one or more first chiplets, and the one or more second chiplets. In some embodiments, the one or more interconnect dies are bonded to a layer of the chip layered on the plurality of fanout trace layers. In some embodiments, the chip further includes one or more conductive pillars. In some embodiments, the chip further includes a plurality of caps for the one or more conductive pillars and the one or more interconnect dies (ICDs). In some embodiments, the one or more second chiplets include a plurality of second chiplets, the one or more interconnecting dies include a plurality of interconnecting dies, and each of the plurality of second chiplets is coupled to the central chiplet using a respective interconnecting die of the plurality of interconnecting dies.[0017] In some embodiments, an apparatus for hybrid bridged fanout chiplet connectivity includes: one or more components, wherein at least one component is operatively coupled to a chip and the chip includes: a central chiplet; one or more first chiplets each coupled to the central chiplet using a plurality of fanout traces; and one or more second chiplets each coupled to the central chiplet using one or more interconnect dies (ICDs).[0018] In some embodiments, each of the one or more second chiplets are positioned nearer to the central chiplet relative to the one or more first chiplets. In some embodiments, the one or more first chiplets are positioned in a first column of chiplets and the one or more second chiplets are positioned in a second column of chiplets. In some embodiments, the one or more first chiplets are positioned in a first row of chiplets and the one or more second chiplets are positioned in a second row of chiplets. In some embodiments, the one or more first chiplets are coupled to the central chiplet by a plurality of fanout trace layers layered on a wafer including the central chiplet, the one or more first chiplets, and the one or more second chiplets. In some embodiments, the one or more interconnect dies are bonded to a layer of the chip layered on the plurality of fanout trace layers. In some embodiments, the chip further includes one or more conductive pillars. In some embodiments, the chip further includes a plurality of caps for the one or more conductive pillars and the one or more interconnect dies (ICDs). In some embodiments, the one or more second chiplets include a plurality of second chiplets, the one or more interconnecting dies include a plurality of interconnecting dies, and each of the plurality of second chiplets is coupled to the central chiplet using a respective interconnecting die of the plurality of interconnecting dies.[0019] Figure 1A is a block diagram of a non-limiting example chip 100. The example chip 100 can be implemented in a variety of computing devices, including mobile devices, personal computers, peripheral hardware components, gaming devices, set-top boxes, and the like. The chip 100 includes a plurality of chiplets 102a-n, 104a-m. Each of the chiplets
102a-n, 104a-m is a functional circuit block designed to integrate with other chiplets 102a-n, 104a-m. The chip 100 also includes a central chiplet 106. The central chiplet 106 is distinguished from the other chiplets 102a-n, 104a-m in that each of the other chiplets 102a-n, 104a-m is coupled (e.g., communicatively coupled, conductively coupled) to the central chiplet 106. Each of the chiplets 102a-n, 104a-m and the central chiplet 106 are located on an organic substrate. The organic substrate is composed of organic small molecules or polymers, including polycyclic aromatic compounds such as pentacene, anthracene, and rubrene. Each of the chiplets 102a-n, 104a-m and the central chiplet 106 are located within a layer of molding, such as epoxy. The molding serves to fix the chiplets 102a-n, 104a-m and the central chiplet 106 in place. The molding layer is coplanar with the chiplets 102a-n, 104a-m and the central chiplet 106 to allow for additional redistribution layers to be applied on the chiplets 102a-n, 104a-m and the central chiplet 106. [0020] A communicative connection between the central chiplet 106 and the chiplets 102a-n, 104a-m is utilized to perform input/output communications between the components of the chip. One existing solution for connecting multiple chiplets 102a-n, 104a-m to a central chiplet 106 includes utilizing an interconnecting die (ICD) or active bridge die that can be used to connect a central chiplet 106 to chiplets 104a-m adjacent to the central chiplet 106. However, such an active bridge die that covers multiple chiplets can impact the power and ground connections to the chiplet dies. An alternative existing implementation of connecting chiplets to a central chiplet includes utilizing fanout traces (e.g., embedded in redistribution layers) to connect the chiplets to the central chiplet. However, even with high density fanout routing layers, routing all the necessary traces from a limited area of the central chiplet to many different other chiplets is often not possible and does not scale as the number of chiplets needing to be connected to the central chiplet increases. [0021] The example chip 100 of Figure 1A, however, implements interconnecting dies (ICDs) (e.g., bridge dies) 108a-m to couple the central chiplet 106 to those of the chiplets 102a-n, 104a-m nearest to the central chiplet 106 and a plurality of fanout traces 110 to connect the central chiplet 106 to those of the chiplets 102a-n, 104a-m that are not connected to the central chiplet 106 using the interconnecting dies 108a-m. In this configuration, connections that implement both a fanout and an ICD to couple multiple chiplets to a central chiplet are referred to as a hybrid bridged fanout interconnect. In this way, power and ground connections are not affected by the ICD, and designs that utilize such a hybrid bridged fanout interconnect enable the number of chiplets being coupled to the central chiplet to be scalable.
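To make the hybrid selection rule concrete, the Python sketch below models, under assumed conventions, how a layout might assign a connection style per chiplet: chiplets in the column adjacent to the central chiplet 106 are reached through an interconnecting die, while chiplets in farther columns are reached through fanout traces. The Chiplet dataclass, the column numbering, and the helper function are illustrative assumptions and are not structures defined by this disclosure.

# Illustrative sketch (assumed data model): assigning a connection style in a hybrid bridged fanout layout.
from dataclasses import dataclass

@dataclass
class Chiplet:
    name: str
    column: int  # columns counted outward from the central chiplet; 1 = adjacent column

def link_type(chiplet: Chiplet) -> str:
    # Adjacent chiplets use an interconnecting die (ICD); farther chiplets use fanout traces.
    return "interconnecting die (ICD)" if chiplet.column == 1 else "fanout traces"

layout = [Chiplet("104a", 1), Chiplet("104b", 1), Chiplet("102a", 2), Chiplet("102b", 2)]
for c in layout:
    print(f"chiplet {c.name}: coupled to central chiplet 106 via {link_type(c)}")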
[0022] Interconnecting dies 108a-m are silicon dies that provide a connective coupling between two chiplets. For example, the central chiplet 106 and the chiplets 104a-m each include multiple input/output (I/O) connection points of metal or other conductive material. The interconnecting dies 108a-m include conductive pathways that terminate in I/O connection points. By aligning the I/O connection points of the interconnecting dies 108a-m with the I/O connection points of the central chiplet 106 and chiplets 104a-m and then bonding the interconnecting dies 108a-m to the central chiplet 106 and chiplets 104a-m, conductive pathways are formed between the central chiplet 106 and chiplets 104a-m through the interconnecting dies 108a-m. In the example shown, the chip 100 includes two columns of chiplets, with the column of chiplets 104a-m being closest to the central chiplet 106. Accordingly, interconnecting dies 108a-m connect the chiplets 104a-m to the central chiplet 106. For example, in some embodiments, each chiplet 104a-m to be connected using an interconnecting die 108a-m is connected to the central chiplet 106 using its own dedicated interconnecting die 108a-m. In other words, to connect m-numbers of chiplets 104a-m to the central chiplet 106, m-numbers of interconnecting dies 108a-m are used. In other embodiments, a single interconnecting die 108a-m is used to connect multiple chiplets 104a-m in the same column to the central chiplet 106. [0023] The chip 100 also uses a plurality of fanout traces 110 to connect the central chiplet 106 to those of the chiplets 102a-n, 104a-m not connected to the central chiplet 106 using the interconnecting dies 108a-m. In the example chip 100, the chiplets 102a-n are connected to the central chiplet 106 using the fanout traces 110. Fanout traces 110 are traces of conductive material such as carbon, silver, aluminum, and the like traced in a layer of dielectric material, such as polyimide. The fanout traces 110 are traced into multiple layers of dielectric material, hereinafter referred to as fanout trace layers. The fanout trace layers into which the fanout traces 110 are embedded are redistribution layers. A redistribution layer generally is an extra metal layer on a chip that makes the I/O pads of an integrated circuit available in other locations of the chip for better access to the pads where necessary. Each fanout trace 110 connects the central chiplet 106 to a chiplet 102a-n via one or more conductive interconnects in the intermediary layers of the chip 100 (e.g., intermediary fanout trace layers or other redistribution layers). For example, a fanout trace 110 provides a conductive link from an I/O connection point of the chiplet 102a-n to an I/O connection point of the central chiplet 106. [0024] In some embodiments, each fanout trace layer includes a fanout trace 110 from the central chiplet 106 to each chiplet 102a-n to be connected using the fanout traces 110. For
example, a first fanout trace layer includes first fanout traces 110 from the central chiplet 106 to each chiplet 102a-n, a second fanout trace layer includes second fanout traces 110 from the central chiplet 106 to each chiplet 102a-n, etc. Thus, assuming x-numbers of fanout trace layers, each chiplet 102a-n has x-numbers of fanout traces 110 to the central chiplet. One skilled in the art would appreciate that other combinations or distributions of fanout traces 110 in fanout trace layers are possible. [0025] The chip 100 of Figure 1A implements both interconnecting dies 108a-m to couple the central chiplet 106 to the nearest chiplets 104a-m, and fanout traces 110 to connect the central chiplet 106 to the other, further chiplets 102a-n. One skilled in the art would appreciate that the arrangement of the chiplets 102a-n, 104a-m and the central chiplet 106 is exemplary, and that other arrangements are possible. For example, in some embodiments, additional columns of chiplets sharing rows with the chiplets 102a-n, 104a-m are included in the chip 100. In some embodiments, an additional column of chiplets is positioned adjacent to an opposing face of the central chiplet 106 (e.g., the right face of the central chiplet 106 opposing the left face of the central chiplet 106). In such an embodiment, this additional column of chiplets is also connected to the central chiplet 106 using interconnecting dies, as this additional column of chiplets is positioned adjacent to the central chiplet 106. In some embodiments, further columns of chiplets are positioned adjacent to this additional column of chiplets and connected using additional fanout traces. [0026] One skilled in the art would also appreciate that the use of “rows” or “columns” of chiplets as used herein is relative to which face of the central chiplet 106 a particular grouping of chiplets is positioned near. For example, while the preceding example discussed connecting a column of chiplets 104a-m nearest to the left face of the central chiplet 106 using interconnecting dies 108a-m, in some embodiments, a row of chiplets closest to the upper or lower face of the central chiplet 106 is connected using interconnecting dies. In this example, additional rows of chiplets further from the upper or lower face of the central chiplet 106 would also be connected using fanout traces 110. [0027] Figure 1B is a diagram of an example chip for hybrid bridged fanout chiplet connectivity according to some embodiments. For example, Figure 1B depicts a lateral cross-section view of the chip 100 of Figure 1A. Figure 1B shows the chiplet 102a, chiplet 104a, and central chiplet 106 within a layer of molding 120. The molding 120 includes epoxy or another substance that fixes the chiplet 102a, chiplet 104a, and central chiplet 106 in position on a substrate (not shown). Such a substrate includes, for example, organic
substrates composed of organic small molecules or polymers, including polycyclic aromatic compounds such as pentacene, anthracene, and rubrene. [0028] A redistribution layer 122 is deposited on the layer of molding 120 that includes the chiplet 102a, chiplet 104a, and central chiplet 106. The redistribution layer 122 is composed of a dielectric material such as polyimide or another insulating material. The redistribution layer 122 includes conductive interconnects 124 composed of copper or another conductive material. The conductive interconnects 124 provide input/output connectivity points for the chiplet 102a, chiplet 104a, and central chiplet 106. Thus, signals between any of the chiplet 102a, chiplet 104a, and central chiplet 106 use conductive pathways with the conductive interconnects 124 as endpoints. [0029] Multiple fanout trace layers 126 are layered over the redistribution layer 122. The fanout trace layers 126 are redistribution layers (e.g., layers of dielectric material such as polyimide or another insulating material) that each house one or more fanout traces 110. The fanout traces 110 of each fanout trace layer 126 form signal paths between the central chiplet 106 and the chiplet 102a. Each fanout trace layer 126 also includes conductive interconnects 128. Whereas the conductive interconnects 124 provide input/output connectivity points for the chiplet 102a, chiplet 104a, and central chiplet 106, the conductive interconnects 128 provide a conductive pathway between fanout trace layers 126. Thus, a signal uses conductive interconnects 128 to travel between adjacent fanout trace layers 126, the redistribution layer 122, or the redistribution layer 130 to be described below. [0030] Another redistribution layer 130 is layered over the fanout trace layers 126. The redistribution layer 130 houses conductive pillars 132 of copper or another conductive material. The conductive pillars 132 provide conductive pathways between caps 134 and the chiplets 102a, 104a, and central chiplet 106 via intervening conductive interconnects 124, 128. The caps 134 are composed of a tin-silver alloy or other substance suitable for solderable connections. Also housed in the redistribution layer 130 is the interconnecting die 108a. The interconnecting die 108a forms a signal pathway between the central chiplet 106 and the chiplet 104a using the conductive interconnects 128 of the intervening redistribution layers 126. The conductive pillars 132 and the interconnecting die 108a are further housed in another layer of molding to hold the conductive pillars 132 and the interconnecting die 108a in place. [0031] Figures 2A-2D show example lateral views of fabrication stages for a chip for hybrid bridged fanout chiplet connectivity according to some embodiments. As shown in Figure 2A, silicon dies for a central chiplet 106, a chiplet 102a and a chiplet 104a are reconstituted on a carrier (not shown). Reconstituting the central chiplet 106, the chiplet 102a and the chiplet
104a includes placing the central chiplet 106, the chiplet 102a and the chiplet 104a on the carrier and applying molding 202 around the central chiplet 106, the chiplet 102a and the chiplet 104a to fix their positions in the chip 100. In some embodiments, the molding 202 includes epoxy or another material. A front side aluminum layer 204 is exposed to allow conductive connectivity to the central chiplet 106, the chiplet 102a and the chiplet 104a. Although Figure 2A describes an aluminum layer 204, it is understood that the use of other conductive materials instead of or in addition to aluminum is possible. [0032] As shown in Figure 2B, fanout trace layers 206 are applied to the chip 100 on the aluminum layer 204. Each fanout trace layer 206 is a redistribution layer that includes one or more fanout traces 110 composed of copper or another conductive material. In this example, the fanout traces 110 provide a connection between the central chiplet 106 and the chiplet 102a. Each fanout trace layer 206 also includes conductive interconnects 208 providing conductive paths between fanout trace layers 206. The conductive interconnects 208 are composed of copper or another conductive material. The fanout trace layers 206 are also composed of a dielectric material such as polyimide or another insulating material. Thus, the dielectric material of the fanout trace layers 206 houses the fanout traces 110 and conductive interconnects 208. By applying multiple fanout trace layers 206, multiple connection paths of the fanout traces 110 couple the central chiplet 106 to the chiplet 102a. Moreover, the conductive interconnects 208 allow for signal transfer between the fanout trace layers 206 and the chiplet 102a, chiplet 104a, and central chiplet 106. For example, and as described in further detail below, signal pathways from the chiplet 102a, chiplet 104a, and central chiplet 106 are formed via the conductive interconnects 208 of the fanout trace layers 206, terminating in solderable connection points on the surface of the chip. [0033] As shown in Figure 2C, another layer of dielectric material (e.g., another redistribution layer) is applied on top of the fanout trace layers 206. Conductive pillars 210 are formed in this applied redistribution layer. In some embodiments, forming the conductive pillars 210 includes inserting preformed conductive pillars in the dielectric material forming the redistribution layer. In other embodiments, forming the conductive pillars includes extruding the conductive material to form the conductive pillars in the redistribution layer. The conductive pillars 210 are composed of copper or another conductive material. An interconnecting die 108a is placed in this redistribution layer to provide a connective coupling between the central chiplet 106 and the chiplet 104a via the conductive interconnects 208 included in the intermediary fanout trace layers 206. For example, the interconnecting die 108a includes conductive pathways that, on one end, come into contact or
are bonded to conductive interconnects 208 coupled to the central chiplet 106, and on another end, come into contact or are bonded to conductive interconnects 208 coupled to the chiplet 104a. In some embodiments, the interconnecting die 108a includes one or more through-silicon vias that provide a conductive pathway through the interconnecting die 108a, from one side of the interconnecting die 108a to the opposing face of the interconnecting die 108a (e.g., from the top of the interconnecting die 108a to the opposing face in the redistribution layer). Thus, solderable connections may be formed with the interconnecting die 108a through the through-silicon vias and into the underlying fanout trace layers 206 and other components. [0034] As shown in Figure 2D, additional molding 212 is applied to the chip 100. The molding 212 is then partially ground to expose the conductive pillars 210 and, if any, the through-silicon vias of the interconnecting die 108a. Thus, the ground molding 212 is coplanar with the exposed conductive pillars 210 and through-silicon vias. Caps 214 are applied to the conductive pillars 210 and the exposed through-silicon vias of the interconnecting die 108a. The caps 214 are composed of a tin-silver alloy or other substance suitable for solderable connections. [0035] Although Figures 2A-2D show a fabrication process by which layers of components are applied on the chiplets 102a, 104a, and central chiplet 106 (e.g., a “die first” fabrication process), it is understood that in some embodiments the chip 100 is fabricated using a “die last” fabrication process. For example, the chiplets 102a, 104a, and central chiplet 106 are applied as part of a last-applied layer of the chip 100. [0036] For further explanation, Figure 3 sets forth a flow chart illustrating an exemplary method for hybrid bridged fanout chiplet connectivity that includes coupling 302 (e.g., in a chip 100), to a central chiplet 106, one or more first chiplets 102a-n using a plurality of fanout traces 110. In some embodiments, the one or more first chiplets 102a-n are included in a same column of a plurality of columns of chiplets 102a-n, 104a-m. In such an embodiment, the plurality of first chiplets 102a-n are those of the chiplets 102a-n, 104a-m not adjacent to the central chiplet 106 (e.g., separated from the central chiplet 106 by one or more other columns of chiplets). In some embodiments, the one or more first chiplets 102a-n are included in a same row of a plurality of rows of chiplets 102a-n, 104a-m. In such an embodiment, the plurality of first chiplets 102a-n are those of the chiplets 102a-n, 104a-m not adjacent to the central chiplet 106 (e.g., separated from the central chiplet 106 by one or more other rows of chiplets). The fanout traces 110 are traces of copper or another conductive material etched into or applied to a layer of dielectric material. In some embodiments, the
In some embodiments, the fanout traces 110 couple the central chiplet 106 and the chiplets 102a-n via one or more intermediary layers through one or more conductive interconnects 208.[0037] The method of Figure 3 also includes coupling 304, to the central chiplet 106, one or more second chiplets 104a-m using one or more interconnect dies 108a-m. The interconnect dies 108a-m are dies of silicon that provide connective links between a chiplet 104a-m and the central chiplet 106. In some embodiments, each chiplet 104a-m to be connected using an interconnecting die 108a-m is connected to the central chiplet 106 using its own dedicated interconnecting die 108a-m. In other words, to connect m chiplets 104a-m to the central chiplet 106, m interconnecting dies 108a-m are used. In some embodiments, the one or more second chiplets 104a-m are included in a same column of a plurality of columns of chiplets 102a-n, 104a-m. In such an embodiment, the plurality of second chiplets 104a-m are those of the chiplets 102a-n, 104a-m in a column of chiplets adjacent to or nearest to the central chiplet 106. In some embodiments, the one or more second chiplets 104a-m are included in a same row of a plurality of rows of chiplets 102a-n, 104a-m. In such an embodiment, the plurality of second chiplets 104a-m are those of the chiplets 102a-n, 104a-m in a row of chiplets adjacent to or nearest to the central chiplet 106. [0038] For further explanation, Figure 4 sets forth a flow chart illustrating an exemplary method for hybrid bridged fanout chiplet connectivity. The method of Figure 4 is similar to the method of Figure 3 in that the method of Figure 4 also includes coupling 302, to a central chiplet 106, one or more first chiplets 102a-n using a plurality of fanout traces 110 and coupling 304 (e.g., in the chip 100), to the central chiplet 106, one or more second chiplets 104a-m using one or more interconnect dies 108a-m.[0039] The method of Figure 4 differs from Figure 3 in that coupling 302 (e.g., in a chip 100), to a central chiplet 106, one or more first chiplets 102a-n using a plurality of fanout traces 110 includes layering 402 a plurality of fanout trace layers 206 on a wafer comprising the central chiplet 106, the one or more first chiplets 102a-n, and the one or more second chiplets 104a-m. The wafer includes the reconstituted central chiplet 106, the one or more first chiplets 102a-n, and the one or more second chiplets 104a-m positioned and fixed in place using molding 202 (e.g., epoxy or another material). In some embodiments, the plurality of fanout trace layers 206 are layered on an exposed aluminum layer bonded to or connected to the central chiplet 106, the one or more first chiplets 102a-n, and the one or more second chiplets 104a-m.[0040] Fanout traces 110 are traces of conductive material, such as carbon, silver, aluminum, and the like, formed in a layer of dielectric material such as polyimide.
The fanout traces 110 are traced into multiple layers of dielectric material. Each fanout trace 110 connects the central chiplet 106 to a first chiplet 102a-n via one or more conductive interconnects in the intermediary layers of the chip 100 (e.g., intermediary fanout trace layers 206 or other layers).[0041] In some embodiments, each fanout trace layer 206 includes a fanout trace 110 from the central chiplet 106 to each first chiplet 102a-n to be connected using the fanout traces 110. For example, a first fanout trace layer includes first fanout traces 110 from the central chiplet 106 to each chiplet 102a-n, a second fanout trace layer includes second fanout traces 110 from the central chiplet 106 to each chiplet 102a-n, and so on. Thus, assuming x fanout trace layers 206, each chiplet 102a-n has x fanout traces 110 to the central chiplet 106. One skilled in the art would appreciate that other combinations or distributions of fanout traces 110 in fanout trace layers 206 are possible.[0042] For further explanation, Figure 5 sets forth a flow chart illustrating an exemplary method for hybrid bridged fanout chiplet connectivity according to some embodiments of the present disclosure. The method of Figure 5 is similar to the method of Figure 3 in that the method of Figure 5 also includes coupling 302, to a central chiplet 106, one or more first chiplets 102a-n using a plurality of fanout traces 110 and coupling 304 (e.g., in the chip 100), to the central chiplet 106, one or more second chiplets 104a-m using one or more interconnect dies 108a-m.[0043] The method of Figure 5 differs from Figure 3 in that coupling 304 (e.g., in the chip 100), to the central chiplet 106, one or more second chiplets 104a-m using one or more interconnect dies 108a-m includes bonding 502 the one or more interconnect dies 108a-m to a layer of the chip. In some embodiments, the layer to which the one or more interconnect dies 108a-m are bonded is layered on top of one or more fanout trace layers 206. Accordingly, in some embodiments, bonding 502 the one or more interconnect dies 108a-m includes bonding 502 the one or more interconnect dies 108a-m to conductive interconnects 208 in the fanout trace layers 206 that provide, for a given interconnect die 108a-m, a conductive connection to the central chiplet 106 and a corresponding second chiplet 104a-m. In some embodiments, the layer into which the one or more interconnect dies 108a-m are bonded includes a layer of dielectric material.[0044] For further explanation, Figure 6 sets forth a flow chart illustrating another exemplary method for hybrid bridged fanout chiplet connectivity according to embodiments of the present disclosure. The method of Figure 6 is similar to the method of Figure 3 in that the method of Figure 6 also includes coupling 302, to a central chiplet 106, one or more first chiplets 102a-n using a plurality of fanout traces 110; and coupling 304 (e.g., in the chip 100), to the central chiplet 106, one or more second chiplets 104a-m using one or more interconnect dies 108a-m.
[0045] The method of Figure 6 differs from Figure 3 in that the method of Figure 6 also includes forming 602 one or more conductive pillars 210 in a layer of the chip 100. In some embodiments, the layer of the chip 100 in which the conductive pillars 210 are formed is a layer to which the one or more interconnect dies 108a-m are bonded. In some embodiments, the conductive pillars 210 are composed of copper or another conductive material. In some embodiments, forming 602 the conductive pillars 210 includes inserting preformed conductive pillars 210 in the dielectric material forming the layer. In other embodiments, forming the conductive pillars 210 includes extruding the conductive material to form the conductive pillars 210 in the layer. In some embodiments, molding 212 such as epoxy is applied around the conductive pillars 210.[0046] For further explanation, Figure 7 sets forth a flow chart illustrating another exemplary method for hybrid bridged fanout chiplet connectivity according to embodiments of the present disclosure. The method of Figure 7 is similar to the method of Figure 6 in that the method of Figure 7 also includes coupling 302, to a central chiplet 106, one or more first chiplets 102a-n using a plurality of fanout traces 110 and coupling 304 (e.g., in the chip 100), to the central chiplet 106, one or more second chiplets 104a-m using one or more interconnect dies 108a-m; and forming 602 one or more conductive pillars 210 in a layer of the chip 100.[0047] The method of Figure 7 differs from Figure 6 in that the method of Figure 7 also includes capping 702 the one or more conductive pillars 210 and the one or more interconnect dies 108a-m. Capping 702 the one or more conductive pillars 210 and the one or more interconnect dies 108a-m includes applying an amount of a capping material to the one or more conductive pillars 210 and the one or more interconnect dies 108a-m to facilitate soldering or other connections. For example, in some embodiments, the capping material includes a tin-silver alloy or another substance suitable for solderable connections.[0048] In view of the explanations set forth above, readers will recognize that the benefits of hybrid bridged fanout chiplet connectivity include:• Improved performance of a computing system by providing low latency, high bandwidth connections between a central chiplet and other chiplets on the same chip set.
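The choice between the two coupling mechanisms described above with reference to Figures 3 through 7 can be summarized as an adjacency rule: chiplets in a row or column adjacent to the central chiplet 106 are coupled through dedicated interconnect dies 108a-m (coupling 304), while more distant chiplets are coupled through fanout traces 110 (coupling 302). The following Python sketch is illustrative only and is not part of the disclosed fabrication flow; the function name plan_coupling and the one-row/one-column adjacency test are assumptions made for the example.

# Illustrative sketch only (not from the disclosure): classifying chiplets by the
# adjacency rule described above. Chiplets within one row and one column of the
# central chiplet are treated as candidates for a dedicated interconnect die;
# all others are coupled through fanout traces.
def plan_coupling(central: tuple[int, int], chiplets: list[tuple[int, int]]) -> dict:
    """Return a mapping of chiplet grid position -> assumed coupling type."""
    plan = {}
    for (row, col) in chiplets:
        adjacent = abs(row - central[0]) <= 1 and abs(col - central[1]) <= 1
        plan[(row, col)] = "interconnect die" if adjacent else "fanout traces"
    return plan

if __name__ == "__main__":
    central_chiplet = (1, 1)
    others = [(1, 0), (1, 2), (0, 1), (1, 3), (1, 4)]
    for position, coupling in plan_coupling(central_chiplet, others).items():
        print(position, "->", coupling)

Under these assumptions, each chiplet mapped to "interconnect die" would receive its own dedicated interconnecting die, consistent with the one-die-per-chiplet arrangement described above.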
[0049] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams can represent a module, segment, or portion of instructions, which includes one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block can occur out of the order noted in the figures. For example, two blocks shown in succession can, in fact, be executed substantially concurrently, or the blocks can sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.[0050] It will be understood from the foregoing description that modifications and changes can be made in various embodiments of the present disclosure. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present disclosure is limited only by the language of the following claims. |
To provide a method of skew equalization between a plurality of transmitters. SOLUTION: In an integrated circuit device 100 with a plurality of transmitters, each transmitter, e.g., the transmitter 190-1, has a corresponding data buffer 121-1, and a sequence is started for each transmitter. Latency is set for each data buffer in response to execution of the sequence. The sequence comprises: a step of obtaining a read address 108 based on a read clock signal 106; a step of obtaining a write address 109 based on a write clock signal 105; a step of calculating a difference 111 between the read address and the write address with a subtractor 110; a step of asserting a flag signal 113 according to the difference 111; and a step of adjusting the read clock signal to change the difference, finding the position of a state change of the flag signal, and setting the latency for one of the data buffers. SELECTED DRAWING: Figure 2 |
A method for activating a plurality of transmitters to initiate a sequence for each of the plurality of transmitters having a corresponding data buffer and for each of the data buffers in response to execution of the sequence. To obtain the read address associated with the read clock signal, to obtain the write address associated with the write clock signal, and to obtain the read address and the write address. The data is determined by determining the difference between the data, asserting the flag signal associated with the difference, adjusting the read clock signal to change the difference, and locating the state change with respect to the flag signal. A method comprising setting the latency for a data buffer among the buffers.The method of claim 1, wherein the latency of the data buffers of the plurality of transmitters is set to at least substantially the same value.The second aspect of the invention, wherein the plurality of transmitters are associated with the plurality of lanes, and the plurality of lanes have the maximum inter-lane skew for parallel data distributed throughout the plurality of transmitters prepared for serialization. The method described.The method of claim 2 or 3, wherein each of the data buffers is set at an intermediate point after the sequence.The method of claim 4, wherein the data buffer is a first-in, first-out buffer having a register for storing data.The method of claim 4 or 5, wherein the data buffer is a first-in, first-out buffer having memory cells for storing data.The method according to any one of claims 4 to 6, wherein the plurality of transmitters are the corresponding plurality of transceivers.The sequence further comprises repeating the steps of the sequence to locate the state change position of the flag signal, where the state change position is the read time domain of the read clock signal and the write of the write clock signal. The method of claim 1, wherein the latency to one of the data buffers is set at the position of the state change, for domain crossing to and from the time domain.The method of claim 8, wherein the repeated assertion of the flag signal corresponding to the data buffer is for toggle the flag signal.When the difference is less than or equal to a part of the buffer depth of the data buffer in the data buffer, the flag signal is asserted as logic 0 and the difference is greater than the portion of the depth of the data buffer. The method of claim 8 or 9, wherein the flag signal is asserted as logic 1.If the flag signal is logic 1, the filling level of the data buffer is greater than half, and if the flag signal is logic 0, the filling level of the data buffer is less than half. Item 10. The method according to Item 10.The sequence is initiated at startup or reset of the plurality of transmitters, the flag signal asserted for the sequence corresponds to the data buffer, and both logic 0 and logic 1 for the first cycle of the sequence. 10. The method of claim 10 or 11.10. One of claims 8 to 12, wherein obtaining the read address comprises dividing the read clock signal by a divisor associated with the buffer depth of the data buffer to obtain the read address. The method described.11. The method of claim 11, wherein obtaining the write address comprises dividing the write clock signal by the divisor associated with the buffer depth of the data buffer to obtain the write address.The method of any one of claims 8-12, wherein calculating the difference comprises Gray coding. |
Transmitter lane-to-lane skew adjustmentThe following description relates to an integrated circuit device (“IC”). In particular, it relates to the skew adjustment between lanes of a plurality of transmitters in an IC.Generally, the buffer bypass mode is used at startup or reset of a transmitter used in a serializer-deserializer (“SERDES”) or the like. This buffer bypass mode is used, for example, to avoid skew caused by the transmitter buffer. This is because when serializing data over a plurality of lanes, each of the plurality of transmitters uses a buffer such as a first-in first-out buffer (“FIFO”) in parallel, which causes skew between the lanes. However, the buffer bypass mode has various other limitations, but has a problem that the overhead of the circuit becomes remarkably large.In addition, a clock network such as the H clock tree, or other clock network, can be used to supply write clocks to FIFOs that support multiple transmitters. Clock networks are highly temperature dependent and vary significantly with temperature. To compensate for such fluctuations, a delay aligner is used to adjust the write clock to the clock network. The delay aligner is a complex analog circuit. Therefore, the use of a delay aligner increases the circuit overhead in buffer bypass mode of operation. Furthermore, the phase fluctuation between the write clock and the read clock cuts the timing margin, and as the integrated circuit becomes denser and larger in scale, the integrated circuit is greatly affected by the fluctuation of the signal propagation delay. Become. That is, it is greatly affected by the skew between lanes.Therefore, it is desirable to perform skew equalization between a plurality of transmitters without having one or more of the above limits.The method relates to the activation of multiple transmitters. In this method, each of the plurality of transmitters has a data buffer, and each transmitter activates a sequence. Latency (delay time) is set for each data buffer that responds to sequence execution. In this sequence, a step of obtaining a read address based on a read clock signal, a step of obtaining a write address based on a write clock signal, a step of calculating a difference between a read address and a write address, and a flag according to the difference. It includes a step of asserting a signal and a step of adjusting the read clock signal to change the difference to find the position of the state change with respect to the flag signal and set the latency for one of a plurality of data buffers.Optionally, each of the data buffers of the plurality of transmitters may be set to at least approximately the same value for latency.Optionally, multiple transmitters may be associated with multiple lanes, with multiple lanes providing maximum lane-to-lane skew for parallel data distributed across multiple transmitters ready for serialization. May have.Optionally, each of the data buffers may be set to a midpoint after the sequence.Optionally, the data buffer may be a first-in, first-out buffer with registers for storing data.Optionally, the data buffer may be a first-in, first-out buffer with memory cells for storing data.Optionally, the plurality of transmitters may be a plurality of corresponding transceivers.Another method involves activating multiple transmitters. In that method, the plurality of transmitters each have a corresponding data buffer to activate a sequence for each transmitter. Latency is set for each data buffer in response to sequence execution. 
In this sequence, a step of obtaining a read address based on a read clock signal, a step of obtaining a write address based on a write clock signal, a step of calculating a difference between a read address and a write address, and a flag corresponding to this difference are performed. It includes a step of asserting a signal, a step of adjusting the read clock signal to change the difference, and a step of repeating the above steps of the sequence to obtain a change in the state position of the flag signal. The change in state position is for domain crossing between the read time domain of the read clock signal and the write time domain of the write clock signal. Latency is set to change state position for one of multiple data buffers.Optionally, the repeated assertion of the flag signal corresponding to the data buffer may be for toggle the flag signal.Optionally, if the difference is less than or equal to a portion of the buffer depth of one of the data buffers, the flag signal is asserted with logic 0, and if the difference is greater than a portion of the data buffer depth, the flag signal is logical. It may be asserted with 1.Optionally, if the flag signal is logic 1, the data buffer fill level may be greater than half, and if the flag signal is logic 0, the data buffer fill level is half. It may be 1 or less.Optionally, the sequence may be started at startup or reset of multiple transmitters, the flag signal asserted for the sequence may correspond to a data buffer, and is logical for the first cycle of the sequence. Both 0 and logic 1 may be included.Obtaining the read address, optionally, may include dividing the read clock signal by a divisor with respect to the buffer depth of the data buffer to obtain the read address.Obtaining the write address, optionally, may include dividing the write clock signal by a divisor with respect to the buffer depth of the data buffer to obtain the write address.Calculating the difference, optionally, may include Gray coding.Optionally, the method further comprises setting a first latency for the first first-in first-out buffer of the data buffer in response to the first change in the state position of the first flag signal, and a second. In response to the second change in the state position of the flag signal of, the step of setting the second latency for the second first-in first-out buffer of the data buffer and the first data in the first first-in first-out buffer of the data buffer. It may include a step of receiving and a step of receiving in the second first-in first-out buffer of the second data data buffer. As an acceptable inter-lane skew, the first latency for the first data through the first first-in first-out buffer is at least sufficiently close to the second latency for the second data through the second first-in first-out buffer. Good.Integrated circuit devices relate to those having multiple transmitters. In this integrated circuit device, the first transmitter of the plurality of transmitters includes a first data buffer and a first input / output control block coupled to the first data buffer. The second transmitter of the plurality of transmitters includes a second data buffer and a second input / output control block coupled to the second data buffer. The first data buffer has a first delay and the second data buffer has a second delay. 
The first input / output control block is configured to be coupled to receive a write clock signal and a first read clock signal to generate a first write address and a first read address, respectively. .. The first input / output control block is configured to calculate the first difference between the first write address and the first read address and assert the first flag signal according to the first difference. ing. The first input / output control block is coupled to feed back the first flag signal, adjusts the first read clock signal, adjusts the first difference, and is the first with respect to the first flag signal. Make the position of the state change of 1 reach the position associated with the first position of the first data buffer. The second input / output control block is coupled to receive a write clock signal and a second read clock signal, and is configured to generate a second write address and a second read address, respectively. The second input / output control block is configured to calculate the second difference between the second write address and the second read address and assert the second flag signal according to the second difference. Has been done. The second input / output control block is coupled to feed back the second flag signal, adjusting the second read clock signal and adjusting the second difference to the second flag signal. Make sure that the position of the second state change reaches the position associated with the second position of the second data buffer.Optionally, both the first and second transmitters will receive parallel data and provide this parallel data in the first and second lanes for conversion to serial data, respectively. A serializer-deserializer transmitter coupled to.Optionally, the first input / output control block and the second input / output control block include a first phase interpolation circuit and a second phase interpolation circuit, respectively. The first phase interpolation circuit may be coupled to receive the first flag signal as feedback for adjusting the first read clock signal, and the second phase interpolation circuit may be the second read clock. It may be coupled to receive a second flag signal as feedback for adjusting the signal.Optionally, the first and second positions correspond to the first point of the unit spacing of the first phase interpolation circuit and the second point of the unit spacing of the second phase interpolation circuit. The first point may be associated with the first delay. The second point may be associated with a second delay. The first phase interpolation circuit uses the first flag signal to locate the first domain crossing between the first read time domain of the first read clock signal and the first write domain of the write clock signal. It may be adjustable in response. The second phase interpolation circuit can be adjusted in response to the second flag signal to locate the second read time domain of the second read clock signal and the second write domain of the write clock signal. You may. The first phase interpolation circuit and the second phase interpolation circuit can be adjusted to equalize the first delay and the second delay associated with the first domain crossing and the second domain crossing, respectively. You may.Other features will be recognized by considering the following detailed description and claims.The accompanying drawings show exemplary equipment and / or methods. 
However, the accompanying drawings should not be construed as limiting the scope of the claims, but are for illustration and understanding purposes only.FIG. 6 is a block diagram illustrating an exemplary integrated circuit device with a transmitter. FIG. 5 is a block diagram showing an exemplary transmitter of the integrated circuit device of FIG. FIG. 5 is a block diagram showing an exemplary transmitter of the integrated circuit device of FIG. It is a flow chart which shows the flow of the exemplary latency adjustment for activating the transmitter of FIG. It is a flow diagram which shows the exemplary setting sequence which may be used for the operation of the latency adjustment flow of FIG. FIG. 6 is a simplified block diagram representing an exemplary columnar field programmable gate array (“FPGA”) architecture.In the following description, a number of specific details are given to make a more complete description of the specific examples described herein. However, it should be apparent to those skilled in the art that one or more other examples and / or variants of these examples may be performed without any specific details given below. In other cases, well-known features are not described in detail so as not to obscure the description of the examples herein. For ease of description, the same numeric label is used in different drawings to refer to the same item, but in alternative examples, each item may be different.Before giving the examples illustrated in some figures, I will give a general introduction for a better understanding.When data can be serialized across multiple lanes, multiple transmitters can result in inter-lane skew by using their respective buffers in parallel, such as first-in first-out buffers (“FIFOs”). .. As described in more detail below, the phase interpolation circuitry of each transmitter is used to adjust the phase of the read clock signal (“read clock”) provided by such a phase interpolation circuit to provide data from the transmitter FIFO. Can be read. Therefore, each read clock may suffer some differences in propagation delay, but from a commonly supplied write clock signal (“write clock”), ie, a common write clock, provided to each FIFO of the transmitter. It can be phased to the supplied write clock.Traditionally, FIFOs are in the transmitter data path before serializing the data. In general, data, which may be from the FPGA fabric or other circuitry, is input or written to the FIFO in response to the pulse edge (s) of the input clock signal or the write clock signal. This data can have an N bit width, where N is greater than or equal to 1. Such a FIFO may be a memory and / or a set of registers. The FIFO may be able to store multiple sets of inputs, i.e., multiple words. To that end, FIFOs may be described as having a depth of M to store up to M words, each of which may be N-bit wide. Therefore, up to M instances of N-bit wide words may be stored in the FIFO at one time.The transmitter's FIFO can be used, at least in part, to buffer data by absorbing skew associated with differences in propagation delay of input or write clock signals provided over the clock network. it can. Clock networks for distributing input or write clock signals to multiple transmitters may have significant inter-FIFO skew due to propagation delay, which may be voltage and / or temperature sensitive. is there. 
In contrast, the output clock signal or read clock signal may be provided by a phase interpolation circuit or other circuit inside the transmitter or transmit receiver, thus causing significant skew between FIFOs due to their proximity. Never have. Thus, for example, a delay aligner for driving a read clock signal over a clock network to clock control the input or write side of a FIFO, and a transmitter for clock control of the output or read side of such a FIFO. The phase interpolation circuit and the same reference clock provided in may have substantially different propagation delays. To that end, the depth of the FIFO has traditionally been adapted to the maximum propagation delay, i.e. the FIFO has traditionally been an input or write clock signal for starting or resetting an integrated circuit device. The size is adjusted according to the propagation delay of.In addition, the transmitter FIFO may be reset or activated in an unknown state. For example, resetting the transmitter FIFO may be the last operation in the reset procedure for SERDES. After resetting the transmitter's FIFO, such a FIFO may be "roughly" half full. However, the data transmitted by the multiple FIFOs of the SERDES transmitter is not entirely phased, or at least fully phased by a "roughly" half-filled state. In other words, a channel can be in one of three states: more filled than empty, emptyer than full, or midpoint, such between channels. Fluctuations may not be sufficient for some applications.In the past, phase equalization between the read clock signal and the commonly supplied write clock signal was used in conjunction with the FIFO "bypass" to form SERDES. In "buffer bypass mode", phase alignment was maintained using a complex analog phase detector circuit, sometimes referred to as a delay aligner. Using the delay aligner of each transmitter, the phase interpolation circuit of each transmitter ensures that the corresponding read clock signal has a known phase relationship with the commonly supplied write clock signal, and each FIFO has only one word. Operated at depth. However, during operation, the use of the operating transmitter's phase interpolation circuit results in unwanted jitter, so the phase of the common write clock signal is adjusted to keep the write clock signal in phase with each read clock signal. did. During operation, the write clock signal may be out of phase with the read clock signal due to problems associated with the clock tree, a clock network sometimes referred to as an H tree, or other types of clock networks. For example, clock networks can be temperature sensitive, and therefore temperature changes can cause differences in the propagation delay of commonly supplied write clock signals provided to different transmitters. Again, the delay aligner of each transmitter is used to compensate for such differences and the corresponding transmitter write clock signal supplied from the common write clock signal is in phase with the corresponding read clock signal of such transmitter. Adjusted to be. Therefore, the inter-lane skew can be within a phase matching window, such as between transmitters, with a FIFO reduced to store only one vector at a time. However, with larger clock networks, wider data buses, and / or narrower phasing windows, the use of buffer bypass mode is a significant overhead involved when using complex analog delay aligners. 
Apart from that, the problem became bigger.As described in more detail below, all SERDES transmitter data buffers may be set to exactly half full, rather than "roughly", according to PI particle size. Such settings can be used to make the latency between all data buffers of the SERDES transmitter virtually equal, resulting in a substantial reduction in inter-lane skew. To that effect, a flag signal is generated to indicate the state of the data buffer. This flag signal is fed back for phase adjustment of the output clock signal provided in such a data buffer. This adjustment may be repeatedly incremented or decremented to find a position in such a data buffer, and the state of such a flag signal toggles, for example, from logic 1 to logic 0 and vice versa. .. All SERDES transmitter data buffers may be similarly configured in such positions to equalize latency across such data buffers.With the above overall understanding in mind, the various configurations for integrated circuit devices with multiple transmitters are described in general below.FIG. 1 is a block diagram illustrating an exemplary integrated circuit device 100 with a plurality of transmitters 190. In this example, only two transmitters 190-1 and 190-2 are exemplified, but in other cases, three or more transmitters may be used. Further, the transmitter 190 may be a corresponding transceiver or may be a separate transmitter. Transmitters 190-1 and 190-2 can correspond to lanes 191-1 and 191-2. Lanes 191-1 and 191-2 may have maximum inter-lane skew for parallel data distributed across transmitters 190-1 and 190-2 ready for serialization.The first transmitter 190-1 of the transmitter 190 includes a first data buffer 121-1 and a first input / output control block 161-1 coupled to the first data buffer 121-1. The second transmitter 190-2 of the transmitter 190 includes a second data buffer 121-2 and a second input / output control block 161-2 coupled to the second data buffer 121-2.The first data buffer 121-1 has a first delay and the second data buffer 121-2 has a second delay. The first and second delays may be different from each other, including, but not limited to, the case of starting or resetting the integrated circuit device 100. For this purpose, the input data 120-1 may be input when the input to the first data buffer 121-1 is clock-controlled, and the input data 120-2 may be input to the second data buffer 121-2. It may be input when the input is clock-controlled. The output data 122-1 may be output in response to clock control of the output of the first data buffer 121-1, and the output data 122-2 clock-controls the main force of the second data buffer 121-2. It may be output according to the operation.In this example, the first input data 120-1 and the second input data 120-2 are fed from the integrated circuit programmable fabric to the serializer-deserializer 180, as shown at boundary 150. Specifically, the integrated circuit programmable fabric may be a field programmable gate array. However, the first input data 120-1 and the second input data 120-2 are serializers formed from two or more transmitters of the integrated circuit device 100 from any integrated circuit programmable fabric and / or special purpose resource. -Data to the deserializer ("SERDES") 180, or even passing data.Based on the above purpose, the first output data 122-1 and the second output data 122-2 are input to the parallel input serial output data converter (“PISO”) 192, and the serial data 193 is output from the PISO. Can be obtained. 
Due to the difference between the first and second delays of the data buffers 121-1 and 121-2, respectively, there is an interlane skew for the first output data 122-1 and the second output data 122-2. obtain. This difference can be compensated as described below. In other words, both the first transmitter 190-1 and the second transmitter 190-2 take parallel data as an N-bit wide word or part of a word via input data 120-1 and 120-2. It may be a SERDES transmitter 180 coupled to receive. To convert to serial data by PISO, the SERDES transmitter 180 will provide such parallel data in the first lane with output data 122-1 and the second lane with output data 122-2, respectively. May be combined.FIG. 2 is a block diagram showing an exemplary transmitter 190-1 of the integrated circuit device 100, and FIG. 3 is a block diagram showing an exemplary transmitter 190-2 of the integrated circuit device 100. The integrated circuit device 100 will be further described with reference to FIGS. 1 to 3 at the same time.The write clock signal 105 of the transmitter 190 may be commonly supplied from the reference clock signal 101. The reference clock signal 101 is supplied to each phase interpolation circuit 102 of the transmitter 190, and is also supplied to the clock network of the integrated circuit device 100. For clarity, buffer 103 represents, but is not limited to, a clock network. Thus, the reference clock signal 101 can be input to the buffer 103 to distribute the local write clock signals 105 to the transmitter 190, respectively. Accordingly, the local write clock signals 105 have different propagation delays, so not all local write clock signals 105 arrive at their respective transmitter destinations at the same time. That is, such local write clock signals 105 may not only be out of phase with each other, but may also be out of phase with the corresponding local read clock signal 106 of such transmitter 190. Further, clock tree imbalances in different SERDES can be achieved by embedding an unbalanced number in the registers of the input / output control block 161, for example, adding or subtracting an offset address from the write address 109 output from the divider 107. By doing so, it can be nominally compensated. Finally, the read clock signal 106 is generated locally with respect to the corresponding transmitter 190. Therefore, the read clock signals 106 between the transmitters 190 are generally out of phase with each other.The first input / output control block 161-1 is for controlling an input such as writing or loading input data 120-1 to the first data buffer 121-1, and outputs data 122-1. It may be for controlling the output such as reading or unloading from the data buffer 121-1 of 1. Similarly, the second input / output control block 161-2 is for controlling inputs such as writing or loading input data 120-2 to the second data buffer 121-2, and output data 122-. It may be for controlling the output such as reading or unloading 2 from the second data buffer 121-2.Since the first and second transmitters 190 have the same configuration, for clarity, basically only one of the transmitters 190 will be described in detail below. Further, for the sake of clarity, the data buffer 121 is assumed to be a first-in first-out data buffer (“FIFO”), but the present invention is not limited to this. Such a FIFA 121 can consist of a set of memory cells and / or registers. For clarity, but not limitation, write and read clocks shall be used to generate a set of vectors. 
That is, it is assumed that the FIFA 121 is formed from a memory that is accessed sequentially for clarity.The first input / output control block 161-1 is coupled to receive the write clock signal 105 and the first read clock signal 106 to the dividers 107 and 104, respectively, and the first write address to the dividers 107 and 104. The 109 and the first read address 108 are generated, respectively. The first input / output control block 161-1 may be configured to calculate a first difference 111 between the first write address 109 and the first read address 108. The first input / output control block 161-1 may be configured to assert the first flag signal 113 in response to this first difference 111. The first input / output control block 161-1 is coupled to feed back the first flag signal 113, adjusting the first read clock signal 106 and adjusting the first difference 111. The position of the first state change with respect to the first flag signal 113 is made to reach the position associated with the position of the first data buffer 121-1.The second input / output control block 161-2 is coupled to receive the write clock signal 105 and the second read clock signal 106, and depending on these signals, the second write address 109 and the second read address 109. It may be configured to generate each read address 108. The second input / output control block 161-2 may be configured to calculate a second difference 111 between the second write address 109 and the second read address 108. The second input / output control block 161-2 may be configured to assert the second flag signal 113 according to the second difference 111. The second input / output control block 161-2 is coupled to feed back the second flag signal 113, adjusting the second read clock signal 106 and adjusting the second difference 111. The position of the second state change with respect to the second flag signal 113 reaches the position associated with the position of the second data buffer 121-2.The first input / output control block 161-1 may include a first phase interpolation circuit (“PI”) 102. Similarly, the second input / output control block 161-2 may include a second phase interpolation circuit 102.The first phase interpolation circuit 102 may be coupled to receive the first flag signal 113 as feedback in order to adjust the first read clock signal 106. More specifically, the first increment / decrement control block 114, which is usually included in the phase interpolation circuit 102 but is shown exemplarily separately here for clarity, is provided by the adjustment signal 115. A first flag signal 113 can be received to determine whether the first position to be selected should be incremented, decremented, or maintained, such first. The position of is associated with the delay of such a first read clock signal 106 output from such a first phase interpolation circuit 102.Similarly, the second phase interpolation circuit 102 of the second input / output control block 161-2 so as to receive the second flag signal 113 as feedback in order to adjust the second read clock signal 106. May be combined with. The second increment / decrement control block 114 of such a second phase interpolation circuit 102 should increment, decrement, or maintain a second position as selected by the adjustment signal 115. Such a second flag signal 113 can be received to determine if it should be done so that such a second position is output from such a second phase interpolation circuit 102. 
Is associated with the delay of the second read clock signal 106.Such a first position and such a second position are the first position or point of the unit spacing (“UI”) of such a first phase interpolation circuit 102, and such a second position. Corresponds to the second position or point of the unit interval of the phase interpolation circuit 102. For example, at boot time, such a first point may initially be associated with a first delay in FIFA 121-1, such a first delay being adjustable and such a second. The points may initially be associated with a second delay in FIFA 121-2, such a second delay being adjustable.In response to the first flag signal 113, the first phase interpolation circuit 102 is the first read domain between the first read time domain of the first read clock signal 106 and the first write domain of the write clock signal 105. It may be adjusted to locate domain crossing. Similarly, the second phase interpolation circuit 102 responds to the second flag signal 113 with a second read time domain of the second read clock signal 106 and a second write of such a write clock signal 105. It may be adjusted to locate the domain.Since only the FIFO 121 is directly inside the data path of the data flow, the depth M123 of each FIFO 121 includes, but is not limited to, the startup or reset of the transmitter 190 as a result of such a FIFO. Different delays may occur between them, resulting in different latencies between two or more FIFOs 121. In this example, each FIFA 121 can hold an N-bit M word, where M is an integer greater than or equal to 2.The first phase interpolation circuit 102 and the second phase interpolation circuit 102 are the first associated with FIFA 121-1, respectively associated with such a first domain crossing and such a second domain crossing. It may be adjusted to equalize the delay and the second delay associated with FIFA 121-2. Therefore, the output data 122-1 and 122-2 from FIFO 121-1 and 121-2, respectively, may be substantially in phase with the data. In short, the FIFO 121 may be in phase with each other without matching the phase of each read clock signal with the commonly supplied write clock signal. Therefore, it is possible to avoid the use of the transmitter's complex analog delay aligner circuitry to maintain phase matching of the local write clock signal to the local read clock signal. Further, the components of the input / output control block 161 may be digital circuits, and some of these digital circuits in some transmitters provide indicators of corresponding FIFO underflow and / or overflow. May be generally available for. Therefore, by having such components work to provide the feedback flag signal 113 as described above, each transmitter of SERDES can be tuned independently, and all such transmitters of such SERDES After each is so independently adjusted, it is possible to have at least substantially the same data latency, i.e., the inter-lane skew is substantially reduced.With the above description in mind, FIG. 4 is a flow diagram illustrating an exemplary latency adjustment flow 400 for activating a plurality of transmitters 190. FIG. 4 will be further described with reference to FIGS. 1 to 4 at the same time.At 401, the transmitter 190 of the integrated circuit device 100 is activated as part of a startup or reset sequence. At 402, transmitter 190-1 is set or adjusted to have the latency of FIFA 121 to FIFA 121-1. 
This latency may be associated with the position of the state change of the flag signal 113 of the transmitter 190-1, and such an adjustment may be an iterative adjustment that increments or decrements the PI 102 of the transmitter 190-1 in response to feedback of such flag signal 113. In parallel with the operation at 402, at 403, transmitter 190-2 is set or adjusted to have a latency set for FIFO 121-2. This latency may be associated with the position of the state change of the flag signal 113 of transmitter 190-2, and such an adjustment may be an iterative adjustment that increments or decrements the PI 102 of transmitter 190-2 in response to feedback of such flag signal 113. At 404, the input data 120-1 may be received by FIFO 121-1, and such input data 120-1 may be output from FIFO 121-1 as the output data 122-1. At 405, in parallel with the operation at 404, input data 120-2 may be received by FIFO 121-2, and such input data 120-2 may be output from FIFO 121-2 as output data 122-2. Because FIFO 121-1 and FIFO 121-2 can be adjusted to have at least approximately the same latency, the inter-lane skew for output data 122-1 and 122-2 may be small enough, if present at all, for the serialization of such parallel output data 122-1 and 122-2 at 406. In other words, the latency of the input data 120-1 passing through FIFO 121-1 is at least sufficiently close to the latency of the input data 120-2 passing through FIFO 121-2 with respect to an acceptable inter-lane skew for serialization. To summarize, upon activation or reset of the transmitters 190, the sequence may be initiated at 401 for each such transmitter 190 having a corresponding data buffer, such as a FIFO 121. Such sequences can be used, for example, in operations 402 and 403 to set the latency for each such data buffer in parallel in response to the execution of such initiated sequences. Parallel data may then be input to and output from such data buffers for subsequent serialization. FIG. 5 is a flow diagram illustrating an exemplary configuration sequence 500 that can be used for the operations at 402 and 403 of FIG. 4. FIG. 5 will be further described with simultaneous reference to FIGS. 2 to 5. At 501, the read address 108 associated with the read clock signal 106 may be obtained. Each read address 108 may be obtained, for example, by inputting the read clock signal 106 to the divider 104 of an input / output control block, such as the input / output control block 161-1. For clarity, and by way of example without limitation, only the operation of transmitter 190-1 is further described in relation to the configuration sequence 500; it should be understood that each of the transmitters 190 of a SERDES may be configured in the same manner. Generally, at the same time that the read clock signal 106 is provided to the divider 104, such read clock signal 106 is provided to a FIFO, such as FIFO 121-1, and output data 122-1 can be read from such a FIFO. At the same time that the read address is obtained at 501, the write address 109 associated with the write clock signal 105 may be obtained at 502. The operations at 501 and 502 may be performed in parallel. Each write address 109 may be obtained by inputting the write clock signal 105 to the divider 107 of the input / output control block 161-1. Generally, at the same time that the write clock signal 105 is provided to the divider 107, such write clock signal 105 is provided to FIFO 121-1, and the input data 120-1 can be written to such a FIFO.
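As a rough software illustration of operations 501 and 502, the following sketch models each divider as a counter that wraps at the buffer depth; it is not the disclosed hardware, and the depth value and the class name are assumptions made for the example.

# Behavioral sketch only (not the disclosed hardware) of operations 501 and 502:
# the read address 108 and write address 109 are taken from counters that wrap at
# the buffer depth M, which is one way to model a divide-by-M divider such as
# divider 104 or 107.
M = 8  # assumed FIFO depth for illustration

class AddressDivider:
    """Counter state of a divide-by-M divider; the count is the buffer address."""

    def __init__(self, depth: int, start: int = 0) -> None:
        self.depth = depth
        self.address = start % depth

    def clock_pulse(self) -> int:
        """Advance by one address per clock pulse, wrapping every `depth` pulses."""
        self.address = (self.address + 1) % self.depth
        return self.address

write_divider = AddressDivider(M)  # clocked by the local write clock signal 105
read_divider = AddressDivider(M)   # clocked by the locally generated read clock 106

for cycle in range(3):
    write_address = write_divider.clock_pulse()  # write address 109 (operation 502)
    read_address = read_divider.clock_pulse()    # read address 108 (operation 501)
    print(cycle, write_address, read_address)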
Obtaining the read address 108 may include dividing the read clock signal 106 by a divisor associated with the buffer depth to obtain such a read address 108. Similarly, obtaining the write address 109 may include dividing the write clock signal 105 by the same divisor used to obtain the read address, namely a divisor associated with such buffer depth, to obtain such a write address 109. Dividers 104 and 107 may each be configured as divide-by-M dividers, where M is the depth of FIFO 121-1 as described above. The dividers 104 and 107 may be programmable, and the respective dividers 104 and 107 may be preconfigured to divide by M as part of a reset. More generally, for a reset, because the read and write clocks generally have the same frequency, the maximum phase difference between the write address 109 and the read address 108 is generally preset, as a difference between the dividers 107 and 104, to half of the clock period of at least either the read or the write clock signal. This difference can be used to ensure that a read from FIFO 121-1 does not occur before the corresponding write to FIFO 121-1 due to the propagation delay of the write clock signal 105. The clock period in this case may be any number of multiples of the UI of the PI 102, and such a PI can have a resolution such as 1/16, 1/32, or 1/64 of the UI. Therefore, precise phase matching of the write and read clock signals can be obtained. Each read address 108 and each write address 109 output from the dividers 104 and 107, respectively, are input to the subtractor 110, and the difference 111 can be determined. In other words, at 503, the difference 111 between the read address 108 and the write address 109, which may be associated with each other, may be determined. Each such difference 111 output from the subtractor 110 may be provided to the comparator 112. Each such difference 111 can indicate a phase difference between the read clock signal 106 and the corresponding write clock signal 105. The difference 111 may be used to determine a domain crossing. Such a domain crossing may be identified using Gray coding. Because the write and read addresses are generated from two separate clock signals that may be out of phase with each other, the subtractor 110 does more than simply calculate the difference between the two addresses. In other words, the read and write addresses may develop at clocks whose frequency is much lower than the line rate, so a latency measurement can be coarse. Because the read and write addresses may develop on clocks with an unknown relative phase relationship, the difference 111 may be obtained by using Gray coding for the address values and moving both the read and write addresses into a common clock domain. Thus, the difference may be associated with crossing domains in a particular direction, and Gray coding applied to counters (e.g., a read address counter and a write address counter, not shown) within or coupled to the subtractor 110 is one example of generating such a difference. However, other types of coding may be used.
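As a software-only illustration of operation 503, the following sketch shows how a Gray-coded write address can be carried toward the read clock domain and subtracted from the read address to form an occupancy-style difference. Synchronizing registers and metastability are deliberately ignored, and the depth and function names are assumptions rather than the disclosed circuit.

# Simplified, software-only sketch of operation 503 (not the disclosed circuit).
# In hardware, the Gray-coded value is what would be registered in the other clock
# domain; here the encode/decode round trip simply stands in for that transfer.
M = 8  # assumed FIFO depth (a power of two so Gray coding wraps cleanly)

def to_gray(value: int) -> int:
    """Binary -> Gray code; adjacent addresses differ in a single bit."""
    return value ^ (value >> 1)

def from_gray(gray: int) -> int:
    """Gray code -> binary."""
    value = 0
    while gray:
        value ^= gray
        gray >>= 1
    return value

def difference_111(write_address: int, read_address: int, depth: int = M) -> int:
    """Occupancy-style difference between write address 109 and read address 108."""
    sampled_write = from_gray(to_gray(write_address))  # stand-in for the domain crossing
    return (sampled_write - read_address) % depth

assert difference_111(write_address=5, read_address=1) == 4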
The comparator 112 checks the difference 111 to determine whether such a difference is less than or equal to a portion of the depth of a data buffer among the transmitter's data buffers, e.g., the corresponding FIFO 121-1 of the transmitter 190-1. In this example, the flag signal 113 is asserted as a logic 0 if the difference is less than or equal to such a portion of the depth of FIFO 121-1. However, if such a difference 111 is greater than such a portion of the depth of FIFO 121-1, the flag signal 113 is asserted as a logic 1. This portion may be a FIFO underflow condition, an overflow condition, a half-full condition, or some other filling level. In this example, the comparator 112 checks each of the differences 111, described above as encoded, and determines whether such a difference is less than or equal to M/2, i.e., no more than half the depth of FIFO 121-1. The comparator 112 may be programmable or may be preconfigured for comparison to M/2. In other words, the comparator 112 can be used to determine whether the difference 111 is less than or equal to the midpoint of FIFO 121-1. At 504, the comparator 112 can assert the flag signal 113 associated with each such difference 111, i.e., assert whether the difference 111 is less than or equal to the midpoint of FIFO 121-1. In this example, if the flag signal 113 is a logic 1, the filling level of FIFO 121-1 is greater than half, and if such a flag signal is a logic 0, the filling level of FIFO 121-1 is half or less. In other words, the phase of some parallel data transmitted from or within each associated lane can be shifted forward over time by using the PI 102, making the FIFO 121-1 more and more empty until the point or tap of the PI 102 is reached at which the flag signal 113 transitions from logic 0 to logic 1 for all such channels; and the phase of other such parallel data transmitted from or within each associated lane can be shifted backwards over time by using the PI 102, making the FIFO 121-1 more and more full until the point or tap of the PI 102 is reached at which the flag signal 113 transitions from logic 1 to logic 0 for all such channels. Either of these transition positions may be used; however, in order to equalize the latency between all such channels, either the logic 0 transition position or the logic 1 transition position is set for all of the channels. In other words, the flag signal 113 may be either logic 1 or logic 0 for all channels after latency equalization. Any remaining lane-to-lane skew of the FIFOs 121 can be kept within limits set by the resolution of the PI 102, for example 1/64 of the UI for some PIs. In this example, each data buffer, such as a FIFO 121 of a transmitter 190, may be adjusted to be set to the midpoint after the startup or reset sequence. However, setting positions within the data buffer other than the midpoint may be used. Further, the data buffers may be set at positions different from one another. In short, such settings may be adjusted depending on the application and/or integrated circuit device to minimize or otherwise reduce inter-lane skew. At 505, the flag signal 113 may be provided to the increment / decrement control block 114 to provide the adjustment signal 115. The adjustment signal 115 is provided to the PI 102 to adjust the read clock signal 106 and thereby change the difference 111, so as to locate the state change of the flag signal 113 and set the latency of the transmitter 190 for the corresponding data buffer among the data buffers, e.g., the corresponding FIFO 121-1.
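The feedback loop formed by operations 504 and 505, together with the search for the flag's state change, can be sketched behaviorally as follows. The FIFO depth, the assumed PI resolution, and the stand-in for the measured fill level are illustrative assumptions, not the disclosed implementation.

# Behavioral sketch (not the disclosed circuit) of operations 504 and 505 and of
# stopping at the flag's state change: assert the flag 113 from the difference 111,
# step the phase interpolator one tap at a time, and stop when the flag toggles.
M = 8                  # assumed FIFO depth
PI_STEPS_PER_UI = 64   # assumed PI resolution (1/64 of a unit interval)

def flag_113(difference_111: int, depth: int = M) -> int:
    """Logic 0 when the buffer is at or below half full, logic 1 when above (504)."""
    return 0 if difference_111 <= depth // 2 else 1

def calibrate_lane(read_fill_level) -> int:
    """Step the read clock phase until the flag toggles; return that PI position.

    `read_fill_level(pi_position)` stands in for operations 501-503 of the hardware
    loop: it returns the difference 111 observed with the read clock at that PI tap.
    """
    previous = flag_113(read_fill_level(0))
    for position in range(1, 2 * PI_STEPS_PER_UI):
        current = flag_113(read_fill_level(position))  # re-evaluate the flag (504)
        if current != previous:  # state change found: latency is set, the PI stops here
            return position
        previous = current       # otherwise keep stepping the PI (505)
    raise RuntimeError("no flag transition found within the search range")

# Toy stand-in: in this model the buffer drains by one word every three PI taps.
print("flag toggled at PI position", calibrate_lane(lambda pos: max(0, M // 2 + 1 - pos // 3)))

Running the same routine independently for each lane and leaving each PI at the returned position would, under these assumptions, leave every data buffer at the same fill level, so any residual inter-lane skew is bounded by the PI resolution rather than by the clock network.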
As indicated by the connection line 506, the operations 501 to 505 are repeated, and the position of the state change of the flag signal 113 can thereby be determined. Once the PI 102 receives an adjustment signal 115 indicating the position of the state change, as determined at 507, the position of such a state change may correspond to the domain crossing between the read time domain of the read clock signal 106 and the write time domain of the write clock signal 105 corresponding to such read clock signal 106. The PI 102 can ignore such a current adjustment signal 115, as the latency of FIFO 121-1 is effectively set at such a state change position. Therefore, the PI 102 can effectively terminate the configuration sequence 500 at 508. If the location of the state change has not yet been found, the PI 102 can continue the configuration sequence 500, as indicated generally by the connecting line 510. In general, once the data buffers of the SERDES transmitters 190 are all set to the same latency, e.g., the midpoint of their respective FIFOs 121, such latency adjustment does not need to be repeated after the startup sequence or reset of the transmitters 190. In other words, rather than having a circuit that imposes a constant latency over voltage and/or temperature fluctuations in order to equalize the latencies of the multiple channels of a transmitter SERDES, the latency of a FIFO tuned as described herein simply changes with voltage and/or temperature while remaining at least sufficiently equal to the latencies of the other FIFOs. That is, during operation, changes in temperature and/or supply voltage can significantly change the latency of the FIFOs, but the latencies of all such FIFOs remain generally equal to one another. The PI 102 of each transmitter 190 stops as soon as the flag signal 113 switches state, as determined at 507, from logic 1 to logic 0 or from logic 0 to logic 1. This transition of the logical state marks the end of the tuning phase, which may be used to equalize the latency of all transmitters 190 with very high accuracy, limited only by the resolution of the corresponding PI 102. Therefore, after the execution of the configuration sequence 500 is completed, such a sequence may be terminated so that the PI 102 is not continuously active. Running a PI continuously reduces output quality due to the jitter caused by such a continuously running PI. Each data buffer among the data buffers 121 of the corresponding transmitters 190 may be set to at least approximately the same latency value. More specifically, each data buffer among the data buffers 121 of the corresponding transmitters 190 may be configured to have the same latency value according to the granularity of the PI 102. By repeatedly asserting the flag signal 113 of the corresponding FIFO 121 of a transmitter 190, such a flag signal can effectively be toggled. In this way, the PI 102 may be controlled so that the associated flag signal 113 is toggled, i.e., sits "on the edge" of the transition position. To that end, the flag signal 113 asserted for the sequence corresponding to the data buffer can include both logic 0 and logic 1 within the first cycle of such a sequence at startup or reset. This means that two or more identical transmitters may be at different filling levels, so that these transmitters can escape the reset with different latencies.
If the FIFOs 121 are N bits wide, the plurality of transmitters 190 may exhibit a maximum inter-lane skew of N bits. With N equal to about 16, conventional SERDES have a parallel input bit width greater than N. However, protocols such as XAUI, PCI Express, and SFI-5 have inter-lane skew specifications much smaller than 16. Such protocols may require a minimal inter-lane skew, and latency may have to be equal for all lanes even though latency can vary during operation. As described above, the inter-lane skew adjustment can be performed without using a delay aligner and with the PIs turned off during data transfer operation. Further, the increment/decrement control block is a digital circuit, and apart from feeding back the flag signal 113 as described above, such circuitry may already exist in an existing transmitter.
The PIs 102 allow the phase of the transmitted data to be shifted separately for each SERDES transmitter. Optionally, bit skipping of the fast transmit divider may be used, but a PI 102 may be used for greater accuracy in reducing the remaining inter-lane skew. An actual latency measurement is not necessary; rather, a flag triggered by a specified FIFO fill level may be used. Such flag generation uses a much simpler circuit than one used to measure latency. In addition, flag generation can be performed with high resolution in terms of phase difference. Although the term "flag" has been used, overflow/underflow level signals may be used for such flag signals.
The above description was for operation of a transmitter SERDES. However, the above description may also be applied to a receiver if the receiver is locked in a reference mode and driven by the same signal. To that end, with the transmitter placed in a serial loopback so as to provide an equalized inter-lane transmitter skew to such receivers locked in the reference mode, the receivers can be equalized by performing interleaved receiver oversampling of the input data.
Since one or more of the examples described herein may be implemented in an FPGA, a detailed description of such an IC is provided below. However, it should be understood that other types of ICs may benefit from the techniques described herein.
A programmable logic device ("PLD") is a well-known type of integrated circuit that can be programmed to perform specified logic functions. One type of PLD, the field programmable gate array ("FPGA"), typically includes an array of programmable tiles. These programmable tiles can include, for example, input/output blocks ("IOBs"), configurable logic blocks ("CLBs"), dedicated random access memory blocks ("BRAMs"), multipliers, digital signal processing blocks ("DSPs"), processors, clock managers, delay lock loops ("DLLs"), and so forth. As used herein, "include" and "including" mean including without limitation.
Each programmable tile typically includes both programmable interconnect and programmable logic. The programmable interconnect typically includes a large number of interconnect lines of varying lengths interconnected by programmable interconnect points ("PIPs").
The programmable logic implements the logic of a user design using programmable elements that can include, for example, function generators, registers, arithmetic logic, and so forth.
The programmable interconnect and programmable logic are typically programmed by loading a stream of configuration data into internal configuration memory cells that define how the programmable elements are configured. The configuration data can be read from memory (e.g., from an external PROM) or written into the FPGA by an external device. The collective states of the individual memory cells then determine the function of the FPGA.
Another type of PLD is the complex programmable logic device, or CPLD. A CPLD includes two or more "function blocks" connected together and to input/output ("I/O") resources by an interconnect switch matrix. Each function block of the CPLD includes a two-level AND/OR structure similar to those used in programmable logic array ("PLA") and programmable array logic ("PAL") devices. In CPLDs, configuration data is typically stored on-chip in non-volatile memory. In some CPLDs, configuration data is stored on-chip in non-volatile memory, then downloaded to volatile memory as part of an initial configuration (programming) sequence.
For all of these programmable logic devices ("PLDs"), the functionality of the device is controlled by data bits provided to the device for that purpose. The data bits can be stored in volatile memory (e.g., static memory cells, as in FPGAs and some CPLDs), in non-volatile memory (e.g., FLASH memory, as in some CPLDs), or in any other type of memory cell.
Other PLDs are programmed by applying a processing layer, such as a metal layer, that programmably interconnects the various elements on the device. These PLDs are known as mask programmable devices. PLDs can also be implemented in other ways, e.g., using fuse or antifuse technology. The terms "PLD" and "programmable logic device" include, but are not limited to, these exemplary devices, as well as devices that are only partially programmable. For example, one type of PLD includes a combination of hard-coded transistor logic and a programmable switch fabric that programmably interconnects the hard-coded transistor logic.
As noted above, advanced FPGAs can include several different types of programmable logic blocks in the array. For example, FIG. 6 illustrates an FPGA architecture 600 that includes a large number of different programmable tiles, including multi-gigabit transceivers ("MGTs") 601, configurable logic blocks ("CLBs") 602, random access memory blocks ("BRAMs") 603, input/output blocks ("IOBs") 604, configuration and clock logic ("CONFIG/CLOCK") 605, digital signal processing blocks ("DSPs") 606, specialized input/output blocks ("I/O") 607 (e.g., configuration ports and clock ports), and other programmable logic 608 such as digital clock managers, analog-to-digital converters, and system monitoring logic. Some FPGAs also include dedicated processor blocks ("PROC") 610.
In some FPGAs, each programmable tile includes a programmable interconnect element ("INT") 611 having standardized connections to and from a corresponding interconnect element in each adjacent tile. Therefore, the programmable interconnect elements taken together implement the programmable interconnect structure for the illustrated FPGA. As shown by the example included at the top of FIG. 6, a programmable interconnect element 611 also includes the connections to and from the programmable logic element within the same tile.
For example, a CLB 602 can include a configurable logic element ("CLE") 612 that can be programmed to implement user logic, plus a single programmable interconnect element ("INT") 611. A BRAM 603 can include a BRAM logic element ("BRL") 613 in addition to one or more programmable interconnect elements. Typically, the number of interconnect elements included in a tile depends on the height of the tile. In the illustrated embodiment, a BRAM tile has the same height as five CLBs, but other numbers (e.g., four) can also be used. A DSP tile 606 can include a DSP logic element ("DSPL") 614 in addition to an appropriate number of programmable interconnect elements. An IOB 604 can include, for example, two instances of an input/output logic element ("IOL") 615 in addition to one instance of the programmable interconnect element 611. As will be clear to those of skill in the art, the actual I/O pads connected, for example, to the I/O logic element 615 typically are not confined to the area of the input/output logic element 615.
In the illustrated embodiment, a horizontal area near the center of the die (shown in FIG. 6) is used for configuration, clock, and other control logic. Vertical columns 609 extending from this horizontal area or column are used to distribute the clocks and configuration signals across the breadth of the FPGA.
Some FPGAs utilizing the architecture illustrated in FIG. 6 include additional logic blocks that disrupt the regular columnar structure making up a large part of the FPGA. The additional logic blocks can be programmable blocks and/or dedicated logic. For example, processor block 610 spans several columns of CLBs and BRAMs.
Note that FIG. 6 is intended to illustrate only an exemplary FPGA architecture. For example, the numbers of logic blocks in a column, the relative widths of the columns, the number and order of columns, the types of logic blocks included in the columns, the relative sizes of the logic blocks, and the interconnect/logic implementations included at the top of FIG. 6 are purely exemplary. For example, in an actual FPGA more than one adjacent column of CLBs is typically included wherever the CLBs appear, to facilitate the efficient implementation of user logic, but the number of adjacent CLB columns varies with the overall size of the FPGA.
While the foregoing describes exemplary devices and/or methods, other and further examples in accordance with the one or more aspects described herein may be devised without departing from the scope of the specification, which is determined by the claims that follow and equivalents thereof. Claims listing steps do not imply any order of the steps. Trademarks are the property of their respective owners. |
Systems and methods for cooling a datacenter are disclosed. In at least one embodiment, a liquid-to-air heat exchanger is associated with a fan wall of a rack and enables a datacenter cooling system to address a first cooling requirement of the rack in a first mode by air through the rack from the fan wall and to address a second cooling requirement of a fluid from at least one cold plate in the rack using the air through the liquid-to-air heat exchanger. |
CLAIMSWHAT IS CLAIMED IS:1. A datacenter cooling system, comprising: a liquid-to-air heat exchanger associated with a fan wall of a rack, the datacenter cooling system to address a first cooling requirement of the rack in a first mode by air through the rack enabled by the fan wall with the liquid-to-air heat exchanger disabled, and to address a second cooling requirement of a fluid from at least one cold plate in the rack using the air to cause cooling associated with the liquid-to-air heat exchanger that is enabled to comprise the fluid circulating therein.2. The datacenter cooling system of claim 1, further comprising: at least one processor to determine a temperature associated with a computing device in the rack, and to cause the datacenter cooling system to operate in the first mode or the second mode.3. The datacenter cooling system of claim 1, further comprising: an immersive-cooled server within the rack, the immersive-cooled server to comprise a dielectric engineered fluid surrounding the computing device and to comprise a second heat exchanger to exchange heat between the dielectric engineered fluid and the fluid.4. The datacenter cooling system of claim 1, further comprising: the cold plate associated with the computing device and having first ports for a first portion of microchannels to support a secondary coolant distinctly from a second portion of the microchannels that support the fluid.5. The datacenter cooling system of claim 1, further comprising: at least one processor to receive sensor inputs from sensors associated with the computing device, the rack, a secondary coolant, or the fluid, the at least one processor to determine the first cooling requirement and the second cooling requirement based in part on the sensor inputs.6. The datacenter cooling system of claim 5, further comprising: one or more neural networks to receive the sensor inputs and to infer the first cooling requirement and the second cooling requirement.7. The datacenter cooling system of claim 1, further comprising: at least one processor to cause at least one flow controller to enable flow of the fluid through the liquid-to-air heat exchanger and to prevent flow of the fluid to a secondary cooling loop.8. The datacenter cooling system of claim 1, further comprising: at least one processor to cause one or more fans of the fan wall to be adjusted in the first mode differently than the second mode.9. The datacenter cooling system of claim 1, further comprising: a latching mechanism to enable the association of the liquid-to-air heat exchanger with the fan wall of the rack.10. The datacenter cooling system of claim 1, further comprising: at least one processor to receive sensor inputs from sensors associated with at least one computing device, the at least one processor to determine a change in a coolant state based in part on the sensor inputs and to cause the datacenter cooling system to operate in the first mode or the second mode.11. 
A processor comprising one or more circuits, the one or more circuits to determine cooling requirements for a datacenter cooling system, the processor to cause the datacenter cooling system to operate in a first mode to address a first cooling requirement by air through the rack enabled by a fan wall with an associated liquid-to-air heat exchanger disabled, and to cause the datacenter cooling system to operate in a second mode to address a second cooling requirement by the air through the liquid-to-air heat exchanger to cool fluid circulating therein from at least one cold plate in the rack.12. The processor of claim 11, further comprising: an output to provide signals for one or more flow controllers to enable flow of the fluid through the liquid-to-air heat exchanger and to prevent flow of the fluid to a secondary cooling loop in the second mode of the datacenter cooling system.13. The processor of claim 11, further comprising: an input to receive sensor inputs from sensors associated with at least one
computing device, the rack, a secondary coolant, or the fluid, the processor to determine the first cooling requirement and the second cooling requirement based in part on the sensor inputs.14. The processor of claim 13, further comprising: one or more neural networks to receive the sensor inputs and to infer the first cooling requirement and the second cooling requirement.15. The processor of claim 11, further comprising: one or more neural networks to infer a failure of a secondary cooling loop or a primary cooling loop, the one or more circuits to cause one or more flow controllers to support the second mode.16. A processor comprising one or more circuits to cause a datacenter cooling system to operate in a first mode or a second mode, the datacenter cooling system comprising a fan wall and an associated liquid-to-air heat exchanger, the one or more circuits to train one or more neural networks to infer cooling requirements from sensor inputs of sensors associated with a rack or with a fluid from at least one cold plate, the first mode to address a first cooling requirement by air through the rack enabled by the fan wall with the liquid-to-air heat exchanger disabled and the second mode to address a second cooling requirement by the air through the liquid-to-air heat exchanger to cool the fluid circulating therein.17. The processor of claim 16, further comprising: an output to provide signals for one or more flow controllers to enable flow of the fluid through the liquid-to-air heat exchanger and to prevent flow of the fluid to a secondary cooling loop in the second mode of the datacenter cooling system.18. The processor of claim 16, further comprising: the one or more neural networks to receive the sensor inputs and to be trained to infer the first cooling requirement and the second cooling requirement as part of an analysis of prior sensor inputs and prior cooling requirements.19. The processor of claim 16, further comprising: an output to provide signals to cause one or more fans of the fan wall to be adjusted in the first mode differently than the second mode.20. The processor of claim 16, further comprising: an input to receive the sensor inputs associated with a temperature from the at least one computing device or the fluid, the one or more neural networks trained to infer a change in coolant state has occurred based in part on the temperature and on prior temperatures, the one or more circuits to cause the datacenter cooling system to operate in the first mode or the second mode.21. A processor comprising one or more circuits to cause a datacenter cooling system to operate in a first mode or a second mode, the datacenter cooling system comprising a fan wall and an associated liquid-to-air heat exchanger, the one or more circuits to comprise one or more neural networks to infer cooling requirements from sensor inputs of sensors associated with a rack or with a fluid from at least one cold plate, the first mode to address a first cooling requirement by air through the rack enabled by the fan wall and the second mode to address a second cooling requirement by the air through the liquid-to-air heat exchanger to cool the fluid circulating therein.22. The processor of claim 21, further comprising: an output to provide signals for one or more flow controllers to enable flow of the fluid through the liquid-to-air heat exchanger and to prevent flow of the fluid to a secondary cooling loop in the second mode of the datacenter cooling system.23. 
The processor of claim 21, further comprising: the one or more neural networks to receive the sensor inputs and to be trained to infer the first cooling requirement and the second cooling requirement as part of an analysis of prior sensor inputs and prior cooling requirements.24. The processor of claim 21, further comprising: an output to provide signals to cause one or more fans of the fan wall to be adjusted in the first mode differently than the second mode.25. The processor of claim 21, further comprising: an input to receive the sensor inputs associated with a temperature from the at least one computing device or the fluid, the one or more neural networks trained to infer a change in coolant state has occurred based in part on the temperature and on prior temperatures,
the one or more circuits to cause the datacenter cooling system to operate in the first mode or the second mode.26. A method for datacenter cooling system, comprising: providing a liquid-to-air heat exchanger associated with a fan wall of a rack; determining cooling requirements for at least one computing device of the rack; enabling the datacenter cooling system to address a first cooling requirement of the rack by air through the rack enabled by the fan wall with the liquid-to-air heat exchanger disabled; and enabling the datacenter cooling system to address a second cooling requirement of a fluid from at least one cold plate in the rack using the air through the liquid-to-air heat exchanger that is enabled to comprise the fluid circulating therein.27. The method of claim 26, further comprising: determining, using at least one processor, a temperature associated with a computing device in the rack; and determining the first cooling requirement or the second cooling requirement using the temperature.28. The method of claim 26, further comprising: receiving, in at least one processor, sensor inputs from sensors associated with the computing device, the rack, a secondary coolant, or the fluid; and determining, using the at least one processor, the first cooling requirement and the second cooling requirement based in part on the sensor inputs.29. The method of claim 26, further comprising: enabling, using a latching mechanism, the association of the liquid-to-air heat exchanger with the fan wall of the rack.30. The method of claim 26, further comprising: receiving, by at least one processor, sensor inputs from sensors associated with at least one computing device; determining, by the at least one processor, a change in a coolant state based in part on the sensor inputs; and
causing, based in part on the change in the coolant state, the air to flow through the rack from the fan wall with the liquid-to-air heat exchanger disabled or the air to cause cooling associated with the liquid-to-air heat exchanger that is enabled to comprise the fluid circulating therein. |
INTELLIGENT DUAL PURPOSE HEAT EXCHANGER AND FAN WALL FOR A DATACENTER COOLING SYSTEM
CROSS REFERENCE TO RELATED APPLICATIONS[0001] This application claims priority to U.S. Non-Provisional Patent Application Serial No. 17/149,171, filed January 14, 2021, and entitled “Intelligent Dual Purpose Heat Exchanger and Fan Wall for a Datacenter Cooling System,” which is hereby incorporated by reference herein in its entirety for all intents and purposes.FIELD[0002] At least one embodiment pertains to cooling systems, including systems and methods for operating those cooling systems. In at least one embodiment, such a cooling system can be utilized in a datacenter containing one or more racks or computing servers.BACKGROUND[0003] Datacenter cooling systems use fans to circulate air through server components. Certain supercomputers or other high capacity computers may use water or other cooling systems instead of air-cooling systems to draw heat away from the server components or racks of the datacenter to an area external to the datacenter. The cooling systems may include a chiller within the datacenter area, which may include an area external to the datacenter itself. Further, the area external to the datacenter may include a cooling tower or other external heat exchanger that receives heated coolant from the datacenter and that disperses the heat by forced air or other means to the environment (or an external cooling medium). The cooled coolant is recirculated back into the datacenter. The chiller and the cooling tower together form a chilling facility.BRIEF DESCRIPTION OF THE DRAWINGS[0004] Figure 1 illustrates an exemplary datacenter cooling system subject to improvements described in at least one embodiment;[0005] Figure 2 illustrates server-level features associated with an intelligent dual purpose heat exchanger and fan wall for a datacenter cooling system, according to at least one embodiment;[0006] Figure 3 illustrates rack-level features associated with an intelligent dual purpose heat
exchanger and fan wall for a datacenter cooling system, according to at least one embodiment.[0007] Figure 4 illustrates datacenter-level features associated with an intelligent dual purpose heat exchanger and fan wall for a datacenter cooling system, according to at least one embodiment;[0008] Figure 5 illustrates a method associated with a datacenter cooling system of Figure 2 - 4, according to at least one embodiment;[0009] Figure 6 illustrates a distributed system, in accordance with at least one embodiment;[0010] Figure 7 illustrates an exemplary datacenter, in accordance with at least one embodiment;[0011] Figure 8 illustrates a client-server network, in accordance with at least one embodiment;[0012] Figure 9 illustrates a computer network, in accordance with at least one embodiment;[0013] Figure 10A illustrates a networked computer system, in accordance with at least one embodiment;[0014] Figure 10B illustrates a networked computer system, in accordance with at least one embodiment;[0015] Figure 10C illustrates a networked computer system, in accordance with at least one embodiment;[0016] Figure 11 illustrates one or more components of a system environment in which services may be offered as third-party network services, in accordance with at least one embodiment;[0017] Figure 12 illustrates a cloud computing environment, in accordance with at least one embodiment;[0018] Figure 13 illustrates a set of functional abstraction layers provided by a cloud computing environment, in accordance with at least one embodiment;[0019] Figure 14 illustrates a supercomputer at a chip level, in accordance with at least one embodiment;
[0020] Figure 15 illustrates a supercomputer at a rack module level, in accordance with at least one embodiment;[0021] Figure 16 illustrates a supercomputer at a rack level, in accordance with at least one embodiment;[0022] Figure 17 illustrates a supercomputer at a whole system level, in accordance with at least one embodiment;[0023] Figure 18A illustrates inference and/or training logic, in accordance with at least one embodiment;[0024] Figure 18B illustrates inference and/or training logic, in accordance with at least one embodiment;[0025] Figure 19 illustrates training and deployment of a neural network, in accordance with at least one embodiment;[0026] Figure 20 illustrates an architecture of a system of a network, in accordance with at least one embodiment;[0027] Figure 21 illustrates an architecture of a system of a network, in accordance with at least one embodiment;[0028] Figure 22 illustrates a control plane protocol stack, in accordance with at least one embodiment;[0029] Figure 23 illustrates a user plane protocol stack, in accordance with at least one embodiment;[0030] Figure 24 illustrates components of a core network, in accordance with at least one embodiment;[0031] Figure 25 illustrates components of a system to support network function virtualization (NFV), in accordance with at least one embodiment;[0032] Figure 26 illustrates a processing system, in accordance with at least one embodiment;[0033] Figure 27 illustrates a computer system, in accordance with at least one embodiment;
[0034] Figure 28 illustrates a system, in accordance with at least one embodiment;[0035] Figure 29 illustrates an exemplary integrated circuit, in accordance with at least one embodiment;[0036] Figure 30 illustrates a computing system, according to at least one embodiment;[0037] Figure 31 illustrates an APU, in accordance with at least one embodiment;[0038] Figure 32 illustrates a CPU, in accordance with at least one embodiment;[0039] Figure 33 illustrates an exemplary accelerator integration slice, in accordance with at least one embodiment;[0040] Figures 34A-34B illustrate exemplary graphics processors, in accordance with at least one embodiment;[0041] Figure 35 A illustrates a graphics core, in accordance with at least one embodiment;[0042] Figure 35B illustrates a GPGPU, in accordance with at least one embodiment;[0043] Figure 36A illustrates a parallel processor, in accordance with at least one embodiment;[0044] Figure 36B illustrates a processing cluster, in accordance with at least one embodiment;[0045] Figure 36C illustrates a graphics multiprocessor, in accordance with at least one embodiment;[0046] Figure 37 illustrates a software stack of a programming platform, in accordance with at least one embodiment;[0047] Figure 38 illustrates a CUDA implementation of a software stack of Figure 37, in accordance with at least one embodiment;[0048] Figure 39 illustrates a ROCm implementation of a software stack of Figure 37, in accordance with at least one embodiment;[0049] Figure 40 illustrates an OpenCL implementation of a software stack of Figure 37, in accordance with at least one embodiment;[0050] Figure 41 illustrates software that is supported by a programming platform, in accordance with at least one embodiment; and
[0051] Figure 42 illustrates compiling code to execute on programming platforms of Figures 37 - 40, in accordance with at least one embodiment.DETAILED DESCRIPTION[0052] In at least one embodiment, an exemplary datacenter 100 can be utilized as illustrated in Figure 1, which has a cooling system subject to improvements described herein. In at least one embodiment, numerous specific details are set forth to provide a thorough understanding, but concepts herein may be practiced without one or more of these specific details. In at least one embodiment, datacenter cooling systems can respond to sudden high heat requirements caused by changing computing loads in present-day computing components. In at least one embodiment, as these requirements are subject to change or tend to range from a minimum to a maximum of different cooling requirements, these requirements must be met in an economical manner using an appropriate cooling system. In at least one embodiment, for moderate to high cooling requirements, a liquid cooling system may be used. In at least one embodiment, high cooling requirements are economically satisfied by localized immersion cooling. In at least one embodiment, these different cooling requirements also reflect different heat features of a datacenter. In at least one embodiment, heat generated from these components, servers, and racks is cumulatively referred to as a heat feature or a cooling requirement, because a cooling requirement must address a heat feature entirely.[0053] In at least one embodiment, a datacenter liquid cooling system is disclosed. In at least one embodiment, this datacenter cooling system addresses heat features in associated computing or datacenter devices, such as in graphics processing units (GPUs), in switches, in dual inline memory modules (DIMMs), or in central processing units (CPUs). In at least one embodiment, these components may be referred to herein as high heat density computing components. Furthermore, in at least one embodiment, an associated computing or datacenter device may be a processing card having one or more GPUs, switches, or CPUs thereon. In at least one embodiment, each of the GPUs, switches, and CPUs may be a heat generating feature of a computing device. In at least one embodiment, a GPU, a CPU, or a switch may have one or more cores, and each core may be a heat generating feature.[0054] In at least one embodiment, a liquid-to-air (L2A) heat exchanger may be associated with a fan wall in a datacenter cooling system. In at least one embodiment, a fan wall of a
datacenter cooling system may enable air cooling for a rack associated with a fan wall. In at least one embodiment, association of an L2A heat exchanger with a fan wall may address dual purposes or at least two cooling requirements. In at least one embodiment, a first cooling requirement may be for air-cooling of racks requiring air-cooling. In at least one embodiment, a second cooling requirement may be for air-cooling of a fluid or liquid that is circulated out from at least one cold plate of a computing device in a rack.[0055] In at least one embodiment, fan walls are able to provide economical cooling by air-cooling alone in one mode with an L2A heat exchanger disabled, but are also able to provide further economical cooling by cooling fluid within an L2A heat exchanger instead of requiring a secondary cooling loop, a primary cooling loop, a coolant distribution unit (CDU), and a chilling facility. In at least one embodiment, an L2A heat exchanger, in association with a fan wall, in a datacenter cooling system addresses dual cooling requirements of providing air-cooling for racks requiring air-cooling and of providing air-cooling of fluid or liquid that is circulated from at least one cold plate of a computing device. In at least one embodiment, fluid or liquid circulated out from at least one cold plate may be secondary coolant diverted from a secondary cooling loop.[0056] In at least one embodiment, an intelligent dual purpose heat exchanger and fan wall may be able to address a problem of migration from air-cooled to liquid-cooled servers, where substantial changes in a datacenter cooling system may otherwise have been required. In at least one embodiment, an intelligent dual purpose heat exchanger and fan wall addresses cooling requirements for a rack having both air-cooled and liquid-cooled (including immersive-cooled) servers, which may have been difficult to cool from a fan wall alone.[0057] In at least one embodiment, an intelligent dual purpose heat exchanger and fan wall can repurpose fan walls to perform dual roles under different cooling requirements from different types of servers within a rack. In at least one embodiment, intelligent features described herein enable such repurposing of fan walls and enable addressing different cooling requirements by sensing and responding to one or more of workloads, temperatures, humidity, and power levels within a rack or at least one computing device of a server within a rack.[0058] In at least one embodiment, an exemplary datacenter 100 can be utilized as illustrated
in Figure 1, which has a cooling system subject to improvements described herein. In at least one embodiment, a datacenter 100 may be one or more rooms 102 having racks 110 and auxiliary equipment to house one or more servers on one or more server trays. In at least one embodiment, a datacenter 100 is supported by a cooling tower 104 located external to a datacenter 100. In at least one embodiment, a cooling tower 104 dissipates heat from within a datacenter 100 by acting on a primary cooling loop 106. In at least one embodiment, a cooling distribution unit (CDU) 112 is used between a primary cooling loop 106 and a second or secondary cooling loop 108 to enable extraction of heat from a second or secondary cooling loop 108 to a primary cooling loop 106. In at least one embodiment, a secondary cooling loop 108 can access various plumbing into a server tray as required, in an aspect. In at least one embodiment, loops 106, 108 are illustrated as line drawings, but a person of ordinary skill would recognize that one or more plumbing features may be used. In at least one embodiment, flexible polyvinyl chloride (PVC) pipes may be used along with associated plumbing to move fluid along in each provided loop 106; 108. In at least one embodiment, one or more coolant pumps may be used to maintain pressure differences within coolant loops 106, 108 to enable movement of coolant according to temperature sensors in various locations, including in a room, in one or more racks 110, and/or in server boxes or server trays within one or more racks 110.[0059] In at least one embodiment, coolant in a primary cooling loop 106 and in a secondary cooling loop 108 may be at least water and an additive. In at least one embodiment, an additive may be glycol or propylene glycol. In operation, in at least one embodiment, each of a primary and a secondary cooling loops may have their own coolant. In at least one embodiment, coolant in secondary cooling loops may be proprietary to requirements of components in a server tray or in associated racks 110. In at least one embodiment, a CDU 112 is capable of sophisticated control of coolants, independently or concurrently, within provided coolant loops 106, 108. In at least one embodiment, a CDU may be adapted to control flow rate of coolant so that coolant is appropriately distributed to extract heat generated within associated racks 110. In at least one embodiment, more flexible tubing 114 is provided from a secondary cooling loop 108 to enter each server tray to provide coolant to electrical and/or computing components therein.[0060] In at least one embodiment, tubing 118 that forms part of a secondary cooling loop 108 may be referred to as room manifolds. Separately, in at least one embodiment, further tubing
116 may extend from room manifold tubing 118 and may also be part of a secondary cooling loop 108 but may be referred to as row manifolds. In at least one embodiment, coolant tubing 114 enters racks as part of a secondary cooling loop 108 but may be referred to as a rack cooling manifold within one or more racks. In at least one embodiment, row manifolds 116 extend to all racks along a row in a datacenter 100. In at least one embodiment, plumbing of a secondary cooling loop 108, including coolant manifolds 118, 116, and 114, may be improved by at least one embodiment herein. In at least one embodiment, a chiller 120 may be provided in a primary cooling loop within the datacenter 102 to support cooling before a cooling tower. In at least one embodiment, additional cooling loops that may exist in a primary cooling loop and that provide cooling external to a rack and external to a secondary cooling loop may be taken together with a primary cooling loop and are distinct from a secondary cooling loop for this disclosure.[0061] In at least one embodiment, in operation, heat generated within server trays of provided racks 110 may be transferred to a coolant exiting one or more racks 110 via flexible tubing of a row manifold 114 of a second cooling loop 108. In at least one embodiment, second coolant (in a secondary cooling loop 108) from a CDU 112, for cooling provided racks 110, moves towards one or more racks 110 via provided tubing. In at least one embodiment, second coolant from a CDU 112 passes from one side of a room manifold having tubing 118, to one side of a rack 110 via a row manifold 116, and through one side of a server tray via different tubing 114. In at least one embodiment, spent or returned second coolant (or exiting second coolant carrying heat from computing components) exits out of another side of a server tray (such as entering a left side of a rack and exiting a right side of a rack for a server tray, after looping through a server tray or through components on a server tray). In at least one embodiment, spent second coolant that exits a server tray or a rack 110 comes out of a different side (such as an exiting side) of tubing 114 and moves to a parallel, but also exiting, side of a row manifold 116. In at least one embodiment, from a row manifold 116, spent second coolant moves in a parallel portion of a room manifold 118, going in an opposite direction from incoming second coolant (which may also be renewed second coolant), and towards a CDU 112.[0062] In at least one embodiment, spent second coolant exchanges its heat with a primary coolant in a primary cooling loop 106 via a CDU 112. In at least one embodiment, spent second coolant may be renewed (such as relatively cooled when compared to a temperature at a spent
second coolant stage) and ready to be cycled back through a second cooling loop 108 to one or more computing components. In at least one embodiment, various flow and temperature control features in a CDU 112 enable control of heat exchanged from spent second coolant or of flow of second coolant in and out of a CDU 112. In at least one embodiment, a CDU 112 may also be able to control a flow of primary coolant in a primary cooling loop 106.[0063] In at least one embodiment, server-level features 200 as illustrated in Figure 2 can be associated with an intelligent dual purpose heat exchanger and fan wall for a datacenter cooling system. In at least one embodiment, server-level features 200 include a server tray or box 202. In at least one embodiment, a server tray or box 202 includes a server manifold 204 to be intermediately coupled between provided cold plates 210A-D of a server tray or box 202 and rack manifolds of a rack hosting a server tray or box 202. In at least one embodiment, a server tray or box 202 includes one or more cold plates 210A-D associated with one or more computing or datacenter components or devices 220A-D. In at least one embodiment, one or more server-level cooling loops 214A, B may be provided between a server manifold 204 and one or more cold plates 210A-D. In at least one embodiment, each server-level cooling loop 214A; B includes an inlet line 210 and an outlet line 212. In at least one embodiment, when there are series-configured cold plates 210A, B, an intermediate line 216 may be provided. In at least one embodiment, one or more cold plates 210A-D may support distinct ports and channels for a secondary coolant of a secondary cooling loop or for a different local coolant, such as a fluid circulated from a pre-loaded L2A heat exchanger associated with a fan wall. In at least one embodiment, fluid for cooling may be provided to a server manifold 204 via provided inlet and outlet lines 206A, 206B.
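The server-level layout just described (a server manifold feeding one or more cooling loops, each with an inlet line, an outlet line, and an optional intermediate line for series-configured cold plates) can be summarized in a small data-model sketch. This is an illustration only; the class and field names are assumptions and do not come from the specification.

# Illustrative data model (an assumption, not the specification's design) of
# the server-level features described above.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CoolingLoop:
    inlet_line: str                          # e.g., line 210 in the description
    outlet_line: str                         # e.g., line 212
    cold_plates: List[str]                   # cold plates served by this loop
    intermediate_line: Optional[str] = None  # present for series-configured plates

@dataclass
class ServerManifold:
    coolant_inlet: str                       # e.g., line 206A
    coolant_outlet: str                      # e.g., line 206B
    loops: List[CoolingLoop] = field(default_factory=list)

# Example mirroring the prose: one loop with two series-configured cold plates
# (210A, 210B) joined by intermediate line 216, plus a second loop.
manifold = ServerManifold(
    coolant_inlet="206A",
    coolant_outlet="206B",
    loops=[
        CoolingLoop("210", "212", ["210A", "210B"], intermediate_line="216"),
        CoolingLoop("210", "212", ["210C", "210D"]),
    ],
)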
[0064] In at least one embodiment, a server tray 202 is an immersive-cooled server tray that may be flooded by fluid. In at least one embodiment, a fluid for an immersive-cooled server tray may be a dielectric engineered fluid capable of being used in an immersive-cooled server. In at least one embodiment, a secondary coolant or a local coolant may be used to cool the engineered fluid. In at least one embodiment, a local coolant may be used to cool the engineered fluid when a primary cooling loop associated with a secondary cooling loop circulating a secondary coolant has failed or is failing. In at least one embodiment, at least one cold plate therefore has ports for a secondary cooling loop and for a local cooling loop, and can support a local cooling loop that is activated in the event of a failure in a primary cooling loop. In at least one embodiment, an intelligent dual purpose heat exchanger and fan wall may be used without a secondary cooling loop.
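A minimal control sketch, under assumed names, of the failover behavior described above: when a primary (or secondary) cooling loop has failed or is failing, flow is diverted to the local cooling loop through the L2A heat exchanger. The callbacks set_diverter and set_l2a_flow are hypothetical stand-ins for the flow controllers and are not defined by the specification.

# Hypothetical sketch (not the patent's implementation) of diverting a cold
# plate's coolant from the secondary cooling loop to the local cooling loop
# through the L2A heat exchanger when a loop failure is detected.

def select_cooling_loop(primary_loop_ok, secondary_loop_ok, set_diverter, set_l2a_flow):
    # set_diverter(target) stands in for diverter flow controllers (e.g.,
    # 310C, 312C); set_l2a_flow(enabled) stands in for the L2A heat
    # exchanger's own flow controllers.
    if primary_loop_ok and secondary_loop_ok:
        set_l2a_flow(False)                 # normal operation: use the secondary loop
        set_diverter("secondary_loop")
    else:
        set_l2a_flow(True)                  # failure: circulate via the local loop
        set_diverter("l2a_heat_exchanger")

# Example: a failing primary loop forces the local loop through the L2A HX.
# select_cooling_loop(primary_loop_ok=False, secondary_loop_ok=True,
#                     set_diverter=..., set_l2a_flow=...)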
different secondary loops, then a local cooling loop may be suited for a dual-cooling cold plate so that different channels may be used for each of a local coolant and different secondary coolants.[0068] In at least one embodiment, a dual-cooling cold plate 250 is adapted to receive a two types of fluids (such as a secondary coolant and a local coolant) and to keep two types of fluids distinct from each other via their distinct ports 252, 272; 268, 262 and their distinct paths 264, 270. In at least one embodiment, each distinct path is a fluid path. In at least one embodiment, fluid (such as local coolant) from a fluid source and a secondary coolant may be of a same or similar composition and may be restocked from a same source in a datacenter cooling system.[0069] In at least one embodiment, a dual-cooling cold plate 250 includes ports 252, 272 to receive fluid into a cold plate 250 and to pass fluid out of a cold plate 250. In at least one embodiment, a dual-cooling cold plate 250 includes ports 268, 262 to receive a secondary coolant into a cold plate 250 and to pass a secondary coolant out of a cold plate 250. In at least one embodiment, ports 252, 272 may have valve covers 254, 260 that may be directional, and pressure controlled. In at least one embodiment, valve covers may be associated with all provided ports. In at least one embodiment, provided valve covers 254, 260 are mechanical features of associated flow controllers that also have corresponding electronic features (such as at least one processor to execute instructions stored in associated memory and to control mechanical features for associated flow controllers).[0070] In at least one embodiment, each valve may be actuated by an electronic feature of an associated flow controller. In at least one embodiment, electronic and mechanical features of provided flow controllers are integrated. In at least one embodiment, electronic and mechanical features of provided flow controllers are physically distinct. In at least one embodiment, reference to flow controllers may be to one or more of provided electronic and mechanical features or to their union but is at least in reference to features enabling control of flow of coolant or fluid through each cold plate or an immersion-cooled server tray or box.[0071] In at least one embodiment, electronic features of provided flow controllers receive control signals and assert control over mechanical features. In at least one embodiment, electronic features of provided flow controllers may be actuators or other electronic parts of
other similar electromechanical features. In at least one embodiment, flow pumps may be used as flow controllers. In at least one embodiment, impellers, pistons, or bellows may be the mechanical features, and an electronic motor and circuitry form the electronic features of provided flow controllers.[0072] In at least one embodiment, circuitry of provided flow controllers may include processors, memories, switches, sensors, and other components, altogether forming electronic features of provided flow controllers. In at least one embodiment, provided ports 252, 262, 272, 268 of provided flow controllers are adapted to either allow entry or allow egress of an immersive fluid. In at least one embodiment, flow controllers 280 may be associated with fluid lines 276 (also 256, 274) that enable entry and egress of fluid (such as a local coolant) to a cold plate 210B. In at least one embodiment, other flow controllers may be similarly associated with coolant lines 210, 216, 212 (also 266, 258) to enable entry and egress of a secondary coolant to a cold plate 210B.[0073] In at least one embodiment, fluid (such as a local coolant) enters provided fluid lines 276 via dedicated fluid inlet and outlet lines 208A, B. In at least one embodiment, a server manifold 204 is adapted with channels therein (illustrated by dotted lines) to support distinct paths to the distinct fluid lines 276 (also 256, 274) and to any remaining loops 214A, B that are associated with secondary coolant inlet and outlet lines 206A, B. In at least one embodiment, there may be multiple manifolds to support fluid (a local coolant) and secondary coolant distinctly. In at least one embodiment, there may be multiple manifolds to support entry and egress, distinctly, for each of a fluid and a secondary coolant. In at least one embodiment, if a fluid is the same as or similar to a secondary coolant, then at least two different flows via a same fluid path (at least within a cold plate or a server tray) to a fluid source and to a secondary coolant row manifold (such as row manifold 350 in Figure 3) are enabled.[0074] In at least one embodiment, a first flow may be to enable fluid (such as local coolant) to flow through one or more provided ports 252, 272 and an associated path 270. In at least one embodiment, a dual-cooling cold plate 250 may have isolated plate sections that are flooded with a fluid and/or a secondary coolant, while being kept distinct from each other by gaskets or seals. In at least one embodiment, a second flow may be to enable secondary coolant to flow through provided ports 268, 262 and an associated path 264.
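To make the first-flow/second-flow distinction above concrete, here is a hedged sketch that models the dual-cooling cold plate's two isolated paths and their dedicated ports; the class, method names, and the idea of tracking active flows in software are illustrative assumptions, not features of the specification.

# Illustrative model (an assumption) of a dual-cooling cold plate with two
# isolated fluid paths: local coolant uses ports 252/272 and path 270, while
# secondary coolant uses ports 268/262 and path 264.

class DualCoolingColdPlate:
    PATHS = {
        "local":     {"inlet": "252", "outlet": "272", "path": "270"},
        "secondary": {"inlet": "268", "outlet": "262", "path": "264"},
    }

    def __init__(self):
        self.active_flows = set()

    def enable_flow(self, fluid_kind):
        # Enable the first flow ("local") or the second flow ("secondary");
        # the two paths stay distinct, so both may be active at once.
        route = self.PATHS[fluid_kind]   # raises KeyError for unknown fluids
        self.active_flows.add(fluid_kind)
        return route

    def disable_flow(self, fluid_kind):
        self.active_flows.discard(fluid_kind)

# Example: second flow during normal operation, first flow (local coolant via
# the L2A heat exchanger) added when needed; both remain isolated.
plate = DualCoolingColdPlate()
plate.enable_flow("secondary")
plate.enable_flow("local")
assert plate.active_flows == {"secondary", "local"}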
[0075] In at least one embodiment, flow controllers 278 may be associated with a fluid inlet 276 and outlet portions at a server manifold 204 instead of provided flow controllers 280 at respective cold plates. In at least one embodiment, a first flow uses only local coolant and may be enabled when a failure is determined in a secondary cooling loop or a primary cooling loop, so that a secondary coolant is unable to effectively extract heat from at least one computing device. In at least one embodiment, a failure may be that a secondary coolant is not sufficiently cooled via a CDU and so it may be unable to extract sufficient heat of at least one computing device via its associated cold plate.[0076] In at least one embodiment, rack-level features 300 as illustrated in Figure 3 can be associated with an intelligent dual purpose heat exchanger and fan wall for a datacenter cooling system. In at least one embodiment, rack-level features 300 include a rack 302 having brackets 304, 306 to hang cooling manifolds 314A, B. In at least one embodiment, while a rack 330 is separately illustrated from a rack 302, this rack 330 may be illustrative of a rear perspective view of a rack 302. In at least one embodiment, as such, brackets 334, 336 provided on rack 330 are perspective views of brackets 304, 306 provided on rack 302. In at least one embodiment, brackets 304, 306 provided for a rack are flat structures against an inner wall of a rack. In at least one embodiment, brackets 304, 306 provided for a rack extend from an inner wall of a rack. In at least one embodiment, brackets 304, 306 provided for a rack are affixed to an inner wall of a rack and have multiple mounting points facing one or more directions, including inside or towards a rear of a rack.[0077] In at least one embodiment, cooling manifolds 314A, B may be provided to pass secondary coolant or local coolant between server-level features 200 (and illustrated in Figure 3 as server trays or boxes 308) and a CDU (such as CDU 406 of Figure 4) of a secondary cooling loop or a local cooling loop of a datacenter cooling system. In at least one embodiment, different CDUs may serve different racks. In at least one embodiment, different rack cooling manifolds may be distinctly part of a secondary cooling loop and a local cooling loop.[0078] In at least one embodiment, row manifold 350 may be part of a secondary cooling loop to feed an inlet rack manifold 314A via provided lines 310A, 310. In at least one embodiment, secondary coolant proceeds via a provided line 316 to cold plate 326 to extract heat from associated computing device 324 within a server 308; and proceeds via a provided line 318 to
outlet rack manifold 314B and through provided lines 312, 312A, and back into a same or a different row manifold 350. In at least one embodiment, an intelligent dual purpose heat exchanger and fan wall can work independent of a secondary cooling loop via provided lines 312B, 310B for a local cooling loop. In at least one embodiment, one or more diverter flow controllers 310C, 312C isolate each of a secondary cooling loop and a local cooling loop.[0079] In at least one embodiment, a datacenter cooling system includes a liquid-to-air (L2A) heat exchanger 340 that is associated with a fan wall 338 of a rack 330. In at least one embodiment, a fan wall 338 is part of or incorporated within a rear door of a rack 302 (or 330). In at least one embodiment, fans 360 of a fan wall 338 may be directed to suction air from or to blow air through a rack. In at least one embodiment, an L2A heat exchanger 340 includes channels 348 to pass fluid for cooling. In at least one embodiment, a datacenter cooling system is able to address a first cooling requirement of a rack 330 (or 302), in a first mode, by air through a rack 330 that is enabled by a fan wall 338. In at least one embodiment, in a first mode, an L2A heat exchanger may be disabled. In at least one embodiment, in a second mode, a datacenter cooling system is able to address a second cooling requirement of a fluid from at least one cold plate 326 in a rack 330 (or 302) using air enabled by a fan wall 338 through an L2A heat exchanger 340 that is enabled to comprise fluid circulating therein.[0080] In at least one embodiment, fans 360 of a fan wall 338 blow air through a rack 330 in a first mode, but suction air from a rack and through an L2A heat exchanger 340 in a second mode. In at least one embodiment, an L2A heat exchanger 340 is located behind a fan wall 338. In at least one embodiment, suction air enabled by a fan wall 338 (operating in a first direction of rotation of blades therein) is used to circulate air through an L2A heat exchanger 340 to cause cooling of the fluid therein, in a second mode of operation of a datacenter cooling system. In at least one embodiment, with an L2A heat exchanger 340 disabled, a fan wall 338 may be operated in a second direction (an opposite rotation from the first direction referenced above) so that blades therein blow air through a rack in a first mode for a datacenter cooling system. In at least one embodiment, a fan wall 338 may be operated in a first direction of blades therein, with an L2A heat exchanger disabled, to suction air through a rack in a first mode for a datacenter cooling system. In at least one embodiment, suction air may be used with a fan wall 338 located between an L2A heat exchanger and a server tray of a rack so that heat is removed wholly from a rack and from an L2A heat exchanger in a second mode of operation of a datacenter cooling system. In at least one embodiment, in all modes of use of a datacenter cooling system, all air is directed away from a rack or server tray towards a hot aisle of a datacenter so that no air previously removed (and carrying heat) is directed back to a server tray or box of a rack.
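As a hedged illustration of the two operating modes just described, the sketch below collects the settings named in the prose (first mode: L2A heat exchanger disabled, fans move air through the rack; second mode: L2A heat exchanger enabled and fans suction air through it), with exhaust always directed toward the hot aisle. The interfaces fan_wall.set_direction and l2a_heat_exchanger.set_enabled are hypothetical placeholders, not part of the specification.

# Hedged sketch of the two operating modes described above; the enum and
# method names are assumptions standing in for the fan wall 338 and the L2A
# heat exchanger 340.

from enum import Enum

class Mode(Enum):
    FIRST = 1    # air-cooling of the rack only
    SECOND = 2   # air also cools fluid circulating in the L2A heat exchanger

def apply_mode(mode, fan_wall, l2a_heat_exchanger):
    if mode is Mode.FIRST:
        l2a_heat_exchanger.set_enabled(False)
        fan_wall.set_direction("blow_through_rack")
    else:
        l2a_heat_exchanger.set_enabled(True)
        fan_wall.set_direction("suction_through_hx")
    # In either mode, exhaust air is directed toward the hot aisle and is not
    # returned to the server trays.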
[0081] In at least one embodiment, a first cooling requirement and a second cooling requirement may pertain to different heat features of a datacenter. In at least one embodiment, a first cooling requirement may be associated with heat generated from one or more computing devices that may be addressed by air alone. In at least one embodiment, a second cooling requirement may be associated with heat generated from one or more computing devices but retained within a fluid, via a cold plate, for instance. In at least one embodiment, an amount of heat generated, extracted, or retained may be a temperature value that needs to be below an operating value or an operating range, or that needs to be maintained at an operating value or range.[0082] In at least one embodiment, at least one processor may be provided to determine a temperature associated with a computing device 324 in a rack 330 (or 302). In at least one embodiment, at least one processor is able to cause a datacenter cooling system to operate in a first mode or a second mode based at least in part on a temperature associated with or determined from a computing device 324. In at least one embodiment, an immersive-cooled server 352 within a rack 302 (or 330) may have its cooling requirements addressed at a same time as an air-cooled server 308 within a rack 302 (or 330). In at least one embodiment, an immersive-cooled server 352 may include a dielectric engineered fluid surrounding a computing device. In at least one embodiment, an immersive-cooled server 352 may include a second heat exchanger to exchange heat between a dielectric engineered fluid and fluid to be circulated in an L2A heat exchanger 340.
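As a worked illustration of the temperature-based decision described above, a minimal sketch follows in which at least one processor selects the first or second mode from a device temperature. The threshold value and the names are arbitrary placeholders for illustration, not values from the specification.

# Hypothetical mode-selection sketch: a processor compares a computing
# device's temperature (or a coolant egress temperature) against a threshold
# and selects the first or the second mode accordingly. The 85-degree
# threshold is an assumed placeholder.

def choose_mode(device_temp_c, threshold_c=85.0):
    # Return "first" when air-cooling alone suffices; otherwise "second" so
    # the L2A heat exchanger is enabled and fluid is circulated through the
    # cold plate.
    return "first" if device_temp_c <= threshold_c else "second"

# Example usage with readings taken at successive time intervals.
for reading in (72.0, 83.5, 91.2):
    print(reading, "->", choose_mode(reading))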
[0083] In at least one embodiment, a cold plate 326 may be associated with a computing device 324. In at least one embodiment, a cold plate may have first ports for a first portion of microchannels to support a secondary coolant distinctly from a second portion of microchannels that support a fluid of an L2A heat exchanger. In at least one embodiment, at least one processor may be adapted to receive sensor inputs from sensors associated with a computing device 324. In at least one embodiment, sensors may also be associated with one or more of a rack, a secondary coolant, or a fluid. In at least one embodiment, at least one processor may be adapted to determine a first cooling requirement and a second cooling requirement based in part on sensor inputs. In at least one embodiment, sensor inputs may be temperatures sensed at one or more time intervals from sensors as described.[0084] In at least one embodiment, one or more neural networks are adapted to receive sensor inputs from provided sensors, and are adapted to infer a first cooling requirement and a second cooling requirement for a datacenter cooling system. In at least one embodiment, at least one processor may cause at least one flow controller to enable flow of fluid through an L2A heat exchanger and to prevent flow of fluid to a secondary cooling loop. In at least one embodiment, one or more diverter flow controllers 310C, 312C may be enabled to cause such flow and prevention of flow of fluid. In at least one embodiment, provided lines 310B, 312B may be provided to fluidly couple with inlet line 342 and outlet line 344 of an L2A heat exchanger 340. In at least one embodiment, further flow controllers 346 on an L2A heat exchanger 340 may be enabled to prevent or cause flow of fluid through an L2A heat exchanger 340.[0085] In at least one embodiment, at least one processor may cause one or more fans 360 of a fan wall 338 to be adjusted in a first mode differently than a second mode. In at least one embodiment, a latching mechanism 356 may be provided to enable association of an L2A heat exchanger 340 with a fan wall 338 of a rack 330 (or 302). In at least one embodiment, electrical coupling may be provided to power at least one component of a flow controller 346 or of a fan wall 338. In at least one embodiment, at least one processor may be adapted to receive sensor inputs from sensors associated with at least one computing device, such as computing device 324. In at least one embodiment, at least one processor may determine a change in a coolant state based in part on sensor inputs. In at least one embodiment, a coolant state may relate to a temperature of coolant, a flow rate, a flow volume, or a status (such as flowing or not).[0086] In at least one embodiment, a coolant state may be sensed from an egress or an entry to one or more of a cold plate, a rack, or a cooling manifold. In at least one embodiment, at least one processor can cause a datacenter cooling system to operate in a first mode or a second mode based in part on a change determined for a coolant state. In at least one embodiment, when it is determined that coolant temperatures at an egress from a cold plate are not beyond a threshold (implying that not much heat is being generated by an associated computing device), a first mode
for a fan wall may be enabled and coolant flow may be stopped by disabling an L2A heat exchanger. In at least one embodiment, this enables economical use of a datacenter cooling system. In at least one embodiment, when air temperature at a hot aisle of a rack is determined to be beyond a threshold (implying that more heat is being generated by an associated computing device than can be handled by air alone), a second mode for a datacenter cooling system may be enabled. In at least one embodiment, a second mode engages or enables an L2A heat exchanger to circulate coolant into a cold plate associated with a computing device to provide further cooling beyond air cooling alone. In at least one embodiment, air cools coolant of an L2A heat exchanger in a second mode of a datacenter cooling system.[0087] In at least one embodiment, datacenter-level features 400 as illustrated in Figure 4 can be associated with an intelligent dual purpose heat exchanger and fan wall for a datacenter cooling system. In at least one embodiment, datacenter-level features 400, within a datacenter 402, may include racks 404 for hosting one or more server trays or boxes; one or more CDUs 406 for exchanging heat between a secondary cooling loop 412 and a primary cooling loop 422; one or more row manifolds 410 for distributing coolant from a CDU 406; and associated various flow controllers 424, and inlet and outlet lines 412, 414, 416, 418.[0088] In at least one embodiment, an intelligent dual purpose heat exchanger and fan wall are provided on rear doors of each of provided racks 404 in a datacenter 402. In at least one embodiment, an aisle behind racks 404 is a hot aisle for discharging heat from at least one computing device in at least one rack during a first and a second mode of operation of a datacenter cooling system. In at least one embodiment, different row manifolds may be associated with different racks. In at least one embodiment, different coolant may be a chemical match or mismatch with respect to a local coolant. In at least one embodiment, different fluid sources are provided as redundant features to different CDUs depending on chemistries of different secondary coolant used with each of different provided CDUs. In at least one embodiment, there need not be a secondary cooling loop and CDU for one or more racks 404. In at least one embodiment, these racks not associated with a secondary cooling loop may be sufficiently addressed by an intelligent dual purpose heat exchanger and fan wall.
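By way of a non-limiting illustration only, the mode selection described above (air cooling alone in a first mode, and air cooling acting through an L2A heat exchanger in a second mode) may be sketched as control logic similar to the following Python example. The class names, function names, and threshold values here are hypothetical and are provided solely for illustration; an actual datacenter cooling system may implement this logic differently.

```python
# Non-limiting sketch of two-mode selection for an intelligent dual purpose
# heat exchanger and fan wall; all names and threshold values are hypothetical.
from dataclasses import dataclass

COOLANT_EGRESS_THRESHOLD_C = 45.0  # example threshold for cold plate egress coolant
HOT_AISLE_AIR_THRESHOLD_C = 35.0   # example threshold for hot aisle air temperature


@dataclass
class FanWall:
    speed_fraction: float = 0.0    # 0.0 (off) to 1.0 (maximum airflow)


@dataclass
class L2AFlowController:
    is_open: bool = False          # whether fluid circulates through an L2A heat exchanger


def select_mode(coolant_egress_temp_c: float, hot_aisle_air_temp_c: float) -> int:
    """Return 1 (air only, L2A disabled) or 2 (fan wall acting through an L2A)."""
    if hot_aisle_air_temp_c > HOT_AISLE_AIR_THRESHOLD_C:
        return 2                   # more heat than air alone can handle
    if coolant_egress_temp_c <= COOLANT_EGRESS_THRESHOLD_C:
        return 1                   # little heat retained in coolant; air suffices
    return 2


def apply_mode(mode: int, fan_wall: FanWall, l2a: L2AFlowController) -> None:
    """Adjust a fan wall and an L2A flow controller for a selected mode."""
    if mode == 1:
        l2a.is_open = False        # stop coolant flow; L2A heat exchanger disabled
        fan_wall.speed_fraction = 0.5
    else:
        l2a.is_open = True         # circulate fluid from cold plates through the L2A
        fan_wall.speed_fraction = 1.0


if __name__ == "__main__":
    fan_wall, l2a = FanWall(), L2AFlowController()
    apply_mode(select_mode(48.0, 37.0), fan_wall, l2a)
    print(fan_wall, l2a)
```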
[0089] In at least one embodiment, a rack 404 may be associated with at least one processor for operating an intelligent dual purpose heat exchanger and fan wall thereon. In at least one embodiment, a processor may include one or more circuits. In at least one embodiment, one or more circuits of a processor may be adapted to determine cooling requirements for a datacenter cooling system. In at least one embodiment, a processor may cause a first mode of operation for a datacenter cooling system to address a first cooling requirement by air enabled through a rack 404 from a fan wall, while an associated L2A heat exchanger is disabled. In at least one embodiment, a processor may cause a second mode of operation for a datacenter cooling system to address a second cooling requirement by air enabled by a fan wall acting through an L2A heat exchanger. In at least one embodiment, air enabled by a fan wall acts to cool fluid circulating within an L2A heat exchanger from at least one cold plate in a rack 404.[0090] In at least one embodiment, a fan wall may enable air by suction or by blowing in a vicinity of an L2A heat exchanger. In at least one embodiment, air is blown through an L2A heat exchanger. In at least one embodiment, air flows through an L2A heat exchanger and through a fan wall, by suction caused by a fan wall. In at least one embodiment, any air through an L2A heat exchanger is directed away from racks or server trays so that hot air does not return to a rack or a server tray.[0091] In at least one embodiment, a processor used with an intelligent dual purpose heat exchanger and fan wall includes an output to provide signals for one or more flow controllers. In at least one embodiment, one or more flow controllers may enable flow of fluid through an L2A heat exchanger and may prevent flow of fluid to a secondary cooling loop in a second mode of a datacenter cooling system.[0092] In at least one embodiment, a processor used with an intelligent dual purpose heat exchanger and fan wall includes an input to receive sensor inputs from sensors associated with at least one computing device of a rack 404. In at least one embodiment, sensors may also or separately be associated with a rack, a secondary coolant, or fluid from an associated cold plate of a rack. In at least one embodiment, a processor may determine a first cooling requirement and a second cooling requirement based in part on sensor inputs from these associated sensors.[0093] In at least one embodiment, one or more neural networks may be provided within at least one processor to receive sensor inputs and to infer a first cooling requirement and a second cooling requirement from computing devices or aspects of a datacenter cooling system. In at
least one embodiment, one or more neural networks may infer a failure of a secondary cooling loop or a primary cooling loop. In at least one embodiment, based in part on sensor inputs associated with flow rates, flow volumes, temperature, humidity, and leaks, one or more circuits of a processor may cause one or more flow controllers to support a second mode.[0094] In at least one embodiment, a processor used with a rack 404 and an intelligent dual purpose heat exchanger and fan wall includes one or more circuits. In at least one embodiment, one or more circuits of a processor may cause a first mode or a second mode of different modes of operation for a datacenter cooling system. In at least one embodiment, causing a first mode or a second mode is in reference to causing a datacenter cooling system to operate in a first mode or a second mode. In at least one embodiment, a datacenter cooling system includes a fan wall that may act on an associated L2A heat exchanger. In at least one embodiment, one or more circuits of a processor may be provided to train one or more neural networks to infer cooling requirements from sensor inputs of sensors associated with a rack or with a fluid from at least one cold plate of a rack. In at least one embodiment, a processor may cause a first mode to address a first cooling requirement by air through a rack from a fan wall. In at least one embodiment, an L2A heat exchanger may be disabled in a first mode. In at least one embodiment, a processor may cause a second mode to address a second cooling requirement by air enabled by a fan wall acting on an L2A heat exchanger to cool fluid circulating therein.[0095] In at least one embodiment, an output of a processor used with an intelligent dual purpose heat exchanger and fan wall may be adapted to provide signals for one or more flow controllers. In at least one embodiment, this enables flow of fluid through an L2A heat exchanger and enables prevention of flow of fluid to a secondary cooling loop in a second mode of a datacenter cooling system. In at least one embodiment, a secondary cooling loop is not used with an intelligent dual purpose heat exchanger and fan wall; however, when used, if chemistry matches between a secondary coolant and a local coolant to be used with an L2A heat exchanger, then it is possible to use at least one diversion flow controller to divert secondary coolant for use with an intelligent dual purpose heat exchanger and fan wall.[0096] In at least one embodiment, one or more neural networks of a processor may be adapted to receive sensor inputs. In at least one embodiment, one or more neural networks may be trained to infer a first cooling requirement and a second cooling requirement as part of an
analysis of prior sensor inputs and prior cooling requirements. In at least one embodiment, one or more neural networks may be trained with correlated data of prior sensor inputs and prior cooling requirements so that new sensor inputs within thresholds of prior sensor inputs may be correlated to prior cooling requirements or variations thereof.[0097] In at least one embodiment, an output of a processor used with an intelligent dual purpose heat exchanger and fan wall may be adapted to provide signals to cause one or more fans of a fan wall to be adjusted in a first mode so that it operates differently than in a second mode. In at least one embodiment, air cooling may be reduced or a fan wall direction may be reversed to cause suction of air away from a rack instead of blowing air into a rack. In at least one embodiment, either action of a fan wall is to move extracted heat out and away from a fluid circulating in an associated L2A heat exchanger, and out and away from a rack or a server tray.[0098] In at least one embodiment, an input of a processor used with an intelligent dual purpose heat exchanger and fan wall is adapted to receive sensor inputs associated with a temperature from at least one computing device or from fluid exiting a cold plate. In at least one embodiment, one or more neural networks of a processor may be trained to infer that a change in coolant state has occurred based in part on a temperature and on prior temperatures of at least one computing device or fluid. In at least one embodiment, one or more circuits of a processor may be adapted to cause a first mode or a second mode of operation for a datacenter cooling system.[0099] In at least one embodiment, a processor to be used with an intelligent dual purpose heat exchanger and fan wall includes one or more circuits to cause a first mode or a second mode of operation for a datacenter cooling system. In at least one embodiment, one or more circuits of a processor are to include one or more neural networks to infer cooling requirements from sensor inputs of sensors associated with a rack 404 or with fluid from at least one cold plate. In at least one embodiment, a processor may be adapted to cause a first mode to address a first cooling requirement by air through a rack 404 enabled by a fan wall. In at least one embodiment, a processor may be adapted to also cause a second mode to address a second cooling requirement by air enabled by a fan wall acting on an L2A heat exchanger to cool fluid circulating therein.
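As a non-limiting sketch of training one or more neural networks on correlated data of prior sensor inputs and prior cooling requirements, a small regression model may be fit to prior data and then queried for a new sensor input, as below. The feature layout, training samples, and model size are hypothetical, and availability of scikit-learn and NumPy is assumed only for illustration.

```python
# Non-limiting sketch: training a small neural network to infer a cooling
# requirement (e.g., a required coolant flow fraction) from prior sensor inputs.
# Data, feature layout, and model size are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Prior sensor inputs: [device temperature C, coolant egress temperature C,
# hot aisle air temperature C]; prior cooling requirement in [0, 1].
prior_inputs = np.array([
    [55.0, 35.0, 28.0],
    [70.0, 48.0, 36.0],
    [62.0, 41.0, 31.0],
    [80.0, 55.0, 40.0],
])
prior_requirements = np.array([0.2, 0.7, 0.4, 0.9])

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(prior_inputs, prior_requirements)

# Inference for a new sensor input within thresholds of prior sensor inputs.
new_input = np.array([[65.0, 44.0, 33.0]])
inferred_requirement = float(model.predict(new_input)[0])
print(f"Inferred cooling requirement: {inferred_requirement:.2f}")
```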
[0100] In at least one embodiment, each of at least one processor described throughout Figures 1 - 4 has inference and/or training logic 1815 that may include, without limitation, code and/or data storage 1801 to store forward and/or output weight and/or input/output data, and/or other parameters to configure neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, training logic 1815 may include, or be coupled to, code and/or data storage 1801 to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)). In at least one embodiment, code, such as graph code, loads weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds. In at least one embodiment, code and/or data storage 1801 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, any portion of code and/or data storage 1801 may be included with other on-chip or off-chip data storage, including a processor’s L1, L2, or L3 cache or system memory.[0101] In at least one embodiment, an inference and/or training logic 1815 of at least one processor may be part of a building management system (BMS) for controlling flow controllers at one or more of a server-level, a rack-level, and a row-level. In at least one embodiment, a determination to engage a flow controller associated with a local cooling loop, an intelligent dual purpose heat exchanger and fan wall, a CDU, cold plates, or other cooling manifolds may be provided to one or more neural networks of an inference and/or training logic 1815 to cause one or more neural networks to infer which flow controllers to gracefully engage or disengage for coolant requirements for one or more cold plates, servers, or racks from either an L2A heat exchanger together with a fan wall or a fan wall alone. In at least one embodiment, increase or decrease of fluid flow through an L2A heat exchanger may be enabled by flow controllers that are controlled by an inference and/or training logic 1815 of at least one processor associated with control logic that is associated with a local cooling loop.[0102] In at least one embodiment, at least one processor may be associated with a local cooling loop and with a secondary cooling loop. In at least one
embodiment, at least one processor may be associated with an intelligent dual purpose heat exchanger and fan wall. In at least one embodiment, at least one processor includes control logic, such as inference and/or training logic 1815, and is associated with at least one flow controller. In at least one embodiment, at least one flow controller may have its own respective processor or microcontroller. In at least one embodiment, a processor or a microcontroller performs instructions sent to it from a control logic. In at least one embodiment, a control logic may be to determine a change in a coolant state, such as a failure in a secondary cooling loop (such as a CDU and cooling manifolds) or a primary cooling loop (such as a chilling facility, cooling manifolds, and also an associated CDU). In at least one embodiment, a failure may also occur with a cooling manifold requiring replacement. In at least one embodiment, a control logic may cause at least one flow controller to provide a coolant response, such as by engaging a local cooling loop having a fluid source to provide local coolant or secondary coolant for at least one computing device.[0103] In at least one embodiment, a control logic may cause a first signal to at least one flow controller to enable a stopping of a secondary coolant from a secondary cooling loop as part of a coolant response. In at least one embodiment, a control logic may cause a second signal to at least one flow controller to enable a starting of a local coolant from a local cooling loop as part of a coolant response. In at least one embodiment, a control logic may receive sensor inputs from sensors associated with secondary coolant of a CDU, local coolant, and/or at least one computing device. In at least one embodiment, at least one processor can determine a change in a coolant state based in part on sensor inputs. In at least one embodiment, one or more neural networks of an inference and/or training logic 1815 may be adapted to receive sensor inputs and to infer a change in a coolant state.[0104] In at least one embodiment, at least one processor may include one or more circuits for one or more neural networks, such as an inference and/or training logic 1815. In at least one embodiment, an inference and/or training logic 1815 may be adapted to infer, from sensor inputs associated with at least one server or at least one rack, a change in a coolant state, such as coolant from a CDU being ineffective or retaining too much heat upon entry into a rack. In at least one embodiment, one or more circuits may be adapted to cause at least one flow controller to provide a coolant response from a local cooling loop.
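A non-limiting Python sketch of the coolant response described above, in which a control logic causes a first signal to stop a secondary coolant and a second signal to start a local coolant upon a detected change in a coolant state, is provided below; the class and function names are hypothetical and do not represent a required implementation.

```python
# Non-limiting sketch of a coolant response: on a detected change in coolant
# state (e.g., a failed secondary cooling loop), a first signal stops secondary
# coolant and a second signal starts local coolant. Names are hypothetical.
from dataclasses import dataclass


@dataclass
class FlowController:
    name: str
    is_open: bool = False

    def signal(self, open_valve: bool) -> None:
        """Apply a signal from control logic to open or close this controller."""
        self.is_open = open_valve
        print(f"{self.name}: {'open' if open_valve else 'closed'}")


def coolant_response(secondary_loop_ok: bool,
                     secondary_controller: FlowController,
                     local_controller: FlowController) -> None:
    """Engage a local cooling loop when a secondary cooling loop has failed."""
    if not secondary_loop_ok:
        secondary_controller.signal(open_valve=False)  # first signal: stop secondary coolant
        local_controller.signal(open_valve=True)       # second signal: start local coolant
    else:
        secondary_controller.signal(open_valve=True)
        local_controller.signal(open_valve=False)


if __name__ == "__main__":
    coolant_response(False,
                     FlowController("secondary loop controller"),
                     FlowController("local loop controller"))
```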
[0105] In at least one embodiment, control logic associated with one or more circuits may cause a first signal (along with any associated signals) to at least one flow controller to enable a coolant response, either from a secondary cooling loop or from a local cooling loop having an intelligent dual purpose heat exchanger and fan wall. In at least one embodiment, a second signal may be provided to at least one flow controller and may also enable only air cooling of a rack without liquid cooling. In at least one embodiment, a distributed or an integrated architecture is enabled by one or more circuits of at least one processor. In at least one embodiment, a distributed architecture may be supported by distinctly located circuits of one or more circuits.[0106] In at least one embodiment, one or more neural networks of an inference and/or training logic 1815 may be adapted to infer an increase or a decrease in cooling requirements of at least one computing component of at least one server. In at least one embodiment, one or more circuits may be adapted to cause a cooling loop to economically address decreased cooling requirements or to supplement increased cooling requirements for at least one computing component. In at least one embodiment, enabling a cooling loop represents a coolant response from a local cooling loop to preempt a respective increase or a respective decrease in cooling requirements of at least one computing component of at least one server based in part on workload sent to at least one computing component.[0107] In at least one embodiment, at least one processor includes one or more circuits, such as an inference and/or training logic 1815, to train one or more neural networks to make inferences from provided data. In at least one embodiment, inference and/or training logic 1815 may infer, from sensor inputs associated with at least one server or at least one rack, a change in a coolant state. In at least one embodiment, an inference may be used to enable one or more circuits to cause at least one flow controller of a local cooling loop to provide a coolant response. In at least one embodiment, a coolant response may be to cause a local cooling loop to absorb heat into a local coolant of a cooling manifold, and to exchange absorbed heat to an environment, instead of a secondary cooling loop having a CDU.[0108] In at least one embodiment, one or more circuits may be adapted to train one or more neural networks to infer an increase or a decrease in cooling requirements of at least one computing component of at least one server. In at least one embodiment, one or more circuits may be adapted to train one or more neural networks to infer that an increase or a decrease in
flow output from a secondary cooling loop is associated with an improper flow of secondary coolant because of a failed CDU or a respective increase or a respective decrease in power requirements of at least one computing component of at least one server.[0109] In at least one embodiment, one or more neural networks may be trained to make inferences based on prior associated heat features or cooling requirements from computing devices, servers, or racks, and on cooling capacity or capabilities indicated by a fluid source of a local cooling loop, such as by an L2A heat exchanger having a specific cooling capability that is above an air cooling capability. In at least one embodiment, prior cooling requirements satisfied by a local cooling loop may be used to cause one or more neural networks to make similar inferences for future similar cooling requirements (in consideration of small variations therefrom) to be satisfied by adjusting one or more flow controllers to engage a local cooling loop.[0110] Figure 5 illustrates a method 500 associated with a datacenter cooling system of Figures 2 - 4, according to at least one embodiment. In at least one embodiment, a method 500 includes a step 502 for providing a liquid-to-air heat exchanger associated with a fan wall of a rack. In at least one embodiment, method 500 includes a further step 504 for enabling a datacenter cooling system to address cooling requirements of a rack using different modes. In at least one embodiment, in step 506, a verification may be performed that at least one cooling requirement is determined for at least one computing device of a rack. In at least one embodiment, step 508 enables a datacenter cooling system to address a first cooling requirement of a rack in a first mode by air through a rack enabled by a fan wall, while an L2A heat exchanger is disabled. In at least one embodiment, step 508 may be performed if a verification in step 506 is confirmed. In at least one embodiment, step 504 may be otherwise performed when a verification in step 506 is not confirmed. In at least one embodiment, this enables a datacenter cooling system to continue to provide a first mode of cooling, such as air cooling, until a second cooling requirement is determined, where a second cooling requirement indicates that more cooling is required or that a first cooling requirement has been exceeded.[0111] In at least one embodiment, step 510 may also be performed in method 500 for enabling a datacenter cooling system to address a second cooling requirement of a fluid from at least one cold plate in a rack. In at least one embodiment, this second cooling requirement may use air cooling that is enabled by a fan wall and acting through an L2A heat exchanger, where an
L2A heat exchanger is enabled to include fluid circulating therein, from a cold plate.[0112] In at least one embodiment, method 500 may include a further step or a sub-step for determining, using at least one processor, a temperature associated with a computing device in a rack. In at least one embodiment, such a determination may lead to a further step or sub-step for causing a first mode or a second mode for a datacenter cooling system. In at least one embodiment, such a determination may be part of steps 506-510.[0113] In at least one embodiment, method 500 may include a further step or a sub-step for receiving, in at least one processor, sensor inputs from sensors associated with a computing device, a rack, a secondary coolant, or fluid from a cold plate. In at least one embodiment, such inputs may be used in a further step or sub-step for determining, using at least one processor, a first cooling requirement and a second cooling requirement for a datacenter cooling system.[0114] In at least one embodiment, method 500 may include a further step or a sub-step for enabling, using a latching mechanism, an association of an L2A heat exchanger with a fan wall of a rack. In at least one embodiment, such a feature may be performed between steps 502 and 504. In at least one embodiment, method 500 may include a further step or a sub-step for receiving, by at least one processor, sensor inputs from sensors associated with at least one computing device. In at least one embodiment, method 500 may include a further step or a substep for determining, by at least one processor, a change in a coolant state based in part on sensor inputs, such as from sensors as described. In at least one embodiment, method 500 may include a further step of causing a first mode or a second mode for a datacenter cooling system based in part on such a determination for a change in coolant state being made.Servers and Data Centers[0115] The following figures set forth, without limitation, exemplary network server and datacenter based systems that can be used to implement at least one embodiment.[0116] Figure 6 illustrates a distributed system 600, in accordance with at least one embodiment. In at least one embodiment, distributed system 600 includes one or more client computing devices 602, 604, 606, and 608, which are configured to execute and operate a client application such as a web browser, proprietary client, and/or variations thereof over one or more network(s) 610. In at least one embodiment, server 612 may be communicatively coupled with
remote client computing devices 602, 604, 606, and 608 via network 610.[0117] In at least one embodiment, server 612 may be adapted to run one or more services or software applications such as services and applications that may manage session activity of single sign-on (SSO) access across multiple datacenters. In at least one embodiment, server 612 may also provide other services or software applications that can include non-virtual and virtual environments. In at least one embodiment, these services may be offered as web-based or cloud services or under a Software as a Service (SaaS) model to users of client computing devices 602, 604, 606, and/or 608. In at least one embodiment, users operating client computing devices 602, 604, 606, and/or 608 may in turn utilize one or more client applications to interact with server 612 to utilize services provided by these components.[0118] In at least one embodiment, software components 618, 620 and 622 of system 600 are implemented on server 612. In at least one embodiment, one or more components of system 600 and/or services provided by these components may also be implemented by one or more of client computing devices 602, 604, 606, and/or 608. In at least one embodiment, users operating client computing devices may then utilize one or more client applications to use services provided by these components. In at least one embodiment, these components may be implemented in hardware, firmware, software, or combinations thereof. It should be appreciated that various different system configurations are possible, which may be different from distributed system 600. The embodiment shown in Figure 6 is thus at least one embodiment of a distributed system for implementing an embodiment and is not intended to be limiting.[0119] In at least one embodiment, client computing devices 602, 604, 606, and/or 608 may include various types of computing systems. In at least one embodiment, a client computing device may include portable handheld devices (e.g., an iPhone®, cellular telephone, an iPad®, computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile®, and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 10, Palm OS, and/or variations thereof. In at least one embodiment, devices may support various applications such as various Internet-related apps, e-mail, short message service (SMS) applications, and may use various other communication protocols. In at least one embodiment, client computing devices may also include general purpose personal computers including, by way of at least one
embodiment, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems.[0120] In at least one embodiment, client computing devices can be workstation computers running any of a variety of commercially-available UNIX® or UNIX-like operating systems, including without limitation a variety of GNU/Linux operating systems, such as Google Chrome OS. In at least one embodiment, client computing devices may also include electronic devices such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console with or without a Kinect® gesture input device), and/or a personal messaging device, capable of communicating over network(s) 610. Although distributed system 600 in Figure 6 is shown with four client computing devices, any number of client computing devices may be supported. Other devices, such as devices with sensors, etc., may interact with server 612.[0121] In at least one embodiment, network(s) 610 in distributed system 600 may be any type of network that can support data communications using any of a variety of available protocols, including without limitation TCP/IP (transmission control protocol/Internet protocol), SNA (systems network architecture), IPX (Internet packet exchange), AppleTalk, and/or variations thereof. In at least one embodiment, network(s) 610 can be a local area network (LAN), networks based on Ethernet, Token-Ring, a wide-area network, Internet, a virtual network, a virtual private network (VPN), an intranet, an extranet, a public switched telephone network (PSTN), an infra-red network, a wireless network (e.g., a network operating under any of the Institute of Electrical and Electronics Engineers (IEEE) 802.11 suite of protocols, Bluetooth®, and/or any other wireless protocol), and/or any combination of these and/or other networks.[0122] In at least one embodiment, server 612 may be composed of one or more general purpose computers, specialized server computers (including, by way of at least one embodiment, PC (personal computer) servers, UNIX® servers, mid-range servers, mainframe computers, rack-mounted servers, etc.), server farms, server clusters, or any other appropriate arrangement and/or combination. In at least one embodiment, server 612 can include one or more virtual machines running virtual operating systems, or other computing architectures involving virtualization. In at least one embodiment, one or more flexible pools of logical storage devices can be virtualized to maintain virtual storage devices for a server. In at least one embodiment, virtual networks can
be controlled by server 612 using software defined networking. In at least one embodiment, server 612 may be adapted to run one or more services or software applications.[0123] In at least one embodiment, server 612 may run any operating system, as well as any commercially available server operating system. In at least one embodiment, server 612 may also run any of a variety of additional server applications and/or mid-tier applications, including HTTP (hypertext transport protocol) servers, FTP (file transfer protocol) servers, CGI (common gateway interface) servers, JAVA® servers, database servers, and/or variations thereof. In at least one embodiment, exemplary database servers include without limitation those commercially available from Oracle, Microsoft, Sybase, IBM (International Business Machines), and/or variations thereof.[0124] In at least one embodiment, server 612 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client computing devices 602, 604, 606, and 608. In at least one embodiment, data feeds and/or event updates may include, but are not limited to, Twitter® feeds, Facebook® updates or real-time updates received from one or more third party information sources and continuous data streams, which may include real-time events related to sensor data applications, financial tickers, network performance measuring tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and/or variations thereof. In at least one embodiment, server 612 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client computing devices 602, 604, 606, and 608.[0125] In at least one embodiment, distributed system 600 may also include one or more databases 614 and 616. In at least one embodiment, databases may provide a mechanism for storing information such as user interactions information, usage patterns information, adaptation rules information, and other information. In at least one embodiment, databases 614 and 616 may reside in a variety of locations. In at least one embodiment, one or more of databases 614 and 616 may reside on a non-transitory storage medium local to (and/or resident in) server 612. In at least one embodiment, databases 614 and 616 may be remote from server 612 and in communication with server 612 via a network-based or dedicated connection. In at least one embodiment, databases 614 and 616 may reside in a storage-area network (SAN). In at least one
embodiment, any necessary files for performing functions attributed to server 612 may be stored locally on server 612 and/or remotely, as appropriate. In at least one embodiment, databases 614 and 616 may include relational databases, such as databases that are adapted to store, update, and retrieve data in response to SQL-formatted commands.[0126] Figure 7 illustrates an exemplary datacenter 700, in accordance with at least one embodiment. In at least one embodiment, datacenter 700 includes, without limitation, a datacenter infrastructure layer 710, a framework layer 720, a software layer 730 and an application layer 740.[0127] In at least one embodiment, as shown in Figure 7, datacenter infrastructure layer 710 may include a resource orchestrator 712, grouped computing resources 714, and node computing resources (“node C.R.s”) 716(1)-716(N), where “N” represents any whole, positive integer. In at least one embodiment, node C.R.s 716(1)-716(N) may include, but are not limited to, any number of central processing units (“CPUs”) or other processors (including accelerators, field programmable gate arrays (“FPGAs”), graphics processors, etc.), memory devices (e.g., dynamic read-only memory), storage devices (e.g., solid state or disk drives), network input/output (“NW I/O”) devices, network switches, virtual machines (“VMs”), power modules, and cooling modules, etc. In at least one embodiment, one or more node C.R.s from among node C.R.s 716(1)-716(N) may be a server having one or more of above-mentioned computing resources.[0128] In at least one embodiment, grouped computing resources 714 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in datacenters at various geographical locations (also not shown). Separate groupings of node C.R.s within grouped computing resources 714 may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination.[0129] In at least one embodiment, resource orchestrator 712 may configure or otherwise control one or more node C.R.s 716(1)-716(N) and/or grouped computing resources 714. In at
least one embodiment, resource orchestrator 712 may include a software design infrastructure (“SDI”) management entity for datacenter 700. In at least one embodiment, resource orchestrator 712 may include hardware, software or some combination thereof.[0130] In at least one embodiment, as shown in Figure 7, framework layer 720 includes, without limitation, a job scheduler 732, a configuration manager 734, a resource manager 736 and a distributed file system 738. In at least one embodiment, framework layer 720 may include a framework to support software 752 of software layer 730 and/or one or more application(s) 742 of application layer 740. In at least one embodiment, software 752 or application(s) 742 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud and Microsoft Azure. In at least one embodiment, framework layer 720 may be, but is not limited to, a type of free and open-source software web application framework such as Apache Spark™ (hereinafter “Spark”) that may utilize distributed file system 738 for large-scale data processing (e.g., “big data”). In at least one embodiment, job scheduler 732 may include a Spark driver to facilitate scheduling of workloads supported by various layers of datacenter 700. In at least one embodiment, configuration manager 734 may be capable of configuring different layers such as software layer 730 and framework layer 720, including Spark and distributed file system 738 for supporting large-scale data processing. In at least one embodiment, resource manager 736 may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system 738 and job scheduler 732. In at least one embodiment, clustered or grouped computing resources may include grouped computing resource 714 at datacenter infrastructure layer 710. In at least one embodiment, resource manager 736 may coordinate with resource orchestrator 712 to manage these mapped or allocated computing resources.[0131] In at least one embodiment, software 752 included in software layer 730 may include software used by at least portions of node C.R.s 716(1)-716(N), grouped computing resources 714, and/or distributed file system 738 of framework layer 720. One or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software.
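As a non-limiting illustration of a framework-layer workload of the kind described above, a Spark job may read records from a distributed file system and aggregate them. In the following Python sketch, the file path, schema, and application name are hypothetical, and availability of PySpark is assumed only for illustration.

```python
# Non-limiting sketch of a framework-layer workload: a Spark job reading from a
# distributed file system and aggregating sensor records. Path and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("rack-telemetry-aggregation").getOrCreate()

# Records such as (rack_id, temperature_c) stored on a distributed file system.
telemetry = spark.read.csv("hdfs:///datacenter/telemetry/*.csv",
                           header=True, inferSchema=True)

# Average temperature per rack, as an example of large-scale data processing.
per_rack = telemetry.groupBy("rack_id").agg(F.avg("temperature_c").alias("avg_temp_c"))
per_rack.show()

spark.stop()
```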
[0132] In at least one embodiment, application(s) 742 included in application layer 740 may include one or more types of applications used by at least portions of node C.R.s 716(1)-716(N), grouped computing resources 714, and/or distributed file system 738 of framework layer 720. In at least one embodiment, one or more types of applications may include, without limitation, CUDA applications, 5G network applications, artificial intelligence applications, datacenter applications, and/or variations thereof.[0133] In at least one embodiment, any of configuration manager 734, resource manager 736, and resource orchestrator 712 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion. In at least one embodiment, self-modifying actions may relieve a datacenter operator of datacenter 700 from making possibly bad configuration decisions and possibly avoiding underutilized and/or poor performing portions of a datacenter.[0134] Figure 8 illustrates a client-server network 804 formed by a plurality of network server computers 802 which are interlinked, in accordance with at least one embodiment. In at least one embodiment, each network server computer 802 stores data accessible to other network server computers 802 and to client computers 806 and networks 808 which link into a wide area network 804. In at least one embodiment, configuration of a client-server network 804 may change over time as client computers 806 and one or more networks 808 connect and disconnect from a network 804, and as one or more trunk line server computers 802 are added or removed from a network 804. In at least one embodiment, when a client computer 806 and a network 808 are connected with network server computers 802, client-server network includes such client computer 806 and network 808. In at least one embodiment, the term computer includes any device or machine capable of accepting data, applying prescribed processes to data, and supplying results of processes.[0135] In at least one embodiment, client-server network 804 stores information which is accessible to network server computers 802, remote networks 808 and client computers 806. In at least one embodiment, network server computers 802 are formed by mainframe computers, minicomputers, and/or microcomputers having one or more processors each. In at least one embodiment, server computers 802 are linked together by wired and/or wireless transfer media, such as conductive wire, fiber optic cable, and/or microwave transmission media, satellite transmission media or other conductive, optic or electromagnetic wave transmission media. In at least one embodiment, client computers 806 access a network server computer 802 by a similar
wired or a wireless transfer medium. In at least one embodiment, a client computer 806 may link into a client-server network 804 using a modem and a standard telephone communication network. In at least one embodiment, alternative carrier systems such as cable and satellite communication systems also may be used to link into client-server network 804. In at least one embodiment, other private or time-shared carrier systems may be used. In at least one embodiment, network 804 is a global information network, such as the Internet. In at least one embodiment, network 804 is a private intranet using similar protocols as the Internet, but with added security measures and restricted access controls. In at least one embodiment, network 804 is a private, or semi-private network using proprietary communication protocols.[0136] In at least one embodiment, client computer 806 is any end user computer, and may also be a mainframe computer, mini-computer or microcomputer having one or more microprocessors. In at least one embodiment, server computer 802 may at times function as a client computer accessing another server computer 802. In at least one embodiment, remote network 808 may be a local area network, a network added into a wide area network through an Internet service provider (ISP), or another group of computers interconnected by wired or wireless transfer media having a configuration which is either fixed or changing over time. In at least one embodiment, client computers 806 may link into and access a network 804 independently or through a remote network 808.[0137] Figure 9 illustrates a computer network 908 connecting one or more computing machines, in accordance with at least one embodiment. In at least one embodiment, network 908 may be any type of electronically connected group of computers including, for instance, the following networks: Internet, Intranet, Local Area Networks (LAN), Wide Area Networks (WAN) or an interconnected combination of these network types. In at least one embodiment, connectivity within a network 908 may be a remote modem, Ethernet (IEEE 802.3), Token Ring (IEEE 802.5), Fiber Distributed Data Interface (FDDI), Asynchronous Transfer Mode (ATM), or any other communication protocol. In at least one embodiment, computing devices linked to a network may be desktop, server, portable, handheld, set-top box, personal digital assistant (PDA), a terminal, or any other desired type or configuration. In at least one embodiment, depending on their functionality, network connected devices may vary widely in processing power, internal memory, and other performance aspects.
[0138] In at least one embodiment, communications within a network and to or from computing devices connected to a network may be either wired or wireless. In at least one embodiment, network 908 may include, at least in part, the world-wide public Internet which generally connects a plurality of users in accordance with a client-server model in accordance with a transmission control protocol/internet protocol (TCP/IP) specification. In at least one embodiment, client-server network is a dominant model for communicating between two computers. In at least one embodiment, a client computer (“client”) issues one or more commands to a server computer (“server”). In at least one embodiment, server fulfills client commands by accessing available network resources and returning information to a client pursuant to client commands. In at least one embodiment, client computer systems and network resources resident on network servers are assigned a network address for identification during communications between elements of a network. In at least one embodiment, communications from other network connected systems to servers will include a network address of a relevant server/network resource as part of communication so that an appropriate destination of a data/request is identified as a recipient. In at least one embodiment, when a network 908 comprises the global Internet, a network address is an IP address in a TCP/IP format which may, at least in part, route data to an e-mail account, a website, or other Internet tool resident on a server. In at least one embodiment, information and services which are resident on network servers may be available to a web browser of a client computer through a domain name (e.g. www.site.com) which maps to an IP address of a network server.[0139] In at least one embodiment, a plurality of clients 902, 904, and 906 are connected to a network 908 via respective communication links. In at least one embodiment, each of these clients may access a network 908 via any desired form of communication, such as via a dial-up modem connection, cable link, a digital subscriber line (DSL), wireless or satellite link, or any other form of communication. In at least one embodiment, each client may communicate using any machine that is compatible with a network 908, such as a personal computer (PC), work station, dedicated terminal, personal data assistant (PDA), or other similar equipment. In at least one embodiment, clients 902, 904, and 906 may or may not be located in a same geographical area.[0140] In at least one embodiment, a plurality of servers 910, 912, and 914 are connected to a
network 918 to serve clients that are in communication with a network 918. In at least one embodiment, each server is typically a powerful computer or device that manages network resources and responds to client commands. In at least one embodiment, servers include computer readable data storage media such as hard disk drives and RAM memory that store program instructions and data. In at least one embodiment, servers 910, 912, 914 run application programs that respond to client commands. In at least one embodiment, server 910 may run a web server application for responding to client requests for HTML pages and may also run a mail server application for receiving and routing electronic mail. In at least one embodiment, other application programs, such as an FTP server or a media server for streaming audio/video data to clients may also be running on a server 910. In at least one embodiment, different servers may be dedicated to performing different tasks. In at least one embodiment, server 910 may be a dedicated web server that manages resources relating to web sites for various users, whereas a server 912 may be dedicated to provide electronic mail (email) management. In at least one embodiment, other servers may be dedicated for media (audio, video, etc.), file transfer protocol (FTP), or a combination of any two or more services that are typically available or provided over a network. In at least one embodiment, each server may be in a location that is the same as or different from that of other servers. In at least one embodiment, there may be multiple servers that perform mirrored tasks for users, thereby relieving congestion or minimizing traffic directed to and from a single server. In at least one embodiment, servers 910, 912, 914 are under control of a web hosting provider in a business of maintaining and delivering third party content over a network 918.[0141] In at least one embodiment, web hosting providers deliver services to two different types of clients. In at least one embodiment, one type, which may be referred to as a browser, requests content from servers 910, 912, 914 such as web pages, email messages, video clips, etc. In at least one embodiment, a second type, which may be referred to as a user, hires a web hosting provider to maintain a network resource such as a web site, and to make it available to browsers. In at least one embodiment, users contract with a web hosting provider to make memory space, processor capacity, and communication bandwidth available for their desired network resource in accordance with an amount of server resources a user desires to utilize.[0142] In at least one embodiment, in order for a web hosting provider to provide services for
both of these clients, application programs which manage network resources hosted by servers must be properly configured. In at least one embodiment, a program configuration process involves defining a set of parameters which control, at least in part, an application program's response to browser requests and which also define, at least in part, server resources available to a particular user.[0143] In one embodiment, an intranet server 916 is in communication with a network 908 via a communication link. In at least one embodiment, intranet server 916 is in communication with a server manager 918. In at least one embodiment, server manager 918 comprises a database of application program configuration parameters which are being utilized in servers 910, 912, 914. In at least one embodiment, users modify a database 920 via an intranet 916, and a server manager 918 interacts with servers 910, 912, 914 to modify application program parameters so that they match a content of a database. In at least one embodiment, a user logs onto an intranet server 916 by connecting to an intranet 916 via computer 902 and entering authentication information, such as a username and password.[0144] In at least one embodiment, when a user wishes to sign up for new service or modify an existing service, an intranet server 916 authenticates a user and provides a user with an interactive screen display/control panel that allows a user to access configuration parameters for a particular application program. In at least one embodiment, a user is presented with a number of modifiable text boxes that describe aspects of a configuration of a user's web site or other network resource. In at least one embodiment, if a user desires to increase memory space reserved on a server for its web site, a user is provided with a field in which a user specifies a desired memory space. In at least one embodiment, in response to receiving this information, an intranet server 916 updates a database 920. In at least one embodiment, server manager 918 forwards this information to an appropriate server, and a new parameter is used during application program operation. In at least one embodiment, an intranet server 916 is configured to provide users with access to configuration parameters of hosted network resources (e.g., web pages, email, FTP sites, media sites, etc.), for which a user has contracted with a web hosting service provider.[0145] Figure 10A illustrates a networked computer system 1000A, in accordance with at least one embodiment. In at least one embodiment, networked computer system 1000A comprises a
plurality of nodes or personal computers (“PCs”) 1002, 1018, 1020. In at least one embodiment, personal computer or node 1002 comprises a processor 1014, memory 1016, video camera 1004, microphone 1006, mouse 1008, speakers 1010, and monitor 1012. In at least one embodiment, PCs 1002, 1018, 1020 may each run one or more desktop servers of an internal network within a given company, for instance, or may be servers of a general network not limited to a specific environment. In at least one embodiment, there is one server per PC node of a network, so that each PC node of a network represents a particular network server, having a particular network URL address. In at least one embodiment, each server defaults to a default web page for that server's user, which may itself contain embedded URLs pointing to further subpages of that user on that server, or to other servers or pages on other servers on a network.[0146] In at least one embodiment, nodes 1002, 1018, 1020 and other nodes of a network are interconnected via medium 1022. In at least one embodiment, medium 1022 may be a communication channel such as an Integrated Services Digital Network (“ISDN”). In at least one embodiment, various nodes of a networked computer system may be connected through a variety of communication media, including local area networks (“LANs”), plain-old telephone lines (“POTS”), sometimes referred to as public switched telephone networks (“PSTN”), and/or variations thereof. In at least one embodiment, various nodes of a network may also constitute computer system users interconnected via a network such as the Internet. In at least one embodiment, each server on a network (running from a particular node of a network at a given instance) has a unique address or identification within a network, which may be specifiable in terms of a URL.[0147] In at least one embodiment, a plurality of multi-point conferencing units (“MCUs”) may thus be utilized to transmit data to and from various nodes or “endpoints” of a conferencing system. In at least one embodiment, nodes and/or MCUs may be interconnected via an ISDN link or through a local area network (“LAN”), in addition to various other communications media such as nodes connected through the Internet. In at least one embodiment, nodes of a conferencing system may, in general, be connected directly to a communications medium such as a LAN or through an MCU, and a conferencing system may comprise other nodes or elements such as routers, servers, and/or variations thereof.[0148] In at least one embodiment, processor 1014 is a general-purpose programmable
processor. In at least one embodiment, processors of nodes of networked computer system 1000A may also be special-purpose video processors. In at least one embodiment, various peripherals and components of a node such as those of node 1002 may vary from those of other nodes. In at least one embodiment, node 1018 and node 1020 may be configured identically to or differently than node 1002. In at least one embodiment, a node may be implemented on any suitable computer system in addition to PC systems.[0149] Figure 10B illustrates a networked computer system 1000B, in accordance with at least one embodiment. In at least one embodiment, system 1000B illustrates a network such as LAN 1024, which may be used to interconnect a variety of nodes that may communicate with each other. In at least one embodiment, attached to LAN 1024 are a plurality of nodes such as PC nodes 1026, 1028, 1030. In at least one embodiment, a node may also be connected to the LAN via a network server or other means. In at least one embodiment, system 1000B comprises other types of nodes or elements, for at least one embodiment including routers, servers, and nodes.[0150] Figure 10C illustrates a networked computer system 1000C, in accordance with at least one embodiment. In at least one embodiment, system 1000C illustrates a WWW system having communications across a backbone communications network such as Internet 1032, which may be used to interconnect a variety of nodes of a network. In at least one embodiment, WWW is a set of protocols operating on top of the Internet, and allows a graphical interface system to operate thereon for accessing information through the Internet. In at least one embodiment, attached to Internet 1032 in WWW are a plurality of nodes such as PCs 1040, 1042, 1044. In at least one embodiment, a node is interfaced to other nodes of WWW through a WWW HTTP server such as servers 1034, 1036. In at least one embodiment, PC 1044 may be a PC forming a node of network 1032 and itself running its server 1036, although PC 1044 and server 1036 are illustrated separately in Figure 10C for illustrative purposes.[0151] In at least one embodiment, WWW is a distributed type of application, characterized by WWW HTTP, WWW's protocol, which runs on top of the Internet's transmission control protocol/Internet protocol (“TCP/IP”). In at least one embodiment, WWW may thus be characterized by a set of protocols (i.e., HTTP) running on the Internet as its “backbone.”[0152] In at least one embodiment, a web browser is an application running on a node of a
network that, in WWW-compatible type network systems, allows users of a particular server or node to view such information and thus allows a user to search graphical and text-based files that are linked together using hypertext links that are embedded in documents or files available from servers on a network that understand HTTP. In at least one embodiment, when a given web page of a first server associated with a first node is retrieved by a user using another server on a network such as the Internet, a document retrieved may have various hypertext links embedded therein and a local copy of a page is created local to a retrieving user. In at least one embodiment, when a user clicks on a hypertext link, locally-stored information related to a selected hypertext link is typically sufficient to allow a user's machine to open a connection across the Internet to a server indicated by a hypertext link.[0153] In at least one embodiment, more than one user may be coupled to each HTTP server, through a LAN such as LAN 1038 as illustrated with respect to WWW HTTP server 1034. In at least one embodiment, system 1000C may also comprise other types of nodes or elements. In at least one embodiment, a WWW HTTP server is an application running on a machine, such as a PC. In at least one embodiment, each user may be considered to have a unique "server," as illustrated with respect to PC 1044. In at least one embodiment, a server may be considered to be a server such as WWW HTTP server 1034, which provides access to a network for a LAN or plurality of nodes or plurality of LANs. In at least one embodiment, there are a plurality of users, each having a desktop PC or node of a network, each desktop PC potentially establishing a server for a user thereof. In at least one embodiment, each server is associated with a particular network address or URL, which, when accessed, provides a default web page for that user. In at least one embodiment, a web page may contain further links (embedded URLs) pointing to further subpages of that user on that server, or to other servers on a network or to pages on other servers on a network.Cloud Computing and Services[0154] The following figures set forth, without limitation, exemplary cloud-based systems that can be used to implement at least one embodiment.[0155] In at least one embodiment, cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. In at least one embodiment, users need not have knowledge of, expertise in, or control over
technology infrastructure, which can be referred to as “in the cloud,” that supports them. In at least one embodiment, cloud computing incorporates infrastructure as a service, platform as a service, software as a service, and other variations that have a common theme of reliance on the Internet for satisfying computing needs of users. In at least one embodiment, a typical cloud deployment, such as in a private cloud (e.g., enterprise network), or a datacenter (DC) in a public cloud (e.g., Internet) can consist of thousands of servers (or alternatively, VMs), hundreds of Ethernet, Fiber Channel or Fiber Channel over Ethernet (FCoE) ports, switching and storage infrastructure, etc. In at least one embodiment, cloud can also consist of network services infrastructure like IPsec VPN hubs, firewalls, load balancers, wide area network (WAN) optimizers etc. In at least one embodiment, remote subscribers can access cloud applications and services securely by connecting via a VPN tunnel, such as an IPsec VPN tunnel.[0156] In at least one embodiment, cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.[0157] In at least one embodiment, cloud computing is characterized by on-demand self-service, in which a consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service's provider. In at least one embodiment, cloud computing is characterized by broad network access, in which capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). In at least one embodiment, cloud computing is characterized by resource pooling, in which a provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. In at least one embodiment, there is a sense of location independence in that a customer generally has no control or knowledge over an exact location of provided resources, but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).[0158] In at least one embodiment, resources include storage, processing, memory, network bandwidth, and virtual machines. In at least one embodiment, cloud computing is characterized
by rapid elasticity, in which capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. In at least one embodiment, to a consumer, capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. In at least one embodiment, cloud computing is characterized by measured service, in which cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to a type of service (e.g., storage, processing, bandwidth, and active user accounts). In at least one embodiment, resource usage can be monitored, controlled, and reported, providing transparency for both a provider and consumer of a utilized service.[0159] In at least one embodiment, cloud computing may be associated with various services. In at least one embodiment, cloud Software as a Service (SaaS) may refer to a service in which a capability provided to a consumer is to use a provider's applications running on a cloud infrastructure. In at least one embodiment, applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). In at least one embodiment, consumer does not manage or control underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with a possible exception of limited user-specific application configuration settings.[0160] In at least one embodiment, cloud Platform as a Service (PaaS) may refer to a service in which a capability provided to a consumer is to deploy onto cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by a provider. In at least one embodiment, consumer does not manage or control underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over deployed applications and possibly application hosting environment configurations.[0161] In at least one embodiment, cloud Infrastructure as a Service (IaaS) may refer to a service in which a capability provided to a consumer is to provision processing, storage, networks, and other fundamental computing resources where a consumer is able to deploy and run arbitrary software, which can include operating systems and applications. In at least one embodiment, consumer does not manage or control underlying cloud infrastructure, but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
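For illustration only, the following Python sketch summarizes the division of control described above for SaaS, PaaS, and IaaS; the stack layer names and the exact split points are editorial assumptions and are not part of any embodiment.

# Illustrative sketch only: layer names and split points are assumptions that
# summarize the SaaS/PaaS/IaaS descriptions above; not part of any embodiment.
CLOUD_STACK = [
    "networking", "storage", "servers", "virtualization",
    "operating_system", "middleware", "runtime", "data", "application",
]

# Index below which a provider manages layers; at that index and above, a consumer has control.
CONSUMER_CONTROL_STARTS_AT = {
    "IaaS": CLOUD_STACK.index("operating_system"),  # consumer controls the OS and everything above it
    "PaaS": CLOUD_STACK.index("data"),              # consumer controls deployed applications and data
    "SaaS": len(CLOUD_STACK),                       # provider manages essentially the entire stack
}

def responsibility(model: str) -> dict:
    """Return which layers a consumer versus a provider manages for a given service model."""
    split = CONSUMER_CONTROL_STARTS_AT[model]
    return {"provider_managed": CLOUD_STACK[:split], "consumer_managed": CLOUD_STACK[split:]}

for model in ("SaaS", "PaaS", "IaaS"):
    print(model, responsibility(model))

Running the sketch prints, for each service model, which layers fall to a provider and which to a consumer, mirroring the descriptions in paragraphs [0159] through [0161].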
[0162] In at least one embodiment, cloud computing may be deployed in various ways. In at least one embodiment, a private cloud may refer to a cloud infrastructure that is operated solely for an organization. In at least one embodiment, a private cloud may be managed by an organization or a third party and may exist on-premises or off-premises. In at least one embodiment, a community cloud may refer to a cloud infrastructure that is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). In at least one embodiment, a community cloud may be managed by organizations or a third party and may exist on-premises or off- premises. In at least one embodiment, a public cloud may refer to a cloud infrastructure that is made available to a general public or a large industry group and is owned by an organization providing cloud services. In at least one embodiment, a hybrid cloud may refer to a cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). In at least one embodiment, a cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability.[0163] Figure 11 illustrates one or more components of a system environment 1100 in which services may be offered as third party network services, in accordance with at least one embodiment. In at least one embodiment, a third party network may be referred to as a cloud, cloud network, cloud computing network, and/or variations thereof. In at least one embodiment, system environment 1100 includes one or more client computing devices 1104, 1106, and 1108 that may be used by users to interact with a third party network infrastructure system 1102 that provides third party network services, which may be referred to as cloud computing services. In at least one embodiment, third party network infrastructure system 1102 may comprise one or more computers and/or servers.[0164] It should be appreciated that third party network infrastructure system 1102 depicted in Figure 11 may have other components than those depicted. Further, Figure 11 depicts an embodiment of a third party network infrastructure system. In at least one embodiment, third party network infrastructure system 1102 may have more or fewer components than depicted in Figure 11, may combine two or more components, or may have a different configuration or
arrangement of components.[0165] In at least one embodiment, client computing devices 1104, 1106, and 1108 may be configured to operate a client application such as a web browser, a proprietary client application, or some other application, which may be used by a user of a client computing device to interact with third party network infrastructure system 1102 to use services provided by third party network infrastructure system 1102. Although exemplary system environment 1100 is shown with three client computing devices, any number of client computing devices may be supported. In at least one embodiment, other devices such as devices with sensors, etc. may interact with third party network infrastructure system 1102. In at least one embodiment, network(s) 1110 may facilitate communications and exchange of data between client computing devices 1104, 1106, and 1108 and third party network infrastructure system 1102.[0166] In at least one embodiment, services provided by third party network infrastructure system 1102 may include a host of services that are made available to users of a third party network infrastructure system on demand. In at least one embodiment, various services may also be offered including without limitation online data storage and backup solutions, Web-based e- mail services, hosted office suites and document collaboration services, database management and processing, managed technical support services, and/or variations thereof. In at least one embodiment, services provided by a third party network infrastructure system can dynamically scale to meet needs of its users.[0167] In at least one embodiment, a specific instantiation of a service provided by third party network infrastructure system 1102 may be referred to as a “service instance.” In at least one embodiment, in general, any service made available to a user via a communication network, such as the Internet, from a third party network service provider's system is referred to as a “third party network service.” In at least one embodiment, in a public third party network environment, servers and systems that make up a third party network service provider's system are different from a customer's own on-premises servers and systems. In at least one embodiment, a third party network service provider's system may host an application, and a user may, via a communication network such as the Internet, on demand, order and use an application.[0168] In at least one embodiment, a service in a computer network third party network
infrastructure may include protected computer network access to storage, a hosted database, a hosted web server, a software application, or other service provided by a third party network vendor to a user. In at least one embodiment, a service can include password-protected access to remote storage on a third party network through the Internet. In at least one embodiment, a service can include a web service-based hosted relational database and a script-language middleware engine for private use by a networked developer. In at least one embodiment, a service can include access to an email software application hosted on a third party network vendor's web site.[0169] In at least one embodiment, third party network infrastructure system 1102 may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner. In at least one embodiment, third party network infrastructure system 1102 may also provide “big data” related computation and analysis services. In at least one embodiment, term “big data” is generally used to refer to extremely large data sets that can be stored and manipulated by analysts and researchers to visualize large amounts of data, detect trends, and/or otherwise interact with data. In at least one embodiment, big data and related applications can be hosted and/or manipulated by an infrastructure system on many levels and at different scales. In at least one embodiment, tens, hundreds, or thousands of processors linked in parallel can act upon such data in order to present it or simulate external forces on data or what it represents. In at least one embodiment, these data sets can involve structured data, such as that organized in a database or otherwise according to a structured model, and/or unstructured data (e.g., emails, images, data blobs (binary large objects), web pages, complex event processing). In at least one embodiment, by leveraging an ability of an embodiment to relatively quickly focus more (or fewer) computing resources upon an objective, a third party network infrastructure system may be better available to carry out tasks on large data sets based on demand from a business, government agency, research organization, private individual, group of like-minded individuals or organizations, or other entity.[0170] In at least one embodiment, third party network infrastructure system 1102 may be adapted to automatically provision, manage and track a customer's subscription to services offered by third party network infrastructure system 1102. In at least one embodiment, third
party network infrastructure system 1102 may provide third party network services via different deployment models. In at least one embodiment, services may be provided under a public third party network model in which third party network infrastructure system 1102 is owned by an organization selling third party network services and services are made available to a general public or different industry enterprises. In at least one embodiment, services may be provided under a private third party network model in which third party network infrastructure system 1102 is operated solely for a single organization and may provide services for one or more entities within an organization. In at least one embodiment, third party network services may also be provided under a community third party network model in which third party network infrastructure system 1102 and services provided by third party network infrastructure system 1102 are shared by several organizations in a related community. In at least one embodiment, third party network services may also be provided under a hybrid third party network model, which is a combination of two or more different models.[0171] In at least one embodiment, services provided by third party network infrastructure system 1102 may include one or more services provided under Software as a Service (SaaS) category, Platform as a Service (PaaS) category, Infrastructure as a Service (IaaS) category, or other categories of services including hybrid services. In at least one embodiment, a customer, via a subscription order, may order one or more services provided by third party network infrastructure system 1102. In at least one embodiment, third party network infrastructure system 1102 then performs processing to provide services in a customer's subscription order.[0172] In at least one embodiment, services provided by third party network infrastructure system 1102 may include, without limitation, application services, platform services and infrastructure services. In at least one embodiment, application services may be provided by a third party network infrastructure system via a SaaS platform. In at least one embodiment, SaaS platform may be configured to provide third party network services that fall under a SaaS category. In at least one embodiment, SaaS platform may provide capabilities to build and deliver a suite of on-demand applications on an integrated development and deployment platform. In at least one embodiment, SaaS platform may manage and control underlying software and infrastructure for providing SaaS services. In at least one embodiment, by utilizing services provided by a SaaS platform, customers can utilize applications executing on a third
party network infrastructure system. In at least one embodiment, customers can acquire application services without a need for customers to purchase separate licenses and support. In at least one embodiment, various different SaaS services may be provided. In at least one embodiment, this may include, without limitation, services that provide solutions for sales performance management, enterprise integration, and business flexibility for large organizations.[0173] In at least one embodiment, platform services may be provided by third party network infrastructure system 1102 via a PaaS platform. In at least one embodiment, PaaS platform may be configured to provide third party network services that fall under a PaaS category. In at least one embodiment, platform services may include without limitation services that enable organizations to consolidate existing applications on a shared, common architecture, as well as an ability to build new applications that leverage shared services provided by a platform. In at least one embodiment, PaaS platform may manage and control underlying software and infrastructure for providing PaaS services. In at least one embodiment, customers can acquire PaaS services provided by third party network infrastructure system 1102 without a need for customers to purchase separate licenses and support.[0174] In at least one embodiment, by utilizing services provided by a PaaS platform, customers can employ programming languages and tools supported by a third party network infrastructure system and also control deployed services. In at least one embodiment, platform services provided by a third party network infrastructure system may include database third party network services, middleware third party network services and third party network services. In at least one embodiment, database third party network services may support shared service deployment models that enable organizations to pool database resources and offer customers a Database as a Service in a form of a database third party network. In at least one embodiment, middleware third party network services may provide a platform for customers to develop and deploy various business applications, and third party network services may provide a platform for customers to deploy applications in a third party network infrastructure system.[0175] In at least one embodiment, various different infrastructure services may be provided by an IaaS platform in a third party network infrastructure system. In at least one embodiment, infrastructure services facilitate management and control of underlying computing resources, such as storage, networks, and other fundamental computing resources for customers utilizing
services provided by a SaaS platform and a PaaS platform.[0176] In at least one embodiment, third party network infrastructure system 1102 may also include infrastructure resources 1130 for providing resources used to provide various services to customers of a third party network infrastructure system. In at least one embodiment, infrastructure resources 1130 may include pre-integrated and optimized combinations of hardware, such as servers, storage, and networking resources to execute services provided by a PaaS platform and a SaaS platform, and other resources.[0177] In at least one embodiment, resources in third party network infrastructure system 1102 may be shared by multiple users and dynamically re-allocated per demand. In at least one embodiment, resources may be allocated to users in different time zones. In at least one embodiment, third party network infrastructure system 1102 may enable a first set of users in a first time zone to utilize resources of a third party network infrastructure system for a specified number of hours and then enable a re-allocation of same resources to another set of users located in a different time zone, thereby maximizing utilization of resources.[0178] In at least one embodiment, a number of internal shared services 1132 may be provided that are shared by different components or modules of third party network infrastructure system 1102 to enable provision of services by third party network infrastructure system 1102. In at least one embodiment, these internal shared services may include, without limitation, a security and identity service, an integration service, an enterprise repository service, an enterprise manager service, a virus scanning and white list service, a high availability, backup and recovery service, a service for enabling third party network support, an email service, a notification service, a file transfer service, and/or variations thereof.[0179] In at least one embodiment, third party network infrastructure system 1102 may provide comprehensive management of third party network services (e.g., SaaS, PaaS, and IaaS services) in a third party network infrastructure system. In at least one embodiment, third party network management functionality may include capabilities for provisioning, managing and tracking a customer's subscription received by third party network infrastructure system 1102, and/or variations thereof.[0180] In at least one embodiment, as depicted in Figure 11, third party network management
functionality may be provided by one or more modules, such as an order management module 1120, an order orchestration module 1122, an order provisioning module 1124, an order management and monitoring module 1126, and an identity management module 1128. In at least one embodiment, these modules may include or be provided using one or more computers and/or servers, which may be general purpose computers, specialized server computers, server farms, server clusters, or any other appropriate arrangement and/or combination.[0181] In at least one embodiment, at step 1134, a customer using a client device, such as client computing devices 1104, 1106 or 1108, may interact with third party network infrastructure system 1102 by requesting one or more services provided by third party network infrastructure system 1102 and placing an order for a subscription for one or more services offered by third party network infrastructure system 1102. In at least one embodiment, a customer may access a third party network User Interface (UI) such as third party network UI 1112, third party network UI 1114 and/or third party network UI 1116 and place a subscription order via these UIs. In at least one embodiment, order information received by third party network infrastructure system 1102 in response to a customer placing an order may include information identifying a customer and one or more services offered by a third party network infrastructure system 1102 that a customer intends to subscribe to.[0182] In at least one embodiment, at step 1136, an order information received from a customer may be stored in an order database 1118. In at least one embodiment, if this is a new order, a new record may be created for an order. In at least one embodiment, order database 1118 can be one of several databases operated by third party network infrastructure system 1118 and operated in conjunction with other system elements.[0183] In at least one embodiment, at step 1138, an order information may be forwarded to an order management module 1120 that may be configured to perform billing and accounting functions related to an order, such as verifying an order, and upon verification, booking an order.[0184] In at least one embodiment, at step 1140, information regarding an order may be communicated to an order orchestration module 1122 that is configured to orchestrate provisioning of services and resources for an order placed by a customer. In at least one embodiment, order orchestration module 1122 may use services of order provisioning module
1124 for provisioning. In at least one embodiment, order orchestration module 1122 enables management of business processes associated with each order and applies business logic to determine whether an order should proceed to provisioning.[0185] In at least one embodiment, at step 1142, upon receiving an order for a new subscription, order orchestration module 1122 sends a request to order provisioning module 1124 to allocate resources and configure resources needed to fulfill a subscription order. In at least one embodiment, order provisioning module 1124 enables an allocation of resources for services ordered by a customer. In at least one embodiment, order provisioning module 1124 provides a level of abstraction between third party network services provided by third party network infrastructure system 1100 and a physical implementation layer that is used to provision resources for providing requested services. In at least one embodiment, this enables order orchestration module 1122 to be isolated from implementation details, such as whether or not services and resources are actually provisioned in real-time or pre-provisioned and only allocated/assigned upon request.[0186] In at least one embodiment, at step 1144, once services and resources are provisioned, a notification may be sent to subscribing customers indicating that a requested service is now ready for use. In at least one embodiment, information (e.g. a link) may be sent to a customer that enables a customer to start using requested services.[0187] In at least one embodiment, at step 1146, a customer's subscription order may be managed and tracked by an order management and monitoring module 1126. In at least one embodiment, order management and monitoring module 1126 may be configured to collect usage statistics regarding a customer use of subscribed services. In at least one embodiment, statistics may be collected for an amount of storage used, an amount data transferred, a number of users, and an amount of system up time and system down time, and/or variations thereof.[0188] In at least one embodiment, third party network infrastructure system 1100 may include an identity management module 1128 that is configured to provide identity services, such as access management and authorization services in third party network infrastructure system 1100. In at least one embodiment, identity management module 1128 may control information about customers who wish to utilize services provided by third party network infrastructure system
1102. In at least one embodiment, such information can include information that authenticates identities of such customers and information that describes which actions those customers are authorized to perform relative to various system resources (e.g., files, directories, applications, communication ports, memory segments, etc.). In at least one embodiment, identity management module 1128 may also include management of descriptive information about each customer and about how and by whom that descriptive information can be accessed and modified.[0189] Figure 12 illustrates a cloud computing environment 1202, in accordance with at least one embodiment. In at least one embodiment, cloud computing environment 1202 comprises one or more computer system/servers 1204 with which computing devices such as, personal digital assistant (PDA) or cellular telephone 1206 A, desktop computer 1206B, laptop computer 1206C, and/or automobile computer system 1206N communicate. In at least one embodiment, this allows for infrastructure, platforms and/or software to be offered as services from cloud computing environment 1202, so as to not require each client to separately maintain such resources. It is understood that types of computing devices 1206A-N shown in Figure 12 are intended to be illustrative only and that cloud computing environment 1202 can communicate with any type of computerized device over any type of network and/or network/addressable connection (e.g., using a web browser).[0190] In at least one embodiment, a computer system/server 1204, which can be denoted as a cloud computing node, is operational with numerous other general purpose or special purpose computing system environments or configurations. In at least one embodiment, computing systems, environments, and/or configurations that may be suitable for use with computer system/server 1204 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and/or variations thereof.[0191] In at least one embodiment, computer system/server 1204 may be described in a general context of computer system-executable instructions, such as program modules, being executed by a computer system. In at least one embodiment, program modules include routines,
programs, objects, components, logic, data structures, and so on, that perform particular tasks or implement particular abstract data types. In at least one embodiment, exemplary computer system/server 1204 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In at least one embodiment, in a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.[0192] Figure 13 illustrates a set of functional abstraction layers provided by cloud computing environment 1202 (Figure 12), in accordance with at least one embodiment. It should be understood in advance that components, layers, and functions shown in Figure 13 are intended to be illustrative only, and components, layers, and functions may vary.[0193] In at least one embodiment, hardware and software layer 1302 includes hardware and software components. In at least one embodiment, hardware components include mainframes, various RISC (Reduced Instruction Set Computer) architecture-based servers, various computing systems, supercomputing systems, storage devices, networks, networking components, and/or variations thereof. In at least one embodiment, software components include network application server software, various application server software, various database software, and/or variations thereof.[0194] In at least one embodiment, virtualization layer 1304 provides an abstraction layer from which following exemplary virtual entities may be provided: virtual servers, virtual storage, virtual networks, including virtual private networks, virtual applications, virtual clients, and/or variations thereof.[0195] In at least one embodiment, management layer 1306 provides various functions. In at least one embodiment, resource provisioning provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within a cloud computing environment. In at least one embodiment, metering provides usage tracking as resources are utilized within a cloud computing environment, and billing or invoicing for consumption of these resources. In at least one embodiment, resources may comprise application software licenses. In at least one embodiment, security provides identity verification for users and tasks, as well as
protection for data and other resources. In at least one embodiment, user interface provides access to a cloud computing environment for both users and system administrators. In at least one embodiment, service level management provides cloud computing resource allocation and management such that required service levels are met. In at least one embodiment, Service Level Agreement (SLA) management provides pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.[0196] In at least one embodiment, workloads layer 1308 provides functionality for which a cloud computing environment is utilized. In at least one embodiment, workloads and functions which may be provided from this layer include: mapping and navigation, software development and management, educational services, data analytics and processing, transaction processing, and service delivery.Supercomputing[0197] The following figures set forth, without limitation, exemplary supercomputer-based systems that can be used to implement at least one embodiment.[0198] In at least one embodiment, a supercomputer may refer to a hardware system exhibiting substantial parallelism and comprising at least one chip, where chips in a system are interconnected by a network and are placed in hierarchically organized enclosures. In at least one embodiment, a large hardware system filling a machine room, with several racks, each containing several boards/rack modules, each containing several chips, all interconnected by a scalable network, is at least one embodiment of a supercomputer. In at least one embodiment, a single rack of such a large hardware system is at least one other embodiment of a supercomputer. In at least one embodiment, a single chip exhibiting substantial parallelism and containing several hardware components can equally be considered to be a supercomputer, since as feature sizes may decrease, an amount of hardware that can be incorporated in a single chip may also increase.[0199] Figure 14 illustrates a supercomputer at a chip level, in accordance with at least one embodiment. In at least one embodiment, inside an FPGA or ASIC chip, main computation is performed within finite state machines (1404) called thread units. In at least one embodiment, task and synchronization networks (1402) connect finite state machines and are used to dispatch
threads and execute operations in correct order. In at least one embodiment, a multi-level partitioned on-chip cache hierarchy (1408, 1412) is accessed using memory networks (1406, 1410). In at least one embodiment, off-chip memory is accessed using memory controllers (1416) and an off-chip memory network (1414). In at least one embodiment, I/O controller (1418) is used for cross-chip communication when a design does not fit in a single logic chip.[0200] Figure 15 illustrates a supercomputer at a rack module level, in accordance with at least one embodiment. In at least one embodiment, within a rack module, there are multiple FPGA or ASIC chips (1502) that are connected to one or more DRAM units (1504) which constitute main accelerator memory. In at least one embodiment, each FPGA/ASIC chip is connected to its neighbor FPGA/ASIC chip using wide busses on a board, with differential high-speed signaling (1506). In at least one embodiment, each FPGA/ASIC chip is also connected to at least one high-speed serial communication cable.[0201] Figure 16 illustrates a supercomputer at a rack level, in accordance with at least one embodiment. Figure 17 illustrates a supercomputer at a whole system level, in accordance with at least one embodiment. In at least one embodiment, referring to Figure 16 and Figure 17, between rack modules in a rack and across racks throughout an entire system, high-speed serial optical or copper cables (1602, 1702) are used to realize a scalable, possibly incomplete hypercube network. In at least one embodiment, one of FPGA/ASIC chips of an accelerator is connected to a host system through a PCI-Express connection (1704). In at least one embodiment, host system comprises a host microprocessor (1708) that a software part of an application runs on and a memory consisting of one or more host memory DRAM units (1706) that is kept coherent with memory on an accelerator. In at least one embodiment, host system can be a separate module on one of racks, or can be integrated with one of a supercomputer's modules. In at least one embodiment, cube-connected cycles topology provides communication links to create a hypercube network for a large supercomputer. In at least one embodiment, a small group of FPGA/ASIC chips on a rack module can act as a single hypercube node, such that a total number of external links of each group is increased, compared to a single chip. In at least one embodiment, a group contains chips A, B, C and D on a rack module with internal wide differential busses connecting A, B, C and D in a torus organization. In at least one embodiment, there are 12 serial communication cables connecting a rack module to an outside world. In at
least one embodiment, chip A on a rack module connects to serial communication cables 0, 1, 2. In at least one embodiment, chip B connects to cables 3, 4, 5. In at least one embodiment, chip C connects to 6, 7, 8. In at least one embodiment, chip D connects to 9, 10, 11. In at least one embodiment, an entire group {A, B, C, D} constituting a rack module can form a hypercube node within a supercomputer system, with up to 2¹² = 4096 rack modules (16384 FPGA/ASIC chips). In at least one embodiment, for chip A to send a message out on link 4 of group {A, B, C, D}, a message has to be routed first to chip B with an on-board differential wide bus connection. In at least one embodiment, a message arriving into a group {A, B, C, D} on link 4 (i.e., arriving at B) destined to chip A, also has to be routed first to a correct destination chip (A) internally within a group {A, B, C, D}. In at least one embodiment, parallel supercomputer systems of other sizes may also be implemented.Artificial Intelligence[0202] The following figures set forth, without limitation, exemplary artificial intelligence-based systems that can be used to implement at least one embodiment.[0203] Figure 18A illustrates inference and/or training logic 1815 used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 1815 are provided below in conjunction with Figures 18A and/or 18B.[0204] In at least one embodiment, inference and/or training logic 1815 may include, without limitation, code and/or data storage 1801 to store forward and/or output weight and/or input/output data, and/or other parameters to configure neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, training logic 1815 may include, or be coupled to code and/or data storage 1801 to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)). In at least one embodiment, code, such as graph code, loads weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds. In at least one embodiment, code and/or data storage 1801 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward
propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, any portion of code and/or data storage 1801 may be included with other on-chip or off-chip data storage, including a processor’s L1, L2, or L3 cache or system memory.[0205] In at least one embodiment, any portion of code and/or data storage 1801 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or code and/or data storage 1801 may be cache memory, dynamic randomly addressable memory (“DRAM”), static randomly addressable memory (“SRAM”), non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, a choice of whether code and/or code and/or data storage 1801 is internal or external to a processor, in at least one embodiment, or comprising DRAM, SRAM, flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.[0206] In at least one embodiment, inference and/or training logic 1815 may include, without limitation, a code and/or data storage 1805 to store backward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, code and/or data storage 1805 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during backward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, training logic 1815 may include, or be coupled to code and/or data storage 1805 to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)).
processor’s L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of code and/or data storage 1805 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage 1805 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, a choice of whether code and/or data storage 1805 is internal or external to a processor, in at least one embodiment, or comprising DRAM, SRAM, flash memory or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.[0208] In at least one embodiment, code and/or data storage 1801 and code and/or data storage 1805 may be separate storage structures. In at least one embodiment, code and/or data storage 1801 and code and/or data storage 1805 may be a combined storage structure. In at least one embodiment, code and/or data storage 1801 and code and/or data storage 1805 may be partially combined and partially separate. In at least one embodiment, any portion of code and/or data storage 1801 and code and/or data storage 1805 may be included with other on-chip or off-chip data storage, including a processor’s L1, L2, or L3 cache or system memory.[0209] In at least one embodiment, inference and/or training logic 1815 may include, without limitation, one or more arithmetic logic unit(s) (“ALU(s)”) 1810, including integer and/or floating point units, to perform logical and/or mathematical operations based, at least in part on, or indicated by, training and/or inference code (e.g., graph code), a result of which may produce activations (e.g., output values from layers or neurons within a neural network) stored in an activation storage 1820 that are functions of input/output and/or weight parameter data stored in code and/or data storage 1801 and/or code and/or data storage 1805. In at least one embodiment, activations stored in activation storage 1820 are generated according to linear algebraic and/or matrix-based mathematics performed by ALU(s) 1810 in response to performing instructions or other code, wherein weight values stored in code and/or data storage 1805 and/or data storage 1801 are used as operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in code and/or data storage 1805 or code and/or data storage 1801 or another storage on or off-chip.
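As a minimal sketch of the data flow described in paragraph [0209], the following Python/NumPy fragment shows how stored weight and bias values might be combined with input data to produce activations that are then kept in an activation store; the array shapes, the ReLU nonlinearity, and all names are assumptions for illustration and do not describe inference and/or training logic 1815 itself.

# Minimal illustrative sketch; NumPy, ReLU, and all shapes/names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.standard_normal((4, 3))   # stand-in for weight parameters in code and/or data storage
bias = np.zeros(3)                      # bias values used as additional operands
activation_storage = {}                 # stand-in for an activation store such as 1820

def forward(inputs: np.ndarray) -> np.ndarray:
    """Linear-algebraic step followed by a nonlinearity, producing activations."""
    pre_activation = inputs @ weights + bias        # matrix math of the kind an ALU might perform
    activations = np.maximum(pre_activation, 0.0)   # ReLU chosen only as an example nonlinearity
    activation_storage["layer_0"] = activations     # activations retained for later use
    return activations

batch = rng.standard_normal((2, 4))     # a small batch of input data
print(forward(batch).shape)             # -> (2, 3)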
[0210] In at least one embodiment, ALU(s) 1810 are included within one or more processors or other hardware logic devices or circuits, whereas in another embodiment, ALU(s) 1810 may be external to a processor or other hardware logic device or circuit that uses them (e.g., a coprocessor). In at least one embodiment, ALUs 1810 may be included within a processor’s execution units or otherwise within a bank of ALUs accessible by a processor’s execution units either within same processor or distributed between different processors of different types (e.g., central processing units, graphics processing units, fixed function units, etc.). In at least one embodiment, code and/or data storage 1801, code and/or data storage 1805, and activation storage 1820 may share a processor or other hardware logic device or circuit, whereas in another embodiment, they may be in different processors or other hardware logic devices or circuits, or some combination of same and different processors or other hardware logic devices or circuits. In at least one embodiment, any portion of activation storage 1820 may be included with other on-chip or off-chip data storage, including a processor’s L1, L2, or L3 cache or system memory. Furthermore, inferencing and/or training code may be stored with other code accessible to a processor or other hardware logic or circuit and fetched and/or processed using a processor’s fetch, decode, scheduling, execution, retirement and/or other logical circuits.[0211] In at least one embodiment, activation storage 1820 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, activation storage 1820 may be completely or partially within or external to one or more processors or other logical circuits. In at least one embodiment, a choice of whether activation storage 1820 is internal or external to a processor, in at least one embodiment, or comprising DRAM, SRAM, flash memory or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.[0212] In at least one embodiment, inference and/or training logic 1815 illustrated in Figure 18A may be used in conjunction with an application-specific integrated circuit (“ASIC”), such as a TensorFlow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. In at least one embodiment, inference and/or training logic 1815 illustrated in Figure 18A may be used in
conjunction with central processing unit (“CPU”) hardware, graphics processing unit (“GPU”) hardware or other hardware, such as field programmable gate arrays (“FPGAs”).[0213] Figure 18B illustrates inference and/or training logic 1815, according to at least one embodiment. In at least one embodiment, inference and/or training logic 1815 may include, without limitation, hardware logic in which computational resources are dedicated or otherwise exclusively used in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network. In at least one embodiment, inference and/or training logic 1815 illustrated in Figure 18B may be used in conjunction with an application-specific integrated circuit (ASIC), such as TensorFlow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. In at least one embodiment, inference and/or training logic 1815 illustrated in Figure 18B may be used in conjunction with central processing unit (CPU) hardware, graphics processing unit (GPU) hardware or other hardware, such as field programmable gate arrays (FPGAs). In at least one embodiment, inference and/or training logic 1815 includes, without limitation, code and/or data storage 1801 and code and/or data storage 1805, which may be used to store code (e.g., graph code), weight values and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyperparameter information. In at least one embodiment illustrated in Figure 18B, each of code and/or data storage 1801 and code and/or data storage 1805 is associated with a dedicated computational resource, such as computational hardware 1802 and computational hardware 1806, respectively. In at least one embodiment, each of computational hardware 1802 and computational hardware 1806 comprises one or more ALUs that perform mathematical functions, such as linear algebraic functions, only on information stored in code and/or data storage 1801 and code and/or data storage 1805, respectively, a result of which is stored in activation storage 1820.[0214] In at least one embodiment, each of code and/or data storage 1801 and 1805 and corresponding computational hardware 1802 and 1806, respectively, correspond to different layers of a neural network, such that resulting activation from one storage/computational pair 1801/1802 of code and/or data storage 1801 and computational hardware 1802 is provided as an input to a next storage/computational pair 1805/1806 of code and/or data storage 1805 and computational hardware 1806, in order to mirror a conceptual organization of a neural network.
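A minimal Python sketch of this chaining, assuming NumPy and arbitrary layer sizes, is given below; it only illustrates how a dedicated parameter store paired with its own compute can feed the next such pair, and it is not a description of the hardware of Figure 18B.

# Illustrative sketch only; shapes, the ReLU choice, and class/variable names are assumptions.
import numpy as np

class StorageComputePair:
    """A dedicated parameter store paired with compute that operates only on that store."""
    def __init__(self, weights: np.ndarray, bias: np.ndarray):
        self.weights = weights   # stand-in for a code and/or data storage
        self.bias = bias

    def compute(self, inputs: np.ndarray) -> np.ndarray:
        # Stand-in for dedicated computational hardware: math on locally stored parameters only.
        return np.maximum(inputs @ self.weights + self.bias, 0.0)

rng = np.random.default_rng(0)
pair_a = StorageComputePair(rng.standard_normal((8, 16)), np.zeros(16))   # cf. pair 1801/1802
pair_b = StorageComputePair(rng.standard_normal((16, 4)), np.zeros(4))    # cf. pair 1805/1806

x = rng.standard_normal((2, 8))
activation_a = pair_a.compute(x)              # output of the first pair ...
activation_b = pair_b.compute(activation_a)   # ... becomes the input to the next pair
print(activation_b.shape)                     # -> (2, 4)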
In at least one embodiment, each of storage/computational pairs 1801/1802 and 1805/1806 may correspond to more than one neural network layer. In at least one embodiment, additional storage/computation pairs (not shown) subsequent to or in parallel with storage/computation pairs 1801/1802 and 1805/1806 may be included in inference and/or training logic 1815.[0215] Figure 19 illustrates training and deployment of a deep neural network, according to at least one embodiment. In at least one embodiment, untrained neural network 1906 is trained using a training dataset 1902. In at least one embodiment, training framework 1904 is a PyTorch framework, whereas in other embodiments, training framework 1904 is a TensorFlow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, or other training framework. In at least one embodiment, training framework 1904 trains an untrained neural network 1906 and enables it to be trained using processing resources described herein to generate a trained neural network 1908. In at least one embodiment, weights may be chosen randomly or by pre-training using a deep belief network. In at least one embodiment, training may be performed in either a supervised, partially supervised, or unsupervised manner.[0216] In at least one embodiment, untrained neural network 1906 is trained using supervised learning, wherein training dataset 1902 includes an input paired with a desired output for an input, or where training dataset 1902 includes input having a known output and an output of neural network 1906 is manually graded. In at least one embodiment, untrained neural network 1906 is trained in a supervised manner and processes inputs from training dataset 1902 and compares resulting outputs against a set of expected or desired outputs. In at least one embodiment, errors are then propagated back through untrained neural network 1906. In at least one embodiment, training framework 1904 adjusts weights that control untrained neural network 1906. In at least one embodiment, training framework 1904 includes tools to monitor how well untrained neural network 1906 is converging towards a model, such as trained neural network 1908, suitable for generating correct answers, such as in result 1914, based on input data such as a new dataset 1912. In at least one embodiment, training framework 1904 trains untrained neural network 1906 repeatedly while adjusting weights to refine an output of untrained neural network 1906 using a loss function and adjustment algorithm, such as stochastic gradient descent. In at least one embodiment, training framework 1904 trains untrained neural network 1906 until untrained neural network 1906 achieves a desired accuracy. In at least one embodiment, trained
neural network 1908 can then be deployed to implement any number of machine learning operations.[0217] In at least one embodiment, untrained neural network 1906 is trained using unsupervised learning, wherein untrained neural network 1906 attempts to train itself using unlabeled data. In at least one embodiment, in unsupervised learning, training dataset 1902 will include input data without any associated output data or “ground truth” data. In at least one embodiment, untrained neural network 1906 can learn groupings within training dataset 1902 and can determine how individual inputs are related to training dataset 1902. In at least one embodiment, unsupervised training can be used to generate a self-organizing map in trained neural network 1908 capable of performing operations useful in reducing dimensionality of new dataset 1912. In at least one embodiment, unsupervised training can also be used to perform anomaly detection, which allows identification of data points in new dataset 1912 that deviate from normal patterns of new dataset 1912.[0218] In at least one embodiment, semi-supervised learning may be used, which is a technique in which training dataset 1902 includes a mix of labeled and unlabeled data. In at least one embodiment, training framework 1904 may be used to perform incremental learning, such as through transfer learning techniques. In at least one embodiment, incremental learning enables trained neural network 1908 to adapt to new dataset 1912 without forgetting knowledge instilled within trained neural network 1908 during initial training.5G Networks[0219] The following figures set forth, without limitation, exemplary 5G network-based systems that can be used to implement at least one embodiment.[0220] Figure 20 illustrates an architecture of a system 2000 of a network, in accordance with at least one embodiment. In at least one embodiment, system 2000 is shown to include a user equipment (UE) 2002 and a UE 2004. In at least one embodiment, UEs 2002 and 2004 are illustrated as smartphones (e.g., handheld touchscreen mobile computing devices connectable to one or more cellular networks) but may also comprise any mobile or non-mobile computing device, such as Personal Data Assistants (PDAs), pagers, laptop computers, desktop computers, wireless handsets, or any computing device including a wireless communications interface.
[0221] In at least one embodiment, any of UEs 2002 and 2004 can comprise an Internet of Things (IoT) UE, which can comprise a network access layer designed for low-power IoT applications utilizing short-lived UE connections. In at least one embodiment, an IoT UE can utilize technologies such as machine-to-machine (M2M) or machine-type communications (MTC) for exchanging data with an MTC server or device via a public land mobile network (PLMN), Proximity-Based Service (ProSe) or device-to-device (D2D) communication, sensor networks, or IoT networks. In at least one embodiment, an M2M or MTC exchange of data may be a machine-initiated exchange of data. In at least one embodiment, an IoT network describes interconnecting IoT UEs, which may include uniquely identifiable embedded computing devices (within Internet infrastructure), with short-lived connections. In at least one embodiment, IoT UEs may execute background applications (e.g., keep-alive messages, status updates, etc.) to facilitate connections of an IoT network.[0222] In at least one embodiment, UEs 2002 and 2004 may be configured to connect, e.g., communicatively couple, with a radio access network (RAN) 2016. In at least one embodiment, RAN 2016 may be, in at least one embodiment, an Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (E-UTRAN), a NextGen RAN (NG RAN), or some other type of RAN. In at least one embodiment, UEs 2002 and 2004 utilize connections 2012 and 2014, respectively, each of which comprises a physical communications interface or layer. In at least one embodiment, connections 2012 and 2014 are illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols, such as a Global System for Mobile Communications (GSM) protocol, a code-division multiple access (CDMA) network protocol, a Push-to-Talk (PTT) protocol, a PTT over Cellular (POC) protocol, a Universal Mobile Telecommunications System (UMTS) protocol, a 3GPP Long Term Evolution (LTE) protocol, a fifth generation (5G) protocol, a New Radio (NR) protocol, and variations thereof.[0223] In at least one embodiment, UEs 2002 and 2004 may further directly exchange communication data via a ProSe interface 2006. In at least one embodiment, ProSe interface 2006 may alternatively be referred to as a sidelink interface comprising one or more logical channels, including but not limited to a Physical Sidelink Control Channel (PSCCH), a Physical Sidelink Shared Channel (PSSCH), a Physical Sidelink Discovery Channel (PSDCH), and a
Physical Sidelink Broadcast Channel (PSBCH).[0224] In at least one embodiment, UE 2004 is shown to be configured to access an access point (AP) 2010 via connection 2008. In at least one embodiment, connection 2008 can comprise a local wireless connection, such as a connection consistent with any IEEE 802.11 protocol, wherein AP 2010 would comprise a wireless fidelity (WiFi®) router. In at least one embodiment, AP 2010 is shown to be connected to an Internet without connecting to a core network of a wireless system.[0225] In at least one embodiment, RAN 2016 can include one or more access nodes that enable connections 2012 and 2014. In at least one embodiment, these access nodes (ANs) can be referred to as base stations (BSs), NodeBs, evolved NodeBs (eNBs), next Generation NodeBs (gNB), RAN nodes, and so forth, and can comprise ground stations (e.g., terrestrial access points) or satellite stations providing coverage within a geographic area (e.g., a cell). In at least one embodiment, RAN 2016 may include one or more RAN nodes for providing macrocells, e.g., macro RAN node 2018, and one or more RAN nodes for providing femtocells or picocells (e.g., cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells), e.g., low power (LP) RAN node 2020.[0226] In at least one embodiment, any of RAN nodes 2018 and 2020 can terminate an air interface protocol and can be a first point of contact for UEs 2002 and 2004. In at least one embodiment, any of RAN nodes 2018 and 2020 can fulfill various logical functions for RAN 2016 including, but not limited to, radio network controller (RNC) functions such as radio bearer management, uplink and downlink dynamic radio resource management and data packet scheduling, and mobility management.[0227] In at least one embodiment, UEs 2002 and 2004 can be configured to communicate using Orthogonal Frequency-Division Multiplexing (OFDM) communication signals with each other or with any of RAN nodes 2018 and 2020 over a multi-carrier communication channel in accordance with various communication techniques, such as, but not limited to, an Orthogonal Frequency Division Multiple Access (OFDMA) communication technique (e.g., for downlink communications) or a Single Carrier Frequency Division Multiple Access (SC-FDMA) communication technique (e.g., for uplink and ProSe or sidelink communications), and/or
variations thereof. In at least one embodiment, OFDM signals can comprise a plurality of orthogonal sub-carriers.[0228] In at least one embodiment, a downlink resource grid can be used for downlink transmissions from any of RAN nodes 2018 and 2020 to UEs 2002 and 2004, while uplink transmissions can utilize similar techniques. In at least one embodiment, a grid can be a time frequency grid, called a resource grid or time-frequency resource grid, which is a physical resource in a downlink in each slot. In at least one embodiment, such a time frequency plane representation is a common practice for OFDM systems, which makes it intuitive for radio resource allocation. In at least one embodiment, each column and each row of a resource grid corresponds to one OFDM symbol and one OFDM subcarrier, respectively. In at least one embodiment, a duration of a resource grid in a time domain corresponds to one slot in a radio frame. In at least one embodiment, a smallest time-frequency unit in a resource grid is denoted as a resource element. In at least one embodiment, each resource grid comprises a number of resource blocks, which describe a mapping of certain physical channels to resource elements. In at least one embodiment, each resource block comprises a collection of resource elements. In at least one embodiment, in a frequency domain, this may represent a smallest quantity of resources that currently can be allocated. In at least one embodiment, there are several different physical downlink channels that are conveyed using such resource blocks.[0229] In at least one embodiment, a physical downlink shared channel (PDSCH) may carry user data and higher-layer signaling to UEs 2002 and 2004. In at least one embodiment, a physical downlink control channel (PDCCH) may carry information about a transport format and resource allocations related to PDSCH channel, among other things. In at least one embodiment, it may also inform UEs 2002 and 2004 about a transport format, resource allocation, and HARQ (Hybrid Automatic Repeat Request) information related to an uplink shared channel. In at least one embodiment, typically, downlink scheduling (assigning control and shared channel resource blocks to UE 2002 within a cell) may be performed at any of RAN nodes 2018 and 2020 based on channel quality information fed back from any of UEs 2002 and 2004. In at least one embodiment, downlink resource assignment information may be sent on a PDCCH used for (e.g., assigned to) each of UEs 2002 and 2004.[0230] In at least one embodiment, a PDCCH may use control channel elements (CCEs) to
convey control information. In at least one embodiment, before being mapped to resource elements, PDCCH complex-valued symbols may first be organized into quadruplets, which may then be permuted using a sub-block interleaver for rate matching. In at least one embodiment, each PDCCH may be transmitted using one or more of these CCEs, where each CCE may correspond to nine sets of four physical resource elements known as resource element groups (REGs). In at least one embodiment, four Quadrature Phase Shift Keying (QPSK) symbols may be mapped to each REG. In at least one embodiment, PDCCH can be transmitted using one or more CCEs, depending on a size of a downlink control information (DCI) and a channel condition. In at least one embodiment, there can be four or more different PDCCH formats defined in LTE with different numbers of CCEs (e.g., aggregation level, L=1, 2, 4, or 8).[0231] In at least one embodiment, an enhanced physical downlink control channel (EPDCCH) that uses PDSCH resources may be utilized for control information transmission. In at least one embodiment, EPDCCH may be transmitted using one or more enhanced control channel elements (ECCEs). In at least one embodiment, each ECCE may correspond to nine sets of four physical resource elements known as enhanced resource element groups (EREGs). In at least one embodiment, an ECCE may have other numbers of EREGs in some situations.[0232] In at least one embodiment, RAN 2016 is shown to be communicatively coupled to a core network (CN) 2038 via an S1 interface 2022. In at least one embodiment, CN 2038 may be an evolved packet core (EPC) network, a NextGen Packet Core (NPC) network, or some other type of CN. In at least one embodiment, S1 interface 2022 is split into two parts: S1-U interface 2026, which carries traffic data between RAN nodes 2018 and 2020 and serving gateway (S-GW) 2030, and an S1-mobility management entity (MME) interface 2024, which is a signaling interface between RAN nodes 2018 and 2020 and MMEs 2028.[0233] In at least one embodiment, CN 2038 comprises MMEs 2028, S-GW 2030, Packet Data Network (PDN) Gateway (P-GW) 2034, and a home subscriber server (HSS) 2032. In at least one embodiment, MMEs 2028 may be similar in function to a control plane of legacy Serving General Packet Radio Service (GPRS) Support Nodes (SGSN). In at least one embodiment, MMEs 2028 may manage mobility aspects in access such as gateway selection and tracking area list management. In at least one embodiment, HSS 2032 may comprise a database for network users, including subscription-related information to support network entities' handling of
communication sessions. In at least one embodiment, CN 2038 may comprise one or several HSSs 2032, depending on a number of mobile subscribers, on a capacity of equipment, on an organization of a network, etc. In at least one embodiment, HSS 2032 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc.[0234] In at least one embodiment, S-GW 2030 may terminate an S1 interface 2022 towards RAN 2016, and route data packets between RAN 2016 and CN 2038. In at least one embodiment, S-GW 2030 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. In at least one embodiment, other responsibilities may include lawful intercept, charging, and some policy enforcement.[0235] In at least one embodiment, P-GW 2034 may terminate an SGi interface toward a PDN. In at least one embodiment, P-GW 2034 may route data packets between an EPC network 2038 and external networks such as a network including application server 2040 (alternatively referred to as application function (AF)) via an Internet Protocol (IP) interface 2042. In at least one embodiment, application server 2040 may be an element offering applications that use IP bearer resources with a core network (e.g., UMTS Packet Services (PS) domain, LTE PS data services, etc.). In at least one embodiment, P-GW 2034 is shown to be communicatively coupled to an application server 2040 via an IP communications interface 2042. In at least one embodiment, application server 2040 can also be configured to support one or more communication services (e.g., Voice-over-Internet Protocol (VoIP) sessions, PTT sessions, group communication sessions, social networking services, etc.) for UEs 2002 and 2004 via CN 2038.[0236] In at least one embodiment, P-GW 2034 may further be a node for policy enforcement and charging data collection. In at least one embodiment, Policy and Charging Rules Function (PCRF) 2036 is a policy and charging control element of CN 2038. In at least one embodiment, in a non-roaming scenario, there may be a single PCRF in a Home Public Land Mobile Network (HPLMN) associated with a UE's Internet Protocol Connectivity Access Network (IP-CAN) session. In at least one embodiment, in a roaming scenario with local breakout of traffic, there may be two PCRFs associated with a UE's IP-CAN session: a Home PCRF (H-PCRF) within an HPLMN and a Visited PCRF (V-PCRF) within a Visited Public Land Mobile Network (VPLMN). In at least one embodiment, PCRF 2036 may be communicatively
coupled to application server 2040 via P-GW 2034. In at least one embodiment, application server 2040 may signal PCRF 2036 to indicate a new service flow and select an appropriate Quality of Service (QoS) and charging parameters. In at least one embodiment, PCRF 2036 may provision this rule into a Policy and Charging Enforcement Function (PCEF) (not shown) with an appropriate traffic flow template (TFT) and QoS class identifier (QCI), which commences QoS and charging as specified by application server 2040.[0237] Figure 21 illustrates an architecture of a system 2100 of a network in accordance with some embodiments. In at least one embodiment, system 2100 is shown to include a UE 2102, a 5G access node or RAN node (shown as (R)AN node 2108), a User Plane Function (shown as UPF 2104), a Data Network (DN 2106), which may be, in at least one embodiment, operator services, Internet access or 3rd party services, and a 5G Core Network (5GC) (shown as CN 2110).[0238] In at least one embodiment, CN 2110 includes an Authentication Server Function (AUSF 2114); a Core Access and Mobility Management Function (AMF 2112); a Session Management Function (SMF 2118); a Network Exposure Function (NEF 2116); a Policy Control Function (PCF 2122); a Network Function (NF) Repository Function (NRF 2120); a Unified Data Management (UDM 2124); and an Application Function (AF 2126). In at least one embodiment, CN 2110 may also include other elements that are not shown, such as a Structured Data Storage network function (SDSF), an Unstructured Data Storage network function (UDSF), and variations thereof.[0239] In at least one embodiment, UPF 2104 may act as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to DN 2106, and a branching point to support multi-homed PDU sessions. In at least one embodiment, UPF 2104 may also perform packet routing and forwarding, packet inspection, enforce user plane part of policy rules, lawfully intercept packets (UP collection), traffic usage reporting, perform QoS handling for user plane (e.g., packet filtering, gating, UL/DL rate enforcement), perform Uplink Traffic verification (e.g., SDF to QoS flow mapping), transport level packet marking in uplink and downlink, and downlink packet buffering and downlink data notification triggering. In at least one embodiment, UPF 2104 may include an uplink classifier to support routing traffic flows to a data network. In at least one embodiment, DN 2106 may represent various network operator services,
Internet access, or third party services.[0240] In at least one embodiment, AUSF 2114 may store data for authentication of UE 2102 and handle authentication related functionality. In at least one embodiment, AUSF 2114 may facilitate a common authentication framework for various access types.[0241] In at least one embodiment, AMF 2112 may be responsible for registration management (e.g., for registering UE 2102, etc.), connection management, reachability management, mobility management, and lawful interception of AMF-related events, and access authentication and authorization. In at least one embodiment, AMF 2112 may provide transport for SM messages for SMF 2118, and act as a transparent proxy for routing SM messages. In at least one embodiment, AMF 2112 may also provide transport for short message service (SMS) messages between UE 2102 and an SMS function (SMSF) (not shown by Figure 21). In at least one embodiment, AMF 2112 may act as Security Anchor Function (SEA), which may include interaction with AUSF 2114 and UE 2102 and receipt of an intermediate key that was established as a result of UE 2102 authentication process. In at least one embodiment, where USIM based authentication is used, AMF 2112 may retrieve security material from AUSF 2114. In at least one embodiment, AMF 2112 may also include a Security Context Management (SCM) function, which receives a key from SEA that it uses to derive access-network specific keys. In at least one embodiment, furthermore, AMF 2112 may be a termination point of RAN CP interface (N2 reference point), a termination point of NAS (N1) signaling, and perform NAS ciphering and integrity protection.[0242] In at least one embodiment, AMF 2112 may also support NAS signaling with a UE 2102 over an N3 interworking-function (IWF) interface. In at least one embodiment, N3IWF may be used to provide access to untrusted entities. In at least one embodiment, N3IWF may be a termination point for N2 and N3 interfaces for control plane and user plane, respectively, and as such, may handle N2 signaling from SMF and AMF for PDU sessions and QoS, encapsulate/de-encapsulate packets for IPSec and N3 tunneling, mark N3 user-plane packets in uplink, and enforce QoS corresponding to N3 packet marking taking into account QoS requirements associated to such marking received over N2. In at least one embodiment, N3IWF may also relay uplink and downlink control-plane NAS (N1) signaling between UE 2102 and AMF 2112, and relay uplink and downlink user-plane packets between UE 2102 and UPF 2104.
In at least one embodiment, N3IWF also provides mechanisms for IPsec tunnel establishment with UE 2102.[0243] In at least one embodiment, SMF 2118 may be responsible for session management (e.g., session establishment, modification, and release, including tunnel maintenance between UPF and AN node); UE IP address allocation & management (including optional Authorization); selection and control of UP function; configuration of traffic steering at UPF to route traffic to a proper destination; termination of interfaces towards Policy control functions; control part of policy enforcement and QoS; lawful intercept (for SM events and interface to LI System); termination of SM parts of NAS messages; downlink Data Notification; initiator of AN specific SM information, sent via AMF over N2 to AN; determine SSC mode of a session. In at least one embodiment, SMF 2118 may include the following roaming functionality: handle local enforcement to apply QoS SLAs (VPLMN); charging data collection and charging interface (VPLMN); lawful intercept (in VPLMN for SM events and interface to LI System); support for interaction with external DN for transport of signaling for PDU session authorization/authentication by external DN.[0244] In at least one embodiment, NEF 2116 may provide means for securely exposing services and capabilities provided by 3GPP network functions for third party, internal exposure/re-exposure, Application Functions (e.g., AF 2126), edge computing or fog computing systems, etc. In at least one embodiment, NEF 2116 may authenticate, authorize, and/or throttle AFs. In at least one embodiment, NEF 2116 may also translate information exchanged with AF 2126 and information exchanged with internal network functions. In at least one embodiment, NEF 2116 may translate between an AF-Service-Identifier and internal 5GC information. In at least one embodiment, NEF 2116 may also receive information from other network functions (NFs) based on exposed capabilities of other network functions. In at least one embodiment, this information may be stored at NEF 2116 as structured data, or at a data storage NF using standardized interfaces. In at least one embodiment, stored information can then be re-exposed by NEF 2116 to other NFs and AFs, and/or used for other purposes such as analytics.[0245] In at least one embodiment, NRF 2120 may support service discovery functions, receive NF Discovery Requests from NF instances, and provide information of discovered NF instances to NF instances. In at least one embodiment, NRF 2120 also maintains information of
available NF instances and their supported services.[0246] In at least one embodiment, PCF 2122 may provide policy rules to control plane function(s) to enforce them, and may also support unified policy framework to govern network behavior. In at least one embodiment, PCF 2122 may also implement a front end (FE) to access subscription information relevant for policy decisions in a UDR of UDM 2124.[0247] In at least one embodiment, UDM 2124 may handle subscription-related information to support a network entities' handling of communication sessions, and may store subscription data of UE 2102. In at least one embodiment, UDM 2124 may include two parts, an application FE and a User Data Repository (UDR). In at least one embodiment, UDM may include a UDM FE, which is in charge of processing of credentials, location management, subscription management and so on. In at least one embodiment, several different front ends may serve a same user in different transactions. In at least one embodiment, UDM-FE accesses subscription information stored in an UDR and performs authentication credential processing; user identification handling; access authorization; registration/mobility management; and subscription management. In at least one embodiment, UDR may interact with PCF 2122. In at least one embodiment, UDM 2124 may also support SMS management, wherein an SMS-FE implements a similar application logic as discussed previously.[0248] In at least one embodiment, AF 2126 may provide application influence on traffic routing, access to a Network Capability Exposure (NCE), and interact with a policy framework for policy control. In at least one embodiment, NCE may be a mechanism that allows a 5GC and AF 2126 to provide information to each other via NEF 2116, which may be used for edge computing implementations. In at least one embodiment, network operator and third party services may be hosted close to UE 2102 access point of attachment to achieve an efficient service delivery through a reduced end-to-end latency and load on a transport network. In at least one embodiment, for edge computing implementations, 5GC may select a UPF 2104 close to UE 2102 and execute traffic steering from UPF 2104 to DN 2106 via N6 interface. In at least one embodiment, this may be based on UE subscription data, UE location, and information provided by AF 2126. In at least one embodiment, AF 2126 may influence UPF (re)selection and traffic routing. In at least one embodiment, based on operator deployment, when AF 2126 is considered to be a trusted entity, a network operator may permit AF 2126 to interact directly
with relevant NFs.[0249] In at least one embodiment, CN 2110 may include an SMSF, which may be responsible for SMS subscription checking and verification, and relaying SM messages to/from UE 2102 to/from other entities, such as an SMS-GMSC/IWMSC/SMS-router. In at least one embodiment, SMSF may also interact with AMF 2112 and UDM 2124 for a notification procedure that UE 2102 is available for SMS transfer (e.g., set a UE not reachable flag, and notifying UDM 2124 when UE 2102 is available for SMS).[0250] In at least one embodiment, system 2100 may include the following service-based interfaces: Namf: Service-based interface exhibited by AMF; Nsmf: Service-based interface exhibited by SMF; Nnef: Service-based interface exhibited by NEF; Npcf: Service-based interface exhibited by PCF; Nudm: Service-based interface exhibited by UDM; Naf: Service-based interface exhibited by AF; Nnrf: Service-based interface exhibited by NRF; and Nausf: Service-based interface exhibited by AUSF.[0251] In at least one embodiment, system 2100 may include the following reference points: N1: Reference point between UE and AMF; N2: Reference point between (R)AN and AMF; N3: Reference point between (R)AN and UPF; N4: Reference point between SMF and UPF; and N6: Reference point between UPF and a Data Network. In at least one embodiment, there may be many more reference points and/or service-based interfaces between NF services in NFs, however, these interfaces and reference points have been omitted for clarity. In at least one embodiment, an N5 reference point may be between a PCF and AF; an N7 reference point may be between PCF and SMF; an N11 reference point between AMF and SMF; etc. In at least one embodiment, CN 2110 may include an Nx interface, which is an inter-CN interface between MME and AMF 2112 in order to enable interworking between CN 2110 and CN 7221.[0252] In at least one embodiment, system 2100 may include multiple RAN nodes (such as (R)AN node 2108) wherein an Xn interface is defined between two or more (R)AN nodes 2108 (e.g., gNBs) connecting to 5GC 410, between a (R)AN node 2108 (e.g., gNB) connecting to CN 2110 and an eNB (e.g., a macro RAN node), and/or between two eNBs connecting to CN 2110.[0253] In at least one embodiment, Xn interface may include an Xn user plane (Xn-U)
interface and an Xn control plane (Xn-C) interface. In at least one embodiment, Xn-U may provide non-guaranteed delivery of user plane PDUs and support/provide data forwarding and flow control functionality. In at least one embodiment, Xn-C may provide management and error handling functionality, functionality to manage an Xn-C interface; mobility support for UE 2102 in a connected mode (e.g., CM-CONNECTED) including functionality to manage UE mobility for connected mode between one or more (R)AN nodes 2108. In at least one embodiment, mobility support may include context transfer from an old (source) serving (R)AN node 2108 to a new (target) serving (R)AN node 2108; and control of user plane tunnels between an old (source) serving (R)AN node 2108 and a new (target) serving (R)AN node 2108.[0254] In at least one embodiment, a protocol stack of an Xn-U may include a transport network layer built on Internet Protocol (IP) transport layer, and a GTP-U layer on top of a UDP and/or IP layer(s) to carry user plane PDUs. In at least one embodiment, Xn-C protocol stack may include an application layer signaling protocol (referred to as Xn Application Protocol (Xn-AP)) and a transport network layer that is built on an SCTP layer. In at least one embodiment, SCTP layer may be on top of an IP layer. In at least one embodiment, SCTP layer provides a guaranteed delivery of application layer messages. In at least one embodiment, in a transport IP layer, point-to-point transmission is used to deliver signaling PDUs. In at least one embodiment, an Xn-U protocol stack and/or an Xn-C protocol stack may be the same as or similar to a user plane and/or control plane protocol stack(s) shown and described herein.[0255] Figure 22 is an illustration of a control plane protocol stack in accordance with some embodiments. In at least one embodiment, a control plane 2200 is shown as a communications protocol stack between UE 2002 (or alternatively, UE 2004), RAN 2016, and MME(s) 2028.[0256] In at least one embodiment, PHY layer 2202 may transmit or receive information used by MAC layer 2204 over one or more air interfaces. In at least one embodiment, PHY layer 2202 may further perform link adaptation or adaptive modulation and coding (AMC), power control, cell search (e.g., for initial synchronization and handover purposes), and other measurements used by higher layers, such as an RRC layer 2210. In at least one embodiment, PHY layer 2202 may still further perform error detection on transport channels, forward error correction (FEC) coding/de-coding of transport channels, modulation/demodulation of physical channels, interleaving, rate matching, mapping onto physical channels, and Multiple Input
Multiple Output (MIMO) antenna processing.[0257] In at least one embodiment, MAC layer 2204 may perform mapping between logical channels and transport channels, multiplexing of MAC service data units (SDUs) from one or more logical channels onto transport blocks (TB) to be delivered to PHY via transport channels, de-multiplexing MAC SDUs to one or more logical channels from transport blocks (TB) delivered from PHY via transport channels, multiplexing MAC SDUs onto TBs, scheduling information reporting, error correction through hybrid automatic repeat request (HARQ), and logical channel prioritization.[0258] In at least one embodiment, RLC layer 2206 may operate in a plurality of modes of operation, including: Transparent Mode (TM), Unacknowledged Mode (UM), and Acknowledged Mode (AM). In at least one embodiment, RLC layer 2206 may execute transfer of upper layer protocol data units (PDUs), error correction through automatic repeat request (ARQ) for AM data transfers, and concatenation, segmentation and reassembly of RLC SDUs for UM and AM data transfers. In at least one embodiment, RLC layer 2206 may also execute re-segmentation of RLC data PDUs for AM data transfers, reorder RLC data PDUs for UM and AM data transfers, detect duplicate data for UM and AM data transfers, discard RLC SDUs for UM and AM data transfers, detect protocol errors for AM data transfers, and perform RLC re-establishment.[0259] In at least one embodiment, PDCP layer 2208 may execute header compression and decompression of IP data, maintain PDCP Sequence Numbers (SNs), perform in-sequence delivery of upper layer PDUs at re-establishment of lower layers, eliminate duplicates of lower layer SDUs at re-establishment of lower layers for radio bearers mapped on RLC AM, cipher and decipher control plane data, perform integrity protection and integrity verification of control plane data, control timer-based discard of data, and perform security operations (e.g., ciphering, deciphering, integrity protection, integrity verification, etc.).[0260] In at least one embodiment, main services and functions of an RRC layer 2210 may include broadcast of system information (e.g., included in Master Information Blocks (MIBs) or System Information Blocks (SIBs) related to a non-access stratum (NAS)), broadcast of system information related to an access stratum (AS), paging, establishment, maintenance and release of
an RRC connection between a UE and E-UTRAN (e.g., RRC connection paging, RRC connection establishment, RRC connection modification, and RRC connection release), establishment, configuration, maintenance and release of point-to-point radio bearers, security functions including key management, inter radio access technology (RAT) mobility, and measurement configuration for UE measurement reporting. In at least one embodiment, said MIBs and SIBs may comprise one or more information elements (IEs), which may each comprise individual data fields or data structures.[0261] In at least one embodiment, UE 2002 and RAN 2016 may utilize a Uu interface (e.g., an LTE-Uu interface) to exchange control plane data via a protocol stack comprising PHY layer 2202, MAC layer 2204, RLC layer 2206, PDCP layer 2208, and RRC layer 2210.[0262] In at least one embodiment, non-access stratum (NAS) protocols (NAS protocols 2212) form a highest stratum of a control plane between UE 2002 and MME(s) 2028. In at least one embodiment, NAS protocols 2212 support mobility of UE 2002 and session management procedures to establish and maintain IP connectivity between UE 2002 and P-GW 2034.[0263] In at least one embodiment, S1 Application Protocol (S1-AP) layer (S1-AP layer 2222) may support functions of an S1 interface and comprise Elementary Procedures (EPs). In at least one embodiment, an EP is a unit of interaction between RAN 2016 and CN 2028. In at least one embodiment, S1-AP layer services may comprise two groups: UE-associated services and non-UE-associated services. In at least one embodiment, these services perform functions including, but not limited to: E-UTRAN Radio Access Bearer (E-RAB) management, UE capability indication, mobility, NAS signaling transport, RAN Information Management (RIM), and configuration transfer.[0264] In at least one embodiment, Stream Control Transmission Protocol (SCTP) layer (alternatively referred to as a stream control transmission protocol/internet protocol (SCTP/IP) layer) (SCTP layer 2220) may ensure reliable delivery of signaling messages between RAN 2016 and MME(s) 2028 based, in part, on an IP protocol, supported by an IP layer 2218. In at least one embodiment, L2 layer 2216 and an L1 layer 2214 may refer to communication links (e.g., wired or wireless) used by a RAN node and MME to exchange information.[0265] In at least one embodiment, RAN 2016 and MME(s) 2028 may utilize an S1-MME
interface to exchange control plane data via a protocol stack comprising an L1 layer 2214, L2 layer 2216, IP layer 2218, SCTP layer 2220, and S1-AP layer 2222.[0266] Figure 23 is an illustration of a user plane protocol stack in accordance with at least one embodiment. In at least one embodiment, a user plane 2300 is shown as a communications protocol stack between a UE 2002, RAN 2016, S-GW 2030, and P-GW 2034. In at least one embodiment, user plane 2300 may utilize the same protocol layers as control plane 2200. In at least one embodiment, UE 2002 and RAN 2016 may utilize a Uu interface (e.g., an LTE-Uu interface) to exchange user plane data via a protocol stack comprising PHY layer 2202, MAC layer 2204, RLC layer 2206, and PDCP layer 2208.[0267] In at least one embodiment, General Packet Radio Service (GPRS) Tunneling Protocol for a user plane (GTP-U) layer (GTP-U layer 2304) may be used for carrying user data within a GPRS core network and between a radio access network and a core network. In at least one embodiment, user data transported can be packets in any of IPv4, IPv6, or PPP formats. In at least one embodiment, UDP and IP security (UDP/IP) layer (UDP/IP layer 2302) may provide checksums for data integrity, port numbers for addressing different functions at a source and destination, and encryption and authentication on selected data flows. In at least one embodiment, RAN 2016 and S-GW 2030 may utilize an S1-U interface to exchange user plane data via a protocol stack comprising L1 layer 2214, L2 layer 2216, UDP/IP layer 2302, and GTP-U layer 2304. In at least one embodiment, S-GW 2030 and P-GW 2034 may utilize an S5/S8a interface to exchange user plane data via a protocol stack comprising L1 layer 2214, L2 layer 2216, UDP/IP layer 2302, and GTP-U layer 2304. In at least one embodiment, as discussed above with respect to Figure 22, NAS protocols support mobility of UE 2002 and session management procedures to establish and maintain IP connectivity between UE 2002 and P-GW 2034.[0268] Figure 24 illustrates components 2400 of a core network in accordance with at least one embodiment. In at least one embodiment, components of CN 2038 may be implemented in one physical node or separate physical nodes including components to read and execute instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium). In at least one embodiment, Network Functions Virtualization (NFV) is utilized to virtualize any or all of the above-described network node functions via executable
instructions stored in one or more computer readable storage mediums (described in further detail below). In at least one embodiment, a logical instantiation of CN 2038 may be referred to as a network slice 2402 (e.g., network slice 2402 is shown to include HSS 2032, MME(s) 2028, and S-GW 2030). In at least one embodiment, a logical instantiation of a portion of CN 2038 may be referred to as a network sub-slice 2404 (e.g., network sub-slice 2404 is shown to include P-GW 2034 and PCRF 2036).[0269] In at least one embodiment, NFV architectures and infrastructures may be used to virtualize one or more network functions, alternatively performed by proprietary hardware, onto physical resources comprising a combination of industry-standard server hardware, storage hardware, or switches. In at least one embodiment, NFV systems can be used to execute virtual or reconfigurable implementations of one or more EPC components/functions.[0270] Figure 25 is a block diagram illustrating components, according to at least one embodiment, of a system 2500 to support network function virtualization (NFV). In at least one embodiment, system 2500 is illustrated as including a virtualized infrastructure manager (shown as VIM 2502), a network function virtualization infrastructure (shown as NFVI 2504), a VNF manager (shown as VNFM 2506), virtualized network functions (shown as VNF 2508), an element manager (shown as EM 2510), an NFV Orchestrator (shown as NFVO 2512), and a network manager (shown as NM 2514).[0271] In at least one embodiment, VIM 2502 manages resources of NFVI 2504. In at least one embodiment, NFVI 2504 can include physical or virtual resources and applications (including hypervisors) used to execute system 2500. In at least one embodiment, VIM 2502 may manage a life cycle of virtual resources with NFVI 2504 (e.g., creation, maintenance, and tear down of virtual machines (VMs) associated with one or more physical resources), track VM instances, track performance, fault and security of VM instances and associated physical resources, and expose VM instances and associated physical resources to other management systems.[0272] In at least one embodiment, VNFM 2506 may manage VNF 2508. In at least one embodiment, VNF 2508 may be used to execute EPC components/ functions. In at least one embodiment, VNFM 2506 may manage a life cycle of VNF 2508 and track performance, fault
and security of virtual aspects of VNF 2508. In at least one embodiment, EM 2510 may track performance, fault and security of functional aspects of VNF 2508. In at least one embodiment, tracking data from VNFM 2506 and EM 2510 may comprise, in at least one embodiment, performance measurement (PM) data used by VIM 2502 or NFVI 2504. In at least one embodiment, both VNFM 2506 and EM 2510 can scale up/down a quantity of VNFs of system 2500.[0273] In at least one embodiment, NFVO 2512 may coordinate, authorize, release and engage resources of NFVI 2504 in order to provide a requested service (e.g., to execute an EPC function, component, or slice). In at least one embodiment, NM 2514 may provide a package of end-user functions with responsibility for a management of a network, which may include network elements with VNFs, non-virtualized network functions, or both (management of VNFs may occur via an EM 2510).Computer-Based Systems[0274] The following figures set forth, without limitation, exemplary computer-based systems that can be used to implement at least one embodiment.[0275] Figure 26 illustrates a processing system 2600, in accordance with at least one embodiment. In at least one embodiment, processing system 2600 includes one or more processors 2602 and one or more graphics processors 2608, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 2602 or processor cores 2607. In at least one embodiment, processing system 2600 is a processing platform incorporated within a system-on-a-chip (“SoC”) integrated circuit for use in mobile, handheld, or embedded devices.[0276] In at least one embodiment, processing system 2600 can include, or be incorporated within a server-based gaming platform, a game console, a media console, a mobile gaming console, a handheld game console, or an online game console. In at least one embodiment, processing system 2600 is a mobile phone, smart phone, tablet computing device or mobile Internet device. In at least one embodiment, processing system 2600 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In at least one embodiment,
processing system 2600 is a television or set-top box device having one or more processors 2602 and a graphical interface generated by one or more graphics processors 2608.[0277] In at least one embodiment, one or more processors 2602 each include one or more processor cores 2607 to process instructions which, when executed, perform operations for system and user software. In at least one embodiment, each of one or more processor cores 2607 is configured to process a specific instruction set 2609. In at least one embodiment, instruction set 2609 may facilitate Complex Instruction Set Computing (“CISC”), Reduced Instruction Set Computing (“RISC”), or computing via a Very Long Instruction Word (“VLIW”). In at least one embodiment, processor cores 2607 may each process a different instruction set 2609, which may include instructions to facilitate emulation of other instruction sets. In at least one embodiment, processor core 2607 may also include other processing devices, such as a digital signal processor (“DSP”).[0278] In at least one embodiment, processor 2602 includes cache memory (“cache”) 2604. In at least one embodiment, processor 2602 can have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory is shared among various components of processor 2602. In at least one embodiment, processor 2602 also uses an external cache (e.g., a Level 3 (“L3”) cache or Last Level Cache (“LLC”)) (not shown), which may be shared among processor cores 2607 using known cache coherency techniques. In at least one embodiment, register file 2606 is additionally included in processor 2602, which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). In at least one embodiment, register file 2606 may include general-purpose registers or other registers.[0279] In at least one embodiment, one or more processor(s) 2602 are coupled with one or more interface bus(es) 2610 to transmit communication signals such as address, data, or control signals between processor 2602 and other components in processing system 2600. In at least one embodiment, interface bus 2610 can be a processor bus, such as a version of a Direct Media Interface (“DMI”) bus. In at least one embodiment, interface bus 2610 is not limited to a DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., “PCI,” PCI Express (“PCIe”)), memory buses, or other types of interface buses. In at least one embodiment, processor(s) 2602 include an integrated memory controller 2616 and a platform
controller hub 2630. In at least one embodiment, memory controller 2616 facilitates communication between a memory device and other components of processing system 2600, while platform controller hub (“PCH”) 2630 provides connections to Input/Output (“I/O”) devices via a local I/O bus.[0280] In at least one embodiment, memory device 2620 can be a dynamic random access memory (“DRAM”) device, a static random access memory (“SRAM”) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as processor memory. In at least one embodiment memory device 2620 can operate as system memory for processing system 2600, to store data 2622 and instructions 2621 for use when one or more processors 2602 executes an application or process. In at least one embodiment, memory controller 2616 also couples with an optional external graphics processor 2612, which may communicate with one or more graphics processors 2608 in processors 2602 to perform graphics and media operations. In at least one embodiment, a display device 2611 can connect to processor(s) 2602. In at least one embodiment display device 2611 can include one or more of an internal display device, as in a mobile electronic device or a laptop device or an external display device attached via a display interface (e.g., DisplayPort, etc.). In at least one embodiment, display device 2611 can include a head mounted display (“HMD”) such as a stereoscopic display device for use in virtual reality (“VR”) applications or augmented reality (“AR”) applications.[0281] In at least one embodiment, platform controller hub 2630 enables peripherals to connect to memory device 2620 and processor 2602 via a high-speed I/O bus. In at least one embodiment, I/O peripherals include, but are not limited to, an audio controller 2646, a network controller 2634, a firmware interface 2628, a wireless transceiver 2626, touch sensors 2625, a data storage device 2624 (e.g., hard disk drive, flash memory, etc.). In at least one embodiment, data storage device 2624 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as PCI, or PCIe. In at least one embodiment, touch sensors 2625 can include touch screen sensors, pressure sensors, or fingerprint sensors. In at least one embodiment, wireless transceiver 2626 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, or Long Term Evolution (“LTE”) transceiver. In at least one embodiment, firmware interface 2628 enables communication with system firmware, and can be,
in at least one embodiment, a unified extensible firmware interface (“UEFI”). In at least one embodiment, network controller 2634 can enable a network connection to a wired network. In at least one embodiment, a high-performance network controller (not shown) couples with interface bus 2610. In at least one embodiment, audio controller 2646 is a multi-channel high-definition audio controller. In at least one embodiment, processing system 2600 includes an optional legacy I/O controller 2640 for coupling legacy (e.g., Personal System 2 (“PS/2”)) devices to processing system 2600. In at least one embodiment, platform controller hub 2630 can also connect to one or more Universal Serial Bus (“USB”) controllers 2642 to connect input devices, such as keyboard and mouse 2643 combinations, a camera 2644, or other USB input devices.[0282] In at least one embodiment, an instance of memory controller 2616 and platform controller hub 2630 may be integrated into a discrete external graphics processor, such as external graphics processor 2612. In at least one embodiment, platform controller hub 2630 and/or memory controller 2616 may be external to one or more processor(s) 2602. In at least one embodiment, processing system 2600 can include an external memory controller 2616 and platform controller hub 2630, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with processor(s) 2602.[0283] Figure 27 illustrates a computer system 2700, in accordance with at least one embodiment. In at least one embodiment, computer system 2700 may be a system with interconnected devices and components, an SOC, or some combination. In at least one embodiment, computer system 2700 is formed with a processor 2702 that may include execution units to execute an instruction. In at least one embodiment, computer system 2700 may include, without limitation, a component, such as processor 2702 to employ execution units including logic to perform algorithms for processing data. In at least one embodiment, computer system 2700 may include processors, such as PENTIUM® Processor family, Xeon™, Itanium®, XScale™, and/or StrongARM™, Intel® Core™, or Intel® Nervana™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes, and the like) may also be used. In at least one embodiment, computer system 2700 may execute a version of WINDOWS operating system available from Microsoft Corporation of Redmond, Wash., although other
operating systems (e.g., UNIX and Linux), embedded software, and/or graphical user interfaces, may also be used.[0284] In at least one embodiment, computer system 2700 may be used in other devices such as handheld devices and embedded applications. Examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (“PDAs”), and handheld PCs. In at least one embodiment, embedded applications may include a microcontroller, a digital signal processor (DSP), an SoC, network computers (“NetPCs”), set-top boxes, network hubs, wide area network (“WAN”) switches, or any other system that may perform one or more instructions.[0285] In at least one embodiment, computer system 2700 may include, without limitation, processor 2702 that may include, without limitation, one or more execution units 2708 that may be configured to execute a Compute Unified Device Architecture (“CUDA”) (CUDA® is developed by NVIDIA Corporation of Santa Clara, CA) program. In at least one embodiment, a CUDA program is at least a portion of a software application written in a CUDA programming language. In at least one embodiment, computer system 2700 is a single processor desktop or server system. In at least one embodiment, computer system 2700 may be a multiprocessor system. In at least one embodiment, processor 2702 may include, without limitation, a CISC microprocessor, a RISC microprocessor, a VLIW microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, in at least one embodiment. In at least one embodiment, processor 2702 may be coupled to a processor bus 2710 that may transmit data signals between processor 2702 and other components in computer system 2700.[0286] In at least one embodiment, processor 2702 may include, without limitation, a Level 1 (“L1”) internal cache memory (“cache”) 2704. In at least one embodiment, processor 2702 may have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory may reside external to processor 2702. In at least one embodiment, processor 2702 may also include a combination of both internal and external caches. In at least one embodiment, a register file 2706 may store different types of data in various registers including, without limitation, integer registers, floating point registers, status registers, and an instruction pointer register.
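As a purely illustrative, non-limiting toy model of the multi-level cache lookup described above (probing a closest cache first, falling back to a lower level, and finally to memory), the following Python sketch uses assumed level names, capacities, and a naive eviction policy; it does not describe the actual circuitry or policies of processor 2702.

class CacheLevel:
    """Toy cache level: a bounded dictionary of resident lines with a hit counter."""
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity
        self.lines, self.hits = {}, 0

    def lookup(self, address):
        if address in self.lines:
            self.hits += 1
            return True
        return False

    def fill(self, address, value):
        if len(self.lines) >= self.capacity:
            # Naive eviction: drop an arbitrary resident line (illustration only).
            self.lines.pop(next(iter(self.lines)))
        self.lines[address] = value

def load(address, levels, memory):
    """Probe caches from closest to farthest; on a full miss, read memory and fill each level."""
    for level in levels:
        if level.lookup(address):
            return level.lines[address]
    value = memory[address]
    for level in levels:
        level.fill(address, value)
    return value

memory = {addr: addr * 2 for addr in range(1024)}
l1, l2 = CacheLevel("L1", 64), CacheLevel("L2", 512)
for addr in (0, 1, 2, 0, 1, 2, 100, 0):
    load(addr, (l1, l2), memory)
print(l1.hits, l2.hits)   # expected: 4 0 for this access pattern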
[0287] In at least one embodiment, execution unit 2708, including, without limitation, logic to perform integer and floating point operations, also resides in processor 2702. Processor 2702 may also include a microcode (“ucode”) read only memory (“ROM”) that stores microcode for certain macro instructions. In at least one embodiment, execution unit 2708 may include logic to handle a packed instruction set 2709. In at least one embodiment, by including packed instruction set 2709 in an instruction set of a general-purpose processor 2702, along with associated circuitry to execute instructions, operations used by many multimedia applications may be performed using packed data in a general-purpose processor 2702. In at least one embodiment, many multimedia applications may be accelerated and executed more efficiently by using full width of a processor's data bus for performing operations on packed data, which may eliminate a need to transfer smaller units of data across a processor's data bus to perform one or more operations one data element at a time.[0288] In at least one embodiment, execution unit 2708 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. In at least one embodiment, computer system 2700 may include, without limitation, a memory 2720. In at least one embodiment, memory 2720 may be implemented as a DRAM device, an SRAM device, flash memory device, or other memory device. Memory 2720 may store instruction(s) 2719 and/or data 2721 represented by data signals that may be executed by processor 2702.[0289] In at least one embodiment, a system logic chip may be coupled to processor bus 2710 and memory 2720. In at least one embodiment, a system logic chip may include, without limitation, a memory controller hub (“MCH”) 2716, and processor 2702 may communicate with MCH 2716 via processor bus 2710. In at least one embodiment, MCH 2716 may provide a high bandwidth memory path 2718 to memory 2720 for instruction and data storage and for storage of graphics commands, data and textures. In at least one embodiment, MCH 2716 may direct data signals between processor 2702, memory 2720, and other components in computer system 2700 and to bridge data signals between processor bus 2710, memory 2720, and a system I/O 2722. In at least one embodiment, system logic chip may provide a graphics port for coupling to a graphics controller. In at least one embodiment, MCH 2716 may be coupled to memory 2720 through high bandwidth memory path 2718 and graphics/video card 2712 may be coupled to MCH 2716 through an Accelerated Graphics Port (“AGP”) interconnect 2714.
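To make the packed-data point above concrete, the following purely illustrative Python comparison contrasts element-at-a-time processing with a single operation over a whole packed array; NumPy is used here only as an analogy and is not the packed instruction set 2709 itself, and the measured times will vary by machine.

import time
import numpy as np

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

# Element-at-a-time processing: one data element is handled per step.
start = time.perf_counter()
scalar_sum = [x + y for x, y in zip(a, b)]
scalar_time = time.perf_counter() - start

# Packed processing: one wide operation handles many elements at once.
start = time.perf_counter()
packed_sum = a + b
packed_time = time.perf_counter() - start

assert np.allclose(scalar_sum, packed_sum)
print(f"element-at-a-time: {scalar_time:.3f}s  packed: {packed_time:.3f}s")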
[0290] In at least one embodiment, computer system 2700 may use system I/O 2722 that is a proprietary hub interface bus to couple MCH 2716 to I/O controller hub (“ICH”) 2730. In at least one embodiment, ICH 2730 may provide direct connections to some I/O devices via a local I/O bus. In at least one embodiment, local I/O bus may include, without limitation, a high-speed I/O bus for connecting peripherals to memory 2720, a chipset, and processor 2702. Examples may include, without limitation, an audio controller 2729, a firmware hub (“flash BIOS”) 2728, a wireless transceiver 2726, a data storage 2724, a legacy I/O controller 2723 containing a user input interface 2725 and a keyboard interface, a serial expansion port 2777, such as a USB port, and a network controller 2734. Data storage 2724 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.[0291] In at least one embodiment, Figure 27 illustrates a system, which includes interconnected hardware devices or “chips.” In at least one embodiment, Figure 27 may illustrate an exemplary SoC. In at least one embodiment, devices illustrated in Figure 27 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe), or some combination thereof. In at least one embodiment, one or more components of system 2700 are interconnected using compute express link (“CXL”) interconnects.[0292] Figure 28 illustrates a system 2800, in accordance with at least one embodiment. In at least one embodiment, system 2800 is an electronic device that utilizes a processor 2810. In at least one embodiment, system 2800 may be, in at least one embodiment and without limitation, a notebook, a tower server, a rack server, a blade server, a laptop, a desktop, a tablet, a mobile device, a phone, an embedded computer, or any other suitable electronic device.[0293] In at least one embodiment, system 2800 may include, without limitation, processor 2810 communicatively coupled to any suitable number or kind of components, peripherals, modules, or devices. In at least one embodiment, processor 2810 is coupled using a bus or interface, such as an I2C bus, a System Management Bus (“SMBus”), a Low Pin Count (“LPC”) bus, a Serial Peripheral Interface (“SPI”), a High Definition Audio (“HDA”) bus, a Serial Advance Technology Attachment (“SATA”) bus, a USB (versions 1, 2, 3), or a Universal Asynchronous Receiver/Transmitter (“UART”) bus. In at least one embodiment, Figure 28 illustrates a system which includes interconnected hardware devices or “chips.” In at least one embodiment, Figure 28 may illustrate an exemplary SoC. In at least one embodiment, devices
illustrated in Figure 28 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe) or some combination thereof. In at least one embodiment, one or more components of Figure 28 are interconnected using CXL interconnects.[0294] In at least one embodiment, Figure 28 may include a display 2824, a touch screen 2825, a touch pad 2830, a Near Field Communications unit (“NFC”) 2845, a sensor hub 2840, a thermal sensor 2846, an Express Chipset (“EC”) 2835, a Trusted Platform Module (“TPM”) 2838, BIOS/firmware/flash memory (“BIOS, FW Flash”) 2822, a DSP 2860, a Solid State Disk (“SSD”) or Hard Disk Drive (“HDD”) 2820, a wireless local area network unit (“WLAN”) 2850, a Bluetooth unit 2852, a Wireless Wide Area Network unit (“WWAN”) 2856, a Global Positioning System (“GPS”) 2855, a camera (“USB 3.0 camera”) 2854 such as a USB 3.0 camera, or a Low Power Double Data Rate (“LPDDR”) memory unit (“LPDDR3”) 2815 implementing, in at least one embodiment, an LPDDR3 standard. These components may each be implemented in any suitable manner.[0295] In at least one embodiment, other components may be communicatively coupled to processor 2810 through components discussed above. In at least one embodiment, an accelerometer 2841, an Ambient Light Sensor (“ALS”) 2842, a compass 2843, and a gyroscope 2844 may be communicatively coupled to sensor hub 2840. In at least one embodiment, a thermal sensor 2839, a fan 2837, a keyboard 2846, and a touch pad 2830 may be communicatively coupled to EC 2835. In at least one embodiment, a speaker 2863, headphones 2864, and a microphone (“mic”) 2865 may be communicatively coupled to an audio unit (“audio codec and class d amp”) 2864, which may in turn be communicatively coupled to DSP 2860. In at least one embodiment, audio unit 2864 may include, without limitation, an audio coder/decoder (“codec”) and a class D amplifier. In at least one embodiment, a SIM card (“SIM”) 2857 may be communicatively coupled to WWAN unit 2856. In at least one embodiment, components such as WLAN unit 2850 and Bluetooth unit 2852, as well as WWAN unit 2856, may be implemented in a Next Generation Form Factor (“NGFF”).[0296] Figure 29 illustrates an exemplary integrated circuit 2900, in accordance with at least one embodiment. In at least one embodiment, exemplary integrated circuit 2900 is an SoC that may be fabricated using one or more IP cores. In at least one embodiment, integrated circuit 2900 includes one or more application processor(s) 2905 (e.g., CPUs), at least one graphics
processor 2910, and may additionally include an image processor 2915 and/or a video processor 2920, any of which may be a modular IP core. In at least one embodiment, integrated circuit 2900 includes peripheral or bus logic including a USB controller 2925, a UART controller 2930, an SPI/SDIO controller 2935, and an I2S/I2C controller 2940. In at least one embodiment, integrated circuit 2900 can include a display device 2945 coupled to one or more of a high-definition multimedia interface (“HDMI”) controller 2950 and a mobile industry processor interface (“MIPI”) display interface 2955. In at least one embodiment, storage may be provided by a flash memory subsystem 2960 including flash memory and a flash memory controller. In at least one embodiment, a memory interface may be provided via a memory controller 2965 for access to SDRAM or SRAM memory devices. In at least one embodiment, some integrated circuits additionally include an embedded security engine 2970.[0297] Figure 30 illustrates a computing system 3000, according to at least one embodiment. In at least one embodiment, computing system 3000 includes a processing subsystem 3001 having one or more processor(s) 3002 and a system memory 3004 communicating via an interconnection path that may include a memory hub 3005. In at least one embodiment, memory hub 3005 may be a separate component within a chipset component or may be integrated within one or more processor(s) 3002. In at least one embodiment, memory hub 3005 couples with an I/O subsystem 3011 via a communication link 3006. In at least one embodiment, I/O subsystem 3011 includes an I/O hub 3007 that can enable computing system 3000 to receive input from one or more input device(s) 3008. In at least one embodiment, I/O hub 3007 can enable a display controller, which may be included in one or more processor(s) 3002, to provide outputs to one or more display device(s) 3010A. In at least one embodiment, one or more display device(s) 3010A coupled with I/O hub 3007 can include a local, internal, or embedded display device.[0298] In at least one embodiment, processing subsystem 3001 includes one or more parallel processor(s) 3012 coupled to memory hub 3005 via a bus or other communication link 3013. In at least one embodiment, communication link 3013 may be one of any number of standards-based communication link technologies or protocols, such as, but not limited to, PCIe, or may be a vendor-specific communications interface or communications fabric. In at least one embodiment, one or more parallel processor(s) 3012 form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing
clusters, such as a many integrated core processor. In at least one embodiment, one or more parallel processor(s) 3012 form a graphics processing subsystem that can output pixels to one of one or more display device(s) 3010A coupled via I/O Hub 3007. In at least one embodiment, one or more parallel processor(s) 3012 can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s) 3010B.[0299] In at least one embodiment, a system storage unit 3014 can connect to I/O hub 3007 to provide a storage mechanism for computing system 3000. In at least one embodiment, an I/O switch 3016 can be used to provide an interface mechanism to enable connections between I/O hub 3007 and other components, such as a network adapter 3018 and/or wireless network adapter 3019 that may be integrated into a platform, and various other devices that can be added via one or more add-in device(s) 3020. In at least one embodiment, network adapter 3018 can be an Ethernet adapter or another wired network adapter. In at least one embodiment, wireless network adapter 3019 can include one or more of a Wi-Fi, Bluetooth, NFC, or other network device that includes one or more wireless radios.[0300] In at least one embodiment, computing system 3000 can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and/or variations thereof, that may also be connected to I/O hub 3007. In at least one embodiment, communication paths interconnecting various components in Figure 30 may be implemented using any suitable protocols, such as PCI-based protocols (e.g., PCIe), or other bus or point-to-point communication interfaces and/or protocol(s), such as NVLink high-speed interconnect, or interconnect protocols.[0301] In at least one embodiment, one or more parallel processor(s) 3012 incorporate circuitry optimized for graphics and video processing, including, in at least one embodiment, video output circuitry, and constitute a graphics processing unit (“GPU”). In at least one embodiment, one or more parallel processor(s) 3012 incorporate circuitry optimized for general purpose processing. In at least one embodiment, components of computing system 3000 may be integrated with one or more other system elements on a single integrated circuit. In at least one embodiment, one or more parallel processor(s) 3012, memory hub 3005, processor(s) 3002, and I/O hub 3007 can be integrated into an SoC integrated circuit. In at least one embodiment, components of computing system 3000 can be integrated into a single package to form a system in package (“SIP”)
configuration. In at least one embodiment, at least a portion of components of computing system 3000 can be integrated into a multi-chip module (“MCM”), which can be interconnected with other multi-chip modules into a modular computing system. In at least one embodiment, I/O subsystem 3011 and display devices 3010B are omitted from computing system 3000.

Processing Systems

[0302] The following figures set forth, without limitation, exemplary processing systems that can be used to implement at least one embodiment.[0303] Figure 31 illustrates an accelerated processing unit (“APU”) 3100, in accordance with at least one embodiment. In at least one embodiment, APU 3100 is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, APU 3100 can be configured to execute an application program, such as a CUDA program. In at least one embodiment, APU 3100 includes, without limitation, a core complex 3110, a graphics complex 3140, fabric 3160, I/O interfaces 3170, memory controllers 3180, a display controller 3192, and a multimedia engine 3194. In at least one embodiment, APU 3100 may include, without limitation, any number of core complexes 3110, any number of graphics complexes 3140, any number of display controllers 3192, and any number of multimedia engines 3194 in any combination. For explanatory purposes, multiple instances of like objects are denoted herein with reference numbers identifying an object and parenthetical numbers identifying an instance where needed.[0304] In at least one embodiment, core complex 3110 is a CPU, graphics complex 3140 is a GPU, and APU 3100 is a processing unit that integrates, without limitation, core complex 3110 and graphics complex 3140 onto a single chip. In at least one embodiment, some tasks may be assigned to core complex 3110 and other tasks may be assigned to graphics complex 3140. In at least one embodiment, core complex 3110 is configured to execute main control software associated with APU 3100, such as an operating system. In at least one embodiment, core complex 3110 is a master processor of APU 3100, controlling and coordinating operations of other processors. In at least one embodiment, core complex 3110 issues commands that control an operation of graphics complex 3140. In at least one embodiment, core complex 3110 can be configured to execute host executable code derived from CUDA source code, and graphics complex 3140 can be configured to execute device executable code derived from CUDA source code.
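By way of illustration only, the following minimal CUDA sketch shows the host/device split described above: the main function is host executable code of the kind core complex 3110 may execute, while the __global__ kernel is device executable code of the kind graphics complex 3140 may execute. The kernel name, buffer size, and launch dimensions are hypothetical and are chosen only for illustration.

    // Minimal sketch: host code (CPU / core complex) launches device code (GPU / graphics complex).
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float* data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;    // one thread per element
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float* d_data = nullptr;
        cudaMalloc(&d_data, n * sizeof(float));           // allocate device (GPU) memory
        cudaMemset(d_data, 0, n * sizeof(float));
        scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n); // host issues a command that runs on the GPU
        cudaDeviceSynchronize();                          // host waits for device execution to finish
        cudaFree(d_data);
        printf("kernel completed\n");
        return 0;
    }

In this sketch, the host-side runtime calls correspond to the control and coordination role described for core complex 3110, and the kernel body corresponds to the device executable code executed by graphics complex 3140.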
[0305] In at least one embodiment, core complex 3110 includes, without limitation, cores 3120(1)-3120(4) and an L3 cache 3130. In at least one embodiment, core complex 3110 may include, without limitation, any number of cores 3120 and any number and type of caches in any combination. In at least one embodiment, cores 3120 are configured to execute instructions of a particular instruction set architecture (“ISA”). In at least one embodiment, each core 3120 is a CPU core.[0306] In at least one embodiment, each core 3120 includes, without limitation, a fetch/decode unit 3122, an integer execution engine 3124, a floating point execution engine 3126, and an L2 cache 3128. In at least one embodiment, fetch/decode unit 3122 fetches instructions, decodes such instructions, generates micro-operations, and dispatches separate micro-instructions to integer execution engine 3124 and floating point execution engine 3126. In at least one embodiment, fetch/decode unit 3122 can concurrently dispatch one micro-instruction to integer execution engine 3124 and another micro-instruction to floating point execution engine 3126. In at least one embodiment, integer execution engine 3124 executes, without limitation, integer and memory operations. In at least one embodiment, floating point engine 3126 executes, without limitation, floating point and vector operations. In at least one embodiment, fetch/decode unit 3122 dispatches micro-instructions to a single execution engine that replaces both integer execution engine 3124 and floating point execution engine 3126.[0307] In at least one embodiment, each core 3120(i), where i is an integer representing a particular instance of core 3120, may access L2 cache 3128(i) included in core 3120(i). In at least one embodiment, each core 3120 included in core complex 3110(j), where j is an integer representing a particular instance of core complex 3110, is connected to other cores 3120 included in core complex 3110(j) via L3 cache 3130(j) included in core complex 3110(j). In at least one embodiment, cores 3120 included in core complex 3110(j), where j is an integer representing a particular instance of core complex 3110, can access all of L3 cache 3130(j) included in core complex 3110(j). In at least one embodiment, L3 cache 3130 may include, without limitation, any number of slices.[0308] In at least one embodiment, graphics complex 3140 can be configured to perform compute operations in a highly-parallel fashion. In at least one embodiment, graphics complex 3140 is configured to execute graphics pipeline operations such as draw commands, pixel
operations, geometric computations, and other operations associated with rendering an image to a display. In at least one embodiment, graphics complex 3140 is configured to execute operations unrelated to graphics. In at least one embodiment, graphics complex 3140 is configured to execute both operations related to graphics and operations unrelated to graphics.[0309] In at least one embodiment, graphics complex 3140 includes, without limitation, any number of compute units 3150 and an L2 cache 3142. In at least one embodiment, compute units 3150 share L2 cache 3142. In at least one embodiment, L2 cache 3142 is partitioned. In at least one embodiment, graphics complex 3140 includes, without limitation, any number of compute units 3150 and any number (including zero) and type of caches. In at least one embodiment, graphics complex 3140 includes, without limitation, any amount of dedicated graphics hardware.[0310] In at least one embodiment, each compute unit 3150 includes, without limitation, any number of SIMD units 3152 and a shared memory 3154. In at least one embodiment, each SIMD unit 3152 implements a SIMD architecture and is configured to perform operations in parallel. In at least one embodiment, each compute unit 3150 may execute any number of thread blocks, but each thread block executes on a single compute unit 3150. In at least one embodiment, a thread block includes, without limitation, any number of threads of execution. In at least one embodiment, a workgroup is a thread block. In at least one embodiment, each SIMD unit 3152 executes a different warp. In at least one embodiment, a warp is a group of threads (e.g., 16 threads), where each thread in a warp belongs to a single thread block and is configured to process a different set of data based on a single set of instructions. In at least one embodiment, predication can be used to disable one or more threads in a warp. In at least one embodiment, a lane is a thread. In at least one embodiment, a work item is a thread. In at least one embodiment, a wavefront is a warp. In at least one embodiment, different wavefronts in a thread block may synchronize together and communicate via shared memory 3154.[0311] In at least one embodiment, fabric 3160 is a system interconnect that facilitates data and control transmissions across core complex 3110, graphics complex 3140, I/O interfaces 3170, memory controllers 3180, display controller 3192, and multimedia engine 3194. In at least one embodiment, APU 3100 may include, without limitation, any amount and type of system interconnect in addition to or instead of fabric 3160 that facilitates data and control transmissions
across any number and type of directly or indirectly linked components that may be internal or external to APU 3100. In at least one embodiment, I/O interfaces 3170 are representative of any number and type of I/O interfaces (e.g., PCI, PCI-Extended (“PCI-X”), PCIe, gigabit Ethernet (“GBE”), USB, etc.). In at least one embodiment, various types of peripheral devices are coupled to I/O interfaces 3170. In at least one embodiment, peripheral devices that are coupled to I/O interfaces 3170 may include, without limitation, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth.[0312] In at least one embodiment, display controller 3192 displays images on one or more display device(s), such as a liquid crystal display (“LCD”) device. In at least one embodiment, multimedia engine 3194 includes, without limitation, any amount and type of circuitry that is related to multimedia, such as a video decoder, a video encoder, an image signal processor, etc. In at least one embodiment, memory controllers 3180 facilitate data transfers between APU 3100 and a unified system memory 3190. In at least one embodiment, core complex 3110 and graphics complex 3140 share unified system memory 3190.[0313] In at least one embodiment, APU 3100 implements a memory subsystem that includes, without limitation, any amount and type of memory controllers 3180 and memory devices (e.g., shared memory 3154) that may be dedicated to one component or shared among multiple components. In at least one embodiment, APU 3100 implements a cache subsystem that includes, without limitation, one or more cache memories (e.g., L2 caches 3128, L3 cache 3130, and L2 cache 3142) that may each be private to or shared between any number of components (e.g., cores 3120, core complex 3110, SIMD units 3152, compute units 3150, and graphics complex 3140).[0314] Figure 32 illustrates a CPU 3200, in accordance with at least one embodiment. In at least one embodiment, CPU 3200 is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, CPU 3200 can be configured to execute an application program. In at least one embodiment, CPU 3200 is configured to execute main control software, such as an operating system. In at least one embodiment, CPU 3200 issues commands that control an operation of an external GPU (not shown). In at least one embodiment, CPU 3200 can be configured to execute host executable code derived from CUDA source code, and an external
GPU can be configured to execute device executable code derived from such CUDA source code. In at least one embodiment, CPU 3200 includes, without limitation, any number of core complexes 3210, fabric 3260, I/O interfaces 3270, and memory controllers 3280.[0315] In at least one embodiment, core complex 3210 includes, without limitation, cores 3220(1)-3220(4) and an L3 cache 3230. In at least one embodiment, core complex 3210 may include, without limitation, any number of cores 3220 and any number and type of caches in any combination. In at least one embodiment, cores 3220 are configured to execute instructions of a particular ISA. In at least one embodiment, each core 3220 is a CPU core.[0316] In at least one embodiment, each core 3220 includes, without limitation, a fetch/decode unit 3222, an integer execution engine 3224, a floating point execution engine 3226, and an L2 cache 3228. In at least one embodiment, fetch/decode unit 3222 fetches instructions, decodes such instructions, generates micro-operations, and dispatches separate micro-instructions to integer execution engine 3224 and floating point execution engine 3226. In at least one embodiment, fetch/decode unit 3222 can concurrently dispatch one micro-instruction to integer execution engine 3224 and another micro-instruction to floating point execution engine 3226. In at least one embodiment, integer execution engine 3224 executes, without limitation, integer and memory operations. In at least one embodiment, floating point engine 3226 executes, without limitation, floating point and vector operations. In at least one embodiment, fetch/decode unit 3222 dispatches micro-instructions to a single execution engine that replaces both integer execution engine 3224 and floating point execution engine 3226.[0317] In at least one embodiment, each core 3220(i), where i is an integer representing a particular instance of core 3220, may access L2 cache 3228(i) included in core 3220(i). In at least one embodiment, each core 3220 included in core complex 3210(j), where j is an integer representing a particular instance of core complex 3210, is connected to other cores 3220 in core complex 3210(j) via L3 cache 3230(j) included in core complex 3210(j). In at least one embodiment, cores 3220 included in core complex 3210(j), where j is an integer representing a particular instance of core complex 3210, can access all of L3 cache 3230(j) included in core complex 3210(j). In at least one embodiment, L3 cache 3230 may include, without limitation, any number of slices.
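By way of illustration only, the host-side sketch below (compilable as ordinary C++ or CUDA host code) shows one way software might exploit the cache hierarchy just described, in which each core 3220 has a private L2 cache 3228 and cores within a core complex 3210 share an L3 cache 3230: data is processed in tiles sized to an assumed per-core L2 capacity so that a second pass over a tile is likely to be served from that core's L2 rather than the shared L3 or system memory. The 512 KiB capacity, the tile size, and the function name are assumptions made for illustration and do not describe any particular processor.

    // Illustrative cache-blocking sketch under an assumed 512 KiB per-core L2 capacity.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    constexpr std::size_t kAssumedL2Bytes = 512 * 1024;                   // assumption, not a specification
    constexpr std::size_t kTileElems = kAssumedL2Bytes / sizeof(float) / 2;

    // Two passes per tile: the first computes the tile mean, the second accumulates
    // squared deviations; the second pass benefits if the tile is still resident in L2.
    float blocked_sum_sq_dev(const std::vector<float>& data) {
        float total = 0.0f;
        for (std::size_t base = 0; base < data.size(); base += kTileElems) {
            const std::size_t end = std::min(base + kTileElems, data.size());
            float mean = 0.0f;
            for (std::size_t i = base; i < end; ++i) mean += data[i];
            mean /= static_cast<float>(end - base);
            for (std::size_t i = base; i < end; ++i) {
                const float d = data[i] - mean;
                total += d * d;
            }
        }
        return total;
    }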
[0318] In at least one embodiment, fabric 3260 is a system interconnect that facilitates data and control transmissions across core complexes 3210(1)-3210(N) (where N is an integer greater than zero), I/O interfaces 3270, and memory controllers 3280. In at least one embodiment, CPU 3200 may include, without limitation, any amount and type of system interconnect in addition to or instead of fabric 3260 that facilitates data and control transmissions across any number and type of directly or indirectly linked components that may be internal or external to CPU 3200. In at least one embodiment, I/O interfaces 3270 are representative of any number and type of I/O interfaces (e.g., PCI, PCI-X, PCIe, GBE, USB, etc.). In at least one embodiment, various types of peripheral devices are coupled to I/O interfaces 3270. In at least one embodiment, peripheral devices that are coupled to I/O interfaces 3270 may include, without limitation, displays, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth.[0319] In at least one embodiment, memory controllers 3280 facilitate data transfers between CPU 3200 and a system memory 3290. In at least one embodiment, core complex 3210 and graphics complex 3240 share system memory 3290. In at least one embodiment, CPU 3200 implements a memory subsystem that includes, without limitation, any amount and type of memory controllers 3280 and memory devices that may be dedicated to one component or shared among multiple components. In at least one embodiment, CPU 3200 implements a cache subsystem that includes, without limitation, one or more cache memories (e.g., L2 caches 3228 and L3 caches 3230) that may each be private to or shared between any number of components (e.g., cores 3220 and core complexes 3210).[0320] Figure 33 illustrates an exemplary accelerator integration slice 3390, in accordance with at least one embodiment. As used herein, a “slice” comprises a specified portion of processing resources of an accelerator integration circuit. In at least one embodiment, an accelerator integration circuit provides cache management, memory access, context management, and interrupt management services on behalf of multiple graphics processing engines included in a graphics acceleration module. Graphics processing engines may each comprise a separate GPU. Alternatively, graphics processing engines may comprise different types of graphics processing engines within a GPU such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In at least one
embodiment, a graphics acceleration module may be a GPU with multiple graphics processing engines. In at least one embodiment, graphics processing engines may be individual GPUs integrated on a common package, line card, or chip.[0321] An application effective address space 3382 within system memory 3314 stores process elements 3383. In one embodiment, process elements 3383 are stored in response to GPU invocations 3381 from applications 3380 executed on processor 3307. A process element 3383 contains process state for corresponding application 3380. A work descriptor (“WD”) 3384 contained in process element 3383 can be a single job requested by an application or may contain a pointer to a queue of jobs. In at least one embodiment, WD 3384 is a pointer to a job request queue in application effective address space 3382.[0322] Graphics acceleration module 3346 and/or individual graphics processing engines can be shared by all or a subset of processes in a system. In at least one embodiment, an infrastructure for setting up process state and sending WD 3384 to graphics acceleration module 3346 to start a job in a virtualized environment may be included.[0323] In at least one embodiment, a dedicated-process programming model is implementation-specific. In this model, a single process owns graphics acceleration module 3346 or an individual graphics processing engine. Because graphics acceleration module 3346 is owned by a single process, a hypervisor initializes an accelerator integration circuit for an owning partition and an operating system initializes an accelerator integration circuit for an owning process when graphics acceleration module 3346 is assigned.[0324] In operation, a WD fetch unit 3391 in accelerator integration slice 3390 fetches next WD 3384, which includes an indication of work to be done by one or more graphics processing engines of graphics acceleration module 3346. Data from WD 3384 may be stored in registers 3345 and used by a memory management unit (“MMU”) 3339, interrupt management circuit 3347 and/or context management circuit 3348 as illustrated. In at least one embodiment, MMU 3339 includes segment/page walk circuitry for accessing segment/page tables 3386 within OS virtual address space 3385. Interrupt management circuit 3347 may process interrupt events (“INT”) 3392 received from graphics acceleration module 3346. When performing graphics operations, an effective address 3393 generated by a graphics processing engine is translated to a
real address by MMU 3339.[0325] In one embodiment, a same set of registers 3345 is duplicated for each graphics processing engine and/or graphics acceleration module 3346 and may be initialized by a hypervisor or operating system. Each of these duplicated registers may be included in accelerator integration slice 3390. Exemplary registers that may be initialized by a hypervisor are shown in Table 1.

Table 1 - Hypervisor Initialized Registers

[0326] Exemplary registers that may be initialized by an operating system are shown in Table 2.

Table 2 - Operating System Initialized Registers

[0327] In one embodiment, each WD 3384 is specific to a particular graphics acceleration module 3346 and/or a particular graphics processing engine. It contains all information required by a graphics processing engine to do work or it can be a pointer to a memory location where an application has set up a command queue of work to be completed.[0328] Figures 34A-34B illustrate exemplary graphics processors, in accordance with at least one embodiment. In at least one embodiment, any of the exemplary graphics processors may be fabricated using one or more IP cores. In addition to what is illustrated, other logic and circuits may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores. In at least one embodiment, the exemplary graphics processors are for use within an SoC.[0329] Figure 34A illustrates an exemplary graphics processor 3410 of an SoC integrated circuit that may be fabricated using one or more IP cores, in accordance with at least one embodiment. Figure 34B illustrates an additional exemplary graphics processor 3440 of an SoC integrated circuit that may be fabricated using one or more IP cores, in accordance with at least one embodiment. In at least one embodiment, graphics processor 3410 of Figure 34A is a low power graphics processor core. In at least one embodiment, graphics processor 3440 of Figure 34B is a higher performance graphics processor core. In at least one embodiment, each of graphics processors 3410, 3440 can be variants of graphics processor 510 of Figure 5.[0330] In at least one embodiment, graphics processor 3410 includes a vertex processor 3405 and one or more fragment processor(s) 3415A-3415N (e.g., 3415A, 3415B, 3415C, 3415D, through 3415N-1, and 3415N). In at least one embodiment, graphics processor 3410 can execute different shader programs via separate logic, such that vertex processor 3405 is optimized to execute operations for vertex shader programs, while one or more fragment processor(s) 3415A-3415N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. In at least one embodiment, vertex processor 3405 performs a vertex processing stage of a 3D graphics pipeline and generates primitives and vertex data. In at least one embodiment, fragment processor(s) 3415A-3415N use primitive and vertex data generated by vertex processor 3405 to
produce a framebuffer that is displayed on a display device. In at least one embodiment, fragment processor(s) 3415A-3415N are optimized to execute fragment shader programs as provided for in an OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in a Direct3D API.[0331] In at least one embodiment, graphics processor 3410 additionally includes one or more MMU(s) 3420A-3420B, cache(s) 3425A-3425B, and circuit interconnect(s) 3430A-3430B. In at least one embodiment, one or more MMU(s) 3420A-3420B provide for virtual to physical address mapping for graphics processor 3410, including for vertex processor 3405 and/or fragment processor(s) 3415A-3415N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in one or more cache(s) 3425A-3425B. In at least one embodiment, one or more MMU(s) 3420A-3420B may be synchronized with other MMUs within a system, including one or more MMUs associated with one or more application processor(s) 505, image processors 515, and/or video processors 520 of Figure 5, such that each processor 505-520 can participate in a shared or unified virtual memory system. In at least one embodiment, one or more circuit interconnect(s) 3430A-3430B enable graphics processor 3410 to interface with other IP cores within an SoC, either via an internal bus of an SoC or via a direct connection.[0332] In at least one embodiment, graphics processor 3440 includes one or more MMU(s) 3420A-3420B, caches 3425A-3425B, and circuit interconnects 3430A-3430B of graphics processor 3410 of Figure 34A. In at least one embodiment, graphics processor 3440 includes one or more shader core(s) 3455A-3455N (e.g., 3455A, 3455B, 3455C, 3455D, 3455E, 3455F, through 3455N-1, and 3455N), which provide for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. In at least one embodiment, a number of shader cores can vary. In at least one embodiment, graphics processor 3440 includes an inter-core task manager 3445, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 3455A-3455N, and a tiling unit 3458 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, in at least one embodiment, to exploit local spatial coherence within a scene or to optimize use of internal caches.
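By way of illustration only, the CUDA sketch below imitates in software the kind of image-space subdivision described for tile-based rendering: each thread block operates on one 16x16 tile staged in on-chip shared memory, so that accesses within a tile exploit local spatial coherence. The kernel name, tile size, and per-tile operation (subtracting the tile mean) are hypothetical and are not an implementation of tiling unit 3458.

    // Hypothetical sketch of tile-based processing: one thread block per 16x16 image tile.
    #include <cuda_runtime.h>

    constexpr int TILE = 16;

    __global__ void tile_local_mean_subtract(float* img, int width, int height) {
        __shared__ float tile[TILE][TILE];   // tile staged in on-chip shared memory
        __shared__ float tile_sum;

        const int x = blockIdx.x * TILE + threadIdx.x;
        const int y = blockIdx.y * TILE + threadIdx.y;
        const bool in_bounds = (x < width) && (y < height);

        if (threadIdx.x == 0 && threadIdx.y == 0) tile_sum = 0.0f;
        __syncthreads();

        const float v = in_bounds ? img[y * width + x] : 0.0f;
        tile[threadIdx.y][threadIdx.x] = v;
        atomicAdd(&tile_sum, v);             // simple per-tile reduction
        __syncthreads();

        const float mean = tile_sum / (TILE * TILE);   // approximate at image edges
        if (in_bounds) img[y * width + x] = tile[threadIdx.y][threadIdx.x] - mean;
    }

    // Launch example, one block per tile:
    //   dim3 block(TILE, TILE);
    //   dim3 grid((width + TILE - 1) / TILE, (height + TILE - 1) / TILE);
    //   tile_local_mean_subtract<<<grid, block>>>(d_img, width, height);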
[0333] Figure 35A illustrates a graphics core 3500, in accordance with at least one embodiment. In at least one embodiment, graphics core 3500 may be included within graphics processor 2410 of Figure 24. In at least one embodiment, graphics core 3500 may be a unified shader core 3455A-3455N as in Figure 34B. In at least one embodiment, graphics core 3500 includes a shared instruction cache 3502, a texture unit 3518, and a cache/shared memory 3520 that are common to execution resources within graphics core 3500. In at least one embodiment, graphics core 3500 can include multiple slices 3501A-3501N, or partitions, for each core, and a graphics processor can include multiple instances of graphics core 3500. Slices 3501A-3501N can include support logic including a local instruction cache 3504A-3504N, a thread scheduler 3506A-3506N, a thread dispatcher 3508A-3508N, and a set of registers 3510A-3510N. In at least one embodiment, slices 3501A-3501N can include a set of additional function units (“AFUs”) 3512A-3512N, floating-point units (“FPUs”) 3514A-3514N, integer arithmetic logic units (“ALUs”) 3516A-3516N, address computational units (“ACUs”) 3513A-3513N, double-precision floating-point units (“DPFPUs”) 3515A-3515N, and matrix processing units (“MPUs”) 3517A-3517N.[0334] In at least one embodiment, FPUs 3514A-3514N can perform single-precision (32-bit) and half-precision (16-bit) floating point operations, while DPFPUs 3515A-3515N perform double precision (64-bit) floating point operations. In at least one embodiment, ALUs 3516A-3516N can perform variable precision integer operations at 8-bit, 16-bit, and 32-bit precision, and can be configured for mixed precision operations. In at least one embodiment, MPUs 3517A-3517N can also be configured for mixed precision matrix operations, including half-precision floating point and 8-bit integer operations. In at least one embodiment, MPUs 3517A-3517N can perform a variety of matrix operations to accelerate CUDA programs, including enabling support for accelerated general matrix to matrix multiplication (“GEMM”). In at least one embodiment, AFUs 3512A-3512N can perform additional logic operations not supported by floating-point or integer units, including trigonometric operations (e.g., Sine, Cosine, etc.).[0335] Figure 35B illustrates a general-purpose graphics processing unit (“GPGPU”) 3530, in accordance with at least one embodiment. In at least one embodiment, GPGPU 3530 is highly-parallel and suitable for deployment on a multi-chip module. In at least one embodiment, GPGPU 3530 can be configured to enable highly-parallel compute operations to be performed by
an array of GPUs. In at least one embodiment, GPGPU 3530 can be linked directly to other instances of GPGPU 3530 to create a multi-GPU cluster to improve execution time for CUDA programs. In at least one embodiment, GPGPU 3530 includes a host interface 3532 to enable a connection with a host processor. In at least one embodiment, host interface 3532 is a PCIe interface. In at least one embodiment, host interface 3532 can be a vendor specific communications interface or communications fabric. In at least one embodiment, GPGPU 3530 receives commands from a host processor and uses a global scheduler 3534 to distribute execution threads associated with those commands to a set of compute clusters 3536A-3536H. In at least one embodiment, compute clusters 3536A-3536H share a cache memory 3538. In at least one embodiment, cache memory 3538 can serve as a higher-level cache for cache memories within compute clusters 3536A-3536H.[0336] In at least one embodiment, GPGPU 3530 includes memory 3544A-3544B coupled with compute clusters 3536A-3536H via a set of memory controllers 3542A-3542B. In at least one embodiment, memory 3544A-3544B can include various types of memory devices including DRAM or graphics random access memory, such as synchronous graphics random access memory (“SGRAM”), including graphics double data rate (“GDDR”) memory.[0337] In at least one embodiment, compute clusters 3536A-3536H each include a set of graphics cores, such as graphics core 3500 of Figure 35A, which can include multiple types of integer and floating point logic units that can perform computational operations at a range of precisions, including precisions suited for computations associated with CUDA programs. In at least one embodiment, at least a subset of floating point units in each of compute clusters 3536A-3536H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of floating point units can be configured to perform 64-bit floating point operations.[0338] In at least one embodiment, multiple instances of GPGPU 3530 can be configured to operate as a compute cluster. In at least one embodiment, compute clusters 3536A-3536H may implement any technically feasible communication techniques for synchronization and data exchange. In at least one embodiment, multiple instances of GPGPU 3530 communicate over host interface 3532. In at least one embodiment, GPGPU 3530 includes an I/O hub 3539 that couples GPGPU 3530 with a GPU link 3540 that enables a direct connection to other instances of GPGPU 3530. In at least one embodiment, GPU link 3540 is coupled to a dedicated GPU-to-
GPU bridge that enables communication and synchronization between multiple instances of GPGPU 3530. In at least one embodiment, GPU link 3540 couples with a high-speed interconnect to transmit and receive data to and from other GPGPUs 3530 or parallel processors. In at least one embodiment, multiple instances of GPGPU 3530 are located in separate data processing systems and communicate via a network device that is accessible via host interface 3532. In at least one embodiment, GPU link 3540 can be configured to enable a connection to a host processor in addition to or as an alternative to host interface 3532. In at least one embodiment, GPGPU 3530 can be configured to execute a CUDA program.[0339] Figure 36A illustrates a parallel processor 3600, in accordance with at least one embodiment. In at least one embodiment, various components of parallel processor 3600 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (“ASICs”), or FPGAs.[0340] In at least one embodiment, parallel processor 3600 includes a parallel processing unit 3602. In at least one embodiment, parallel processing unit 3602 includes an I/O unit 3604 that enables communication with other devices, including other instances of parallel processing unit 3602. In at least one embodiment, I/O unit 3604 may be directly connected to other devices. In at least one embodiment, I/O unit 3604 connects with other devices via use of a hub or switch interface, such as memory hub 605. In at least one embodiment, connections between memory hub 605 and I/O unit 3604 form a communication link. In at least one embodiment, I/O unit 3604 connects with a host interface 3606 and a memory crossbar 3616, where host interface 3606 receives commands directed to performing processing operations and memory crossbar 3616 receives commands directed to performing memory operations.[0341] In at least one embodiment, when host interface 3606 receives a command buffer via I/O unit 3604, host interface 3606 can direct work operations to perform those commands to a front end 3608. In at least one embodiment, front end 3608 couples with a scheduler 3610, which is configured to distribute commands or other work items to a processing array 3612. In at least one embodiment, scheduler 3610 ensures that processing array 3612 is properly configured and in a valid state before tasks are distributed to processing array 3612. In at least one embodiment, scheduler 3610 is implemented via firmware logic executing on a microcontroller. In at least one embodiment, microcontroller-implemented scheduler 3610 is
configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on processing array 3612. In at least one embodiment, host software can provide workloads for scheduling on processing array 3612 via one of multiple graphics processing doorbells. In at least one embodiment, workloads can then be automatically distributed across processing array 3612 by scheduler 3610 logic within a microcontroller including scheduler 3610.[0342] In at least one embodiment, processing array 3612 can include up to “N” clusters (e.g., cluster 3614A, cluster 3614B, through cluster 3614N). In at least one embodiment, each cluster 3614A-3614N of processing array 3612 can execute a large number of concurrent threads. In at least one embodiment, scheduler 3610 can allocate work to clusters 3614A-3614N of processing array 3612 using various scheduling and/or work distribution algorithms, which may vary depending on a workload arising for each type of program or computation. In at least one embodiment, scheduling can be handled dynamically by scheduler 3610, or can be assisted in part by compiler logic during compilation of program logic configured for execution by processing array 3612. In at least one embodiment, different clusters 3614A-3614N of processing array 3612 can be allocated for processing different types of programs or for performing different types of computations.[0343] In at least one embodiment, processing array 3612 can be configured to perform various types of parallel processing operations. In at least one embodiment, processing array 3612 is configured to perform general-purpose parallel compute operations. In at least one embodiment, processing array 3612 can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations.[0344] In at least one embodiment, processing array 3612 is configured to perform parallel graphics processing operations. In at least one embodiment, processing array 3612 can include additional logic to support execution of such graphics processing operations, including, but not limited to, texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. In at least one embodiment, processing array 3612 can be configured to execute graphics processing related shader programs such as, but not limited to, vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. In at least one
embodiment, parallel processing unit 3602 can transfer data from system memory via I/O unit 3604 for processing. In at least one embodiment, during processing, transferred data can be stored to on-chip memory (e.g., a parallel processor memory 3622), then written back to system memory.[0345] In at least one embodiment, when parallel processing unit 3602 is used to perform graphics processing, scheduler 3610 can be configured to divide a processing workload into approximately equal sized tasks, to better enable distribution of graphics processing operations to multiple clusters 3614A-3614N of processing array 3612. In at least one embodiment, portions of processing array 3612 can be configured to perform different types of processing. In at least one embodiment, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. In at least one embodiment, intermediate data produced by one or more of clusters 3614A-3614N may be stored in buffers to allow intermediate data to be transmitted between clusters 3614A-3614N for further processing.[0346] In at least one embodiment, processing array 3612 can receive processing tasks to be executed via scheduler 3610, which receives commands defining processing tasks from front end 3608. In at least one embodiment, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how data is to be processed (e.g., what program is to be executed). In at least one embodiment, scheduler 3610 may be configured to fetch indices corresponding to tasks or may receive indices from front end 3608. In at least one embodiment, front end 3608 can be configured to ensure processing array 3612 is configured to a valid state before a workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated.[0347] In at least one embodiment, each of one or more instances of parallel processing unit 3602 can couple with parallel processor memory 3622. In at least one embodiment, parallel processor memory 3622 can be accessed via memory crossbar 3616, which can receive memory requests from processing array 3612 as well as I/O unit 3604. In at least one embodiment, memory crossbar 3616 can access parallel processor memory 3622 via a memory interface 3618.
In at least one embodiment, memory interface 3618 can include multiple partition units (e.g., a partition unit 3620A, partition unit 3620B, through partition unit 3620N) that can each couple to a portion (e.g., memory unit) of parallel processor memory 3622. In at least one embodiment, a number of partition units 3620A-3620N is configured to be equal to a number of memory units, such that a first partition unit 3620A has a corresponding first memory unit 3624A, a second partition unit 3620B has a corresponding memory unit 3624B, and an Nth partition unit 3620N has a corresponding Nth memory unit 3624N. In at least one embodiment, a number of partition units 3620A-3620N may not be equal to a number of memory devices.[0348] In at least one embodiment, memory units 3624A-3624N can include various types of memory devices, including DRAM or graphics random access memory, such as SGRAM, including GDDR memory. In at least one embodiment, memory units 3624A-3624N may also include 3D stacked memory, including but not limited to high bandwidth memory (“HBM”). In at least one embodiment, render targets, such as frame buffers or texture maps, may be stored across memory units 3624A-3624N, allowing partition units 3620A-3620N to write portions of each render target in parallel to efficiently use available bandwidth of parallel processor memory 3622. In at least one embodiment, a local instance of parallel processor memory 3622 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory.[0349] In at least one embodiment, any one of clusters 3614A-3614N of processing array 3612 can process data that will be written to any of memory units 3624A-3624N within parallel processor memory 3622. In at least one embodiment, memory crossbar 3616 can be configured to transfer an output of each cluster 3614A-3614N to any partition unit 3620A-3620N or to another cluster 3614A-3614N, which can perform additional processing operations on an output. In at least one embodiment, each cluster 3614A-3614N can communicate with memory interface 3618 through memory crossbar 3616 to read from or write to various external memory devices. In at least one embodiment, memory crossbar 3616 has a connection to memory interface 3618 to communicate with I/O unit 3604, as well as a connection to a local instance of parallel processor memory 3622, enabling processing units within different clusters 3614A-3614N to communicate with system memory or other memory that is not local to parallel processing unit 3602. In at least one embodiment, memory crossbar 3616 can use virtual channels to separate
traffic streams between clusters 3614A-3614N and partition units 3620A-3620N.[0350] In at least one embodiment, multiple instances of parallel processing unit 3602 can be provided on a single add-in card, or multiple add-in cards can be interconnected. In at least one embodiment, different instances of parallel processing unit 3602 can be configured to interoperate even if different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. In at least one embodiment, some instances of parallel processing unit 3602 can include higher precision floating point units relative to other instances. In at least one embodiment, systems incorporating one or more instances of parallel processing unit 3602 or parallel processor 3600 can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems.[0351] Figure 36B illustrates a processing cluster 3694, in accordance with at least one embodiment. In at least one embodiment, processing cluster 3694 is included within a parallel processing unit. In at least one embodiment, processing cluster 3694 is one of processing clusters 3614A-3614N of Figure 36A. In at least one embodiment, processing cluster 3694 can be configured to execute many threads in parallel, where the term “thread” refers to an instance of a particular program executing on a particular set of input data. In at least one embodiment, single instruction, multiple data (“SIMD”) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In at least one embodiment, single instruction, multiple thread (“SIMT”) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each processing cluster 3694.[0352] In at least one embodiment, operation of processing cluster 3694 can be controlled via a pipeline manager 3632 that distributes processing tasks to SIMT parallel processors. In at least one embodiment, pipeline manager 3632 receives instructions from scheduler 3610 of Figure 36A and manages execution of those instructions via a graphics multiprocessor 3634 and/or a texture unit 3636. In at least one embodiment, graphics multiprocessor 3634 is an exemplary instance of a SIMT parallel processor. However, in at least one embodiment, various types of SIMT parallel
processors of differing architectures may be included within processing cluster 3694. In at least one embodiment, one or more instances of graphics multiprocessor 3634 can be included within processing cluster 3694. In at least one embodiment, graphics multiprocessor 3634 can process data and a data crossbar 3640 can be used to distribute processed data to one of multiple possible destinations, including other shader units. In at least one embodiment, pipeline manager 3632 can facilitate distribution of processed data by specifying destinations for processed data to be distributed via data crossbar 3640.[0353] In at least one embodiment, each graphics multiprocessor 3634 within processing cluster 3694 can include an identical set of functional execution logic (e.g., arithmetic logic units, load/store units (“LSUs”), etc.). In at least one embodiment, functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. In at least one embodiment, functional execution logic supports a variety of operations including integer and floating point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. In at least one embodiment, same functional-unit hardware can be leveraged to perform different operations and any combination of functional units may be present.[0354] In at least one embodiment, instructions transmitted to processing cluster 3694 constitute a thread. In at least one embodiment, a set of threads executing across a set of parallel processing engines is a thread group. In at least one embodiment, a thread group executes a program on different input data. In at least one embodiment, each thread within a thread group can be assigned to a different processing engine within graphics multiprocessor 3634. In at least one embodiment, a thread group may include fewer threads than a number of processing engines within graphics multiprocessor 3634. In at least one embodiment, when a thread group includes fewer threads than a number of processing engines, one or more of processing engines may be idle during cycles in which that thread group is being processed. In at least one embodiment, a thread group may also include more threads than a number of processing engines within graphics multiprocessor 3634. In at least one embodiment, when a thread group includes more threads than a number of processing engines within graphics multiprocessor 3634, processing can be performed over consecutive clock cycles. In at least one embodiment, multiple thread groups can be executed concurrently on graphics multiprocessor 3634.
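By way of illustration only, the CUDA sketch below shows a thread group (a thread block) in which every thread runs the same program on a different element of its input data, together with a grid-stride loop that lets a fixed number of thread groups cover an array larger than the total number of launched threads. The kernel name and launch dimensions are hypothetical.

    // Hypothetical sketch: each thread in a thread group processes different data; a
    // grid-stride loop covers arrays larger than the total number of launched threads.
    #include <cuda_runtime.h>

    __global__ void saxpy(float a, const float* x, float* y, int n) {
        // Every thread starts at a distinct element and strides by the total thread count.
        for (int i = blockIdx.x * blockDim.x + threadIdx.x;
             i < n;
             i += gridDim.x * blockDim.x) {
            y[i] = a * x[i] + y[i];
        }
    }

    // Launch example: 128 thread groups of 256 threads each.
    //   saxpy<<<128, 256>>>(2.0f, d_x, d_y, n);

When a thread group of 256 threads is scheduled onto a graphics multiprocessor with fewer processing engines, the hardware simply processes that group over consecutive clock cycles, as described above.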
[0355] In at least one embodiment, graphics multiprocessor 3634 includes an internal cache memory to perform load and store operations. In at least one embodiment, graphics multiprocessor 3634 can forego an internal cache and use a cache memory (e.g., L1 cache 3648) within processing cluster 3694. In at least one embodiment, each graphics multiprocessor 3634 also has access to Level 2 (“L2”) caches within partition units (e.g., partition units 3620A-3620N of Figure 36A) that are shared among all processing clusters 3694 and may be used to transfer data between threads. In at least one embodiment, graphics multiprocessor 3634 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. In at least one embodiment, any memory external to parallel processing unit 3602 may be used as global memory. In at least one embodiment, processing cluster 3694 includes multiple instances of graphics multiprocessor 3634 that can share common instructions and data, which may be stored in L1 cache 3648.[0356] In at least one embodiment, each processing cluster 3694 may include an MMU 3645 that is configured to map virtual addresses into physical addresses. In at least one embodiment, one or more instances of MMU 3645 may reside within memory interface 3618 of Figure 36A. In at least one embodiment, MMU 3645 includes a set of page table entries (“PTEs”) used to map a virtual address to a physical address of a tile and optionally a cache line index. In at least one embodiment, MMU 3645 may include address translation lookaside buffers (“TLBs”) or caches that may reside within graphics multiprocessor 3634 or L1 cache 3648 or processing cluster 3694. In at least one embodiment, a physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. In at least one embodiment, a cache line index may be used to determine whether a request for a cache line is a hit or miss.[0357] In at least one embodiment, processing cluster 3694 may be configured such that each graphics multiprocessor 3634 is coupled to a texture unit 3636 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering texture data. In at least one embodiment, texture data is read from an internal texture L1 cache (not shown) or from an L1 cache within graphics multiprocessor 3634 and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. In at least one embodiment, each graphics multiprocessor 3634 outputs a processed task to data crossbar 3640
to provide a processed task to another processing cluster 3694 for further processing or to store a processed task in an L2 cache, a local parallel processor memory, or a system memory via memory crossbar 3616. In at least one embodiment, a pre-raster operations unit (“preROP”) 3642 is configured to receive data from graphics multiprocessor 3634 and direct data to ROP units, which may be located with partition units as described herein (e.g., partition units 3620A-3620N of Figure 36A). In at least one embodiment, preROP 3642 can perform optimizations for color blending, organize pixel color data, and perform address translations.[0358] Figure 36C illustrates a graphics multiprocessor 3696, in accordance with at least one embodiment. In at least one embodiment, graphics multiprocessor 3696 is graphics multiprocessor 3634 of Figure 36B. In at least one embodiment, graphics multiprocessor 3696 couples with pipeline manager 3632 of processing cluster 3694. In at least one embodiment, graphics multiprocessor 3696 has an execution pipeline including but not limited to an instruction cache 3652, an instruction unit 3654, an address mapping unit 3656, a register file 3658, one or more GPGPU cores 3662, and one or more LSUs 3666. GPGPU cores 3662 and LSUs 3666 are coupled with cache memory 3672 and shared memory 3670 via a memory and cache interconnect 3668.[0359] In at least one embodiment, instruction cache 3652 receives a stream of instructions to execute from pipeline manager 3632. In at least one embodiment, instructions are cached in instruction cache 3652 and dispatched for execution by instruction unit 3654. In at least one embodiment, instruction unit 3654 can dispatch instructions as thread groups (e.g., warps), with each thread of a thread group assigned to a different execution unit within GPGPU core 3662. In at least one embodiment, an instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. In at least one embodiment, address mapping unit 3656 can be used to translate addresses in a unified address space into a distinct memory address that can be accessed by LSUs 3666.[0360] In at least one embodiment, register file 3658 provides a set of registers for functional units of graphics multiprocessor 3696. In at least one embodiment, register file 3658 provides temporary storage for operands connected to data paths of functional units (e.g., GPGPU cores 3662, LSUs 3666) of graphics multiprocessor 3696. In at least one embodiment, register file 3658 is divided between each of functional units such that each functional unit is allocated a
dedicated portion of register file 3658. In at least one embodiment, register file 3658 is divided between different thread groups being executed by graphics multiprocessor 3696.[0361] In at least one embodiment, GPGPU cores 3662 can each include FPUs and/or integer ALUs that are used to execute instructions of graphics multiprocessor 3696. GPGPU cores 3662 can be similar in architecture or can differ in architecture. In at least one embodiment, a first portion of GPGPU cores 3662 include a single precision FPU and an integer ALU while a second portion of GPGPU cores 3662 include a double precision FPU. In at least one embodiment, FPUs can implement the IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. In at least one embodiment, graphics multiprocessor 3696 can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. In at least one embodiment, one or more of GPGPU cores 3662 can also include fixed or special function logic.[0362] In at least one embodiment, GPGPU cores 3662 include SIMD logic capable of performing a single instruction on multiple sets of data. In at least one embodiment, GPGPU cores 3662 can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. In at least one embodiment, SIMD instructions for GPGPU cores 3662 can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (“SPMD”) or SIMT architectures. In at least one embodiment, multiple threads of a program configured for an SIMT execution model can be executed via a single SIMD instruction. In at least one embodiment, eight SIMT threads that perform the same or similar operations can be executed in parallel via a single SIMD8 logic unit.[0363] In at least one embodiment, memory and cache interconnect 3668 is an interconnect network that connects each functional unit of graphics multiprocessor 3696 to register file 3658 and to shared memory 3670. In at least one embodiment, memory and cache interconnect 3668 is a crossbar interconnect that allows LSU 3666 to implement load and store operations between shared memory 3670 and register file 3658. In at least one embodiment, register file 3658 can operate at a same frequency as GPGPU cores 3662, thus data transfer between GPGPU cores 3662 and register file 3658 is very low latency. In at least one embodiment, shared memory 3670 can be used to enable communication between threads that execute on functional units
within graphics multiprocessor 3696. In at least one embodiment, cache memory 3672 can be used as a data cache, for example to cache texture data communicated between functional units and texture unit 3636. In at least one embodiment, shared memory 3670 can also be used as a program-managed cache. In at least one embodiment, threads executing on GPGPU cores 3662 can programmatically store data within shared memory in addition to automatically cached data that is stored within cache memory 3672.[0364] In at least one embodiment, a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. In at least one embodiment, a GPU may be communicatively coupled to host processor/cores over a bus or other interconnect (e.g., a high speed interconnect such as PCIe or NVLink). In at least one embodiment, a GPU may be integrated on a same package or chip as cores and communicatively coupled to cores over a processor bus/interconnect that is internal to a package or a chip. In at least one embodiment, regardless of a manner in which a GPU is connected, processor cores may allocate work to a GPU in a form of sequences of commands/instructions contained in a WD. In at least one embodiment, a GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.

General Computing

[0365] The following figures set forth, without limitation, exemplary software constructs within general computing that can be used to implement at least one embodiment.[0366] Figure 37 illustrates a software stack of a programming platform, in accordance with at least one embodiment. In at least one embodiment, a programming platform is a platform for leveraging hardware on a computing system to accelerate computational tasks. A programming platform may be accessible to software developers through libraries, compiler directives, and/or extensions to programming languages, in at least one embodiment. In at least one embodiment, a programming platform may be, but is not limited to, CUDA, Radeon Open Compute Platform (“ROCm”), OpenCL (OpenCL™ is developed by Khronos group), SYCL, or Intel oneAPI.[0367] In at least one embodiment, a software stack 3700 of a programming platform provides an execution environment for an application 3701. In at least one embodiment, application 3701
may include any computer software capable of being launched on software stack 3700. In at least one embodiment, application 3701 may include, but is not limited to, an artificial intelligence ("AI")/machine learning ("ML") application, a high performance computing ("HPC") application, a virtual desktop infrastructure ("VDI"), or a datacenter workload.[0368] In at least one embodiment, application 3701 and software stack 3700 run on hardware 3707. Hardware 3707 may include one or more GPUs, CPUs, FPGAs, AI engines, and/or other types of compute devices that support a programming platform, in at least one embodiment. In at least one embodiment, such as with CUDA, software stack 3700 may be vendor-specific and compatible with only devices from particular vendor(s). In at least one embodiment, such as with OpenCL, software stack 3700 may be used with devices from different vendors. In at least one embodiment, hardware 3707 includes a host connected to one or more devices that can be accessed to perform computational tasks via application programming interface ("API") calls. A device within hardware 3707 may include, but is not limited to, a GPU, FPGA, AI engine, or other compute device (but may also include a CPU) and its memory, as opposed to a host within hardware 3707 that may include, but is not limited to, a CPU (but may also include a compute device) and its memory, in at least one embodiment.[0369] In at least one embodiment, software stack 3700 of a programming platform includes, without limitation, a number of libraries 3703, a runtime 3705, and a device kernel driver 3706. Each of libraries 3703 may include data and programming code that can be used by computer programs and leveraged during software development, in at least one embodiment. In at least one embodiment, libraries 3703 may include, but are not limited to, pre-written code and subroutines, classes, values, type specifications, configuration data, documentation, help data, and/or message templates. In at least one embodiment, libraries 3703 include functions that are optimized for execution on one or more types of devices. In at least one embodiment, libraries 3703 may include, but are not limited to, functions for performing mathematical, deep learning, and/or other types of operations on devices. In at least one embodiment, libraries 3803 are associated with corresponding APIs 3802, which may include one or more APIs, that expose functions implemented in libraries 3803.[0370] In at least one embodiment, application 3701 is written as source code that is compiled into executable code, as discussed in greater detail below in conjunction with Figure 42.
Executable code of application 3701 may run, at least in part, on an execution environment provided by software stack 3700, in at least one embodiment. In at least one embodiment, during execution of application 3701, code may be reached that needs to run on a device, as opposed to a host. In such a case, runtime 3705 may be called to load and launch requisite code on a device, in at least one embodiment. In at least one embodiment, runtime 3705 may include any technically feasible runtime system that is able to support execution of application 3701.[0371] In at least one embodiment, runtime 3705 is implemented as one or more runtime libraries associated with corresponding APIs, which are shown as API(s) 3704. One or more of such runtime libraries may include, without limitation, functions for memory management, execution control, device management, error handling, and/or synchronization, among other things, in at least one embodiment. In at least one embodiment, memory management functions may include, but are not limited to, functions to allocate, deallocate, and copy device memory, as well as transfer data between host memory and device memory. In at least one embodiment, execution control functions may include, but are not limited to, functions to launch a function (sometimes referred to as a "kernel" when a function is a global function callable from a host) on a device and set attribute values in a buffer maintained by a runtime library for a given function to be executed on a device.[0372] Runtime libraries and corresponding API(s) 3704 may be implemented in any technically feasible manner, in at least one embodiment. In at least one embodiment, one (or any number of) API may expose a low-level set of functions for fine-grained control of a device, while another (or any number of) API may expose a higher-level set of such functions. In at least one embodiment, a high-level runtime API may be built on top of a low-level API. In at least one embodiment, one or more of runtime APIs may be language-specific APIs that are layered on top of a language-independent runtime API.[0373] In at least one embodiment, device kernel driver 3706 is configured to facilitate communication with an underlying device. In at least one embodiment, device kernel driver 3706 may provide low-level functionalities upon which APIs, such as API(s) 3704, and/or other software relies. In at least one embodiment, device kernel driver 3706 may be configured to compile intermediate representation ("IR") code into binary code at runtime. For CUDA, device kernel driver 3706 may compile Parallel Thread Execution ("PTX") IR code that is not hardware
specific into binary code for a specific target device at runtime (with caching of compiled binary code), which is also sometimes referred to as “finalizing” code, in at least one embodiment. Doing so may permit finalized code to run on a target device, which may not have existed when source code was originally compiled into PTX code, in at least one embodiment. Alternatively, in at least one embodiment, device source code may be compiled into binary code offline, without requiring device kernel driver 3706 to compile IR code at runtime.[0374] Figure 38 illustrates a CUDA implementation of software stack 3700 of Figure 37, in accordance with at least one embodiment. In at least one embodiment, a CUDA software stack 3800, on which an application 3801 may be launched, includes CUDA libraries 3803, a CUDA runtime 3805, a CUDA driver 3807, and a device kernel driver 3808. In at least one embodiment, CUDA software stack 3800 executes on hardware 3809, which may include a GPU that supports CUDA and is developed by NVIDIA Corporation of Santa Clara, CA.[0375] In at least one embodiment, application 3801, CUDA runtime 3805, and device kernel driver 3808 may perform similar functionalities as application 3701, runtime 3705, and device kernel driver 3706, respectively, which are described above in conjunction with Figure 37. In at least one embodiment, CUDA driver 3807 includes a library (libcuda.so) that implements a CUDA driver API 3806. Similar to a CUDA runtime API 3804 implemented by a CUDA runtime library (cudart), CUDA driver API 3806 may, without limitation, expose functions for memory management, execution control, device management, error handling, synchronization, and/or graphics interoperability, among other things, in at least one embodiment. In at least one embodiment, CUDA driver API 3806 differs from CUDA runtime API 3804 in that CUDA runtime API 3804 simplifies device code management by providing implicit initialization, context (analogous to a process) management, and module (analogous to dynamically loaded libraries) management. In contrast to high-level CUDA runtime API 3804, CUDA driver API 3806 is a low-level API providing more fine-grained control of a device, particularly with respect to contexts and module loading, in at least one embodiment. In at least one embodiment, CUDA driver API 3806 may expose functions for context management that are not exposed by CUDA runtime API 3804. In at least one embodiment, CUDA driver API 3806 is also languageindependent and supports, e.g., OpenCL in addition to CUDA runtime API 3804. Further, in at least one embodiment, development libraries, including CUDA runtime 3805, may be considered
as separate from driver components, including user-mode CUDA driver 3807 and kernel-mode device driver 3808 (also sometimes referred to as a "display" driver).[0376] In at least one embodiment, CUDA libraries 3803 may include, but are not limited to, mathematical libraries, deep learning libraries, parallel algorithm libraries, and/or signal/image/video processing libraries, which parallel computing applications such as application 3801 may utilize. In at least one embodiment, CUDA libraries 3803 may include mathematical libraries such as a cuBLAS library that is an implementation of Basic Linear Algebra Subprograms ("BLAS") for performing linear algebra operations, a cuFFT library for computing fast Fourier transforms ("FFTs"), and a cuRAND library for generating random numbers, among others. In at least one embodiment, CUDA libraries 3803 may include deep learning libraries such as a cuDNN library of primitives for deep neural networks and a TensorRT platform for high-performance deep learning inference, among others.[0377] Figure 39 illustrates a ROCm implementation of software stack 3700 of Figure 37, in accordance with at least one embodiment. In at least one embodiment, a ROCm software stack 3900, on which an application 3901 may be launched, includes a language runtime 3903, a system runtime 3905, a thunk 3907, a ROCm kernel driver 3908, and a device kernel driver 3909. In at least one embodiment, ROCm software stack 3900 executes on hardware 3910, which may include a GPU that supports ROCm and is developed by AMD Corporation of Santa Clara, CA.[0378] In at least one embodiment, application 3901 may perform similar functionalities as application 3701 discussed above in conjunction with Figure 37. In addition, language runtime 3903 and system runtime 3905 may perform similar functionalities as runtime 3705 discussed above in conjunction with Figure 37, in at least one embodiment. In at least one embodiment, language runtime 3903 and system runtime 3905 differ in that system runtime 3905 is a language-independent runtime that implements a ROCr system runtime API 3904 and makes use of a Heterogeneous System Architecture ("HSA") Runtime API. The HSA runtime API is a thin, user-mode API that exposes interfaces to access and interact with an AMD GPU, including functions for memory management, execution control via architected dispatch of kernels, error handling, system and agent information, and runtime initialization and shutdown, among other things, in at least one embodiment. In contrast to system runtime 3905, language runtime 3903
is an implementation of a language-specific runtime API 3902 layered on top of ROCr system runtime API 3904, in at least one embodiment. In at least one embodiment, language runtime API may include, but is not limited to, a Heterogeneous Compute Interface for Portability ("HIP") language runtime API, a Heterogeneous Compute Compiler ("HCC") language runtime API, or an OpenCL API, among others. The HIP language in particular is an extension of the C++ programming language with functionally similar versions of CUDA mechanisms, and, in at least one embodiment, a HIP language runtime API includes functions that are similar to those of CUDA runtime API 3804 discussed above in conjunction with Figure 38, such as functions for memory management, execution control, device management, error handling, and synchronization, among other things.[0379] In at least one embodiment, thunk (ROCt) 3907 is an interface that can be used to interact with underlying ROCm driver 3908. In at least one embodiment, ROCm driver 3908 is a ROCk driver, which is a combination of an AMDGPU driver and an HSA kernel driver (amdkfd). In at least one embodiment, the AMDGPU driver is a device kernel driver for GPUs developed by AMD that performs similar functionalities as device kernel driver 3706 discussed above in conjunction with Figure 37. In at least one embodiment, the HSA kernel driver is a driver permitting different types of processors to share system resources more effectively via hardware features.[0380] In at least one embodiment, various libraries (not shown) may be included in ROCm software stack 3900 above language runtime 3903 and provide functionality similar to CUDA libraries 3803, discussed above in conjunction with Figure 38. In at least one embodiment, various libraries may include, but are not limited to, mathematical, deep learning, and/or other libraries such as a hipBLAS library that implements functions similar to those of CUDA cuBLAS, a rocFFT library for computing FFTs that is similar to CUDA cuFFT, among others.[0381] Figure 40 illustrates an OpenCL implementation of software stack 3700 of Figure 37, in accordance with at least one embodiment. In at least one embodiment, an OpenCL software stack 4000, on which an application 4001 may be launched, includes an OpenCL framework 4005, an OpenCL runtime 4006, and a driver 4007. In at least one embodiment, OpenCL software stack 4000 executes on hardware 4008 that is not vendor-specific. As OpenCL is supported by devices developed by different vendors, specific OpenCL drivers may be required to interoperate with hardware from such vendors, in at least one embodiment.
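As an informal illustration of this vendor-neutral layering, the short host-side sketch below enumerates whichever OpenCL platforms (typically one per installed vendor driver) and devices happen to be present. It uses only standard OpenCL C API calls; the target-version macro and the 256-byte name buffer are arbitrary choices for the sketch and are not taken from the figures discussed above.

```cpp
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    // Each installed vendor driver typically appears as its own platform.
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);
    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        char name[256] = {0};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(name), name, nullptr);

        cl_uint num_devices = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &num_devices);
        std::printf("platform: %s (%u device(s))\n", name, num_devices);
    }
    return 0;
}
```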
[0382] In at least one embodiment, application 4001, OpenCL runtime 4006, device kernel driver 4007, and hardware 4008 may perform similar functionalities as application 3701, runtime 3705, device kernel driver 3706, and hardware 3707, respectively, that are discussed above in conjunction with Figure 37. In at least one embodiment, application 4001 further includes an OpenCL kernel 4002 with code that is to be executed on a device.[0383] In at least one embodiment, OpenCL defines a “platform” that allows a host to control devices connected to a host. In at least one embodiment, an OpenCL framework provides a platform layer API and a runtime API, shown as platform API 4003 and runtime API 4005. In at least one embodiment, runtime API 4005 uses contexts to manage execution of kernels on devices. In at least one embodiment, each identified device may be associated with a respective context, which runtime API 4005 may use to manage command queues, program objects, and kernel objects, share memory objects, among other things, for that device. In at least one embodiment, platform API 4003 exposes functions that permit device contexts to be used to select and initialize devices, submit work to devices via command queues, and enable data transfer to and from devices, among other things. In addition, OpenCL framework provides various built-in functions (not shown), including math functions, relational functions, and image processing functions, among others, in at least one embodiment.[0384] In at least one embodiment, a compiler 4004 is also included in OpenCL frame-work 4005. Source code may be compiled offline prior to executing an application or online during execution of an application, in at least one embodiment. In contrast to CUDA and ROCm, OpenCL applications in at least one embodiment may be compiled online by compiler 4004, which is included to be representative of any number of compilers that may be used to compile source code and/or IR code, such as Standard Portable Intermediate Representation (“SPIR-V”) code, into binary code. Alternatively, in at least one embodiment, OpenCL applications may be compiled offline, prior to execution of such applications.[0385] Figure 41 illustrates software that is supported by a programming platform, in accordance with at least one embodiment. In at least one embodiment, a programming platform 4104 is configured to support various programming models 4103, middlewares and/or libraries 4102, and frameworks 4101 that an application 4100 may rely upon. In at least one embodiment, application 4100 may be an AI/ML application implemented using, in at least one embodiment, a
deep learning framework such as MXNet, PyTorch, or TensorFlow, which may rely on libraries such as cuDNN, NVIDIA Collective Communications Library ("NCCL"), and/or NVIDIA Developer Data Loading Library ("DALI") CUDA libraries to provide accelerated computing on underlying hardware.[0386] In at least one embodiment, programming platform 4104 may be one of a CUDA, ROCm, or OpenCL platform described above in conjunction with Figure 38, Figure 39, and Figure 40, respectively. In at least one embodiment, programming platform 4104 supports multiple programming models 4103, which are abstractions of an underlying computing system permitting expressions of algorithms and data structures. Programming models 4103 may expose features of underlying hardware in order to improve performance, in at least one embodiment. In at least one embodiment, programming models 4103 may include, but are not limited to, CUDA, HIP, OpenCL, C++ Accelerated Massive Parallelism ("C++ AMP"), Open Multi-Processing ("OpenMP"), Open Accelerators ("OpenACC"), and/or Vulkan Compute.[0387] In at least one embodiment, libraries and/or middlewares 4102 provide implementations of abstractions of programming models 4103. In at least one embodiment, such libraries include data and programming code that may be used by computer programs and leveraged during software development. In at least one embodiment, such middlewares include software that provides services to applications beyond those available from programming platform 4104. In at least one embodiment, libraries and/or middlewares 4102 may include, but are not limited to, cuBLAS, cuFFT, cuRAND, and other CUDA libraries, or rocBLAS, rocFFT, rocRAND, and other ROCm libraries. In addition, in at least one embodiment, libraries and/or middlewares 4102 may include NCCL and ROCm Communication Collectives Library ("RCCL") libraries providing communication routines for GPUs, a MIOpen library for deep learning acceleration, and/or an Eigen library for linear algebra, matrix and vector operations, geometrical transformations, numerical solvers, and related algorithms.[0388] In at least one embodiment, application frameworks 4101 depend on libraries and/or middlewares 4102. In at least one embodiment, each of application frameworks 4101 is a software framework used to implement a standard structure of application software. An AI/ML application may be implemented using a framework such as Caffe, Caffe2, TensorFlow, Keras, PyTorch, or MxNet deep learning frameworks, in at least one embodiment.
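For a concrete, if simplified, picture of how such a library is consumed by host code, the sketch below calls cuBLAS to compute a single-precision SAXPY (y = alpha*x + y) on the device. The vector length and scalar are arbitrary illustrative values, error checking is omitted for brevity, and the example assumes a CUDA-capable GPU with the cuBLAS headers and library available; it is a sketch, not a definitive usage pattern.

```cpp
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 4;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    // Allocate device memory and copy the operands over.
    float *dx = nullptr, *dy = nullptr;
    cudaMalloc((void**)&dx, n * sizeof(float));
    cudaMalloc((void**)&dy, n * sizeof(float));
    cudaMemcpy(dx, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // The library performs the computation on the device: y = alpha * x + y.
    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 3.0f;
    cublasSaxpy(handle, n, &alpha, dx, 1, dy, 1);
    cublasDestroy(handle);

    cudaMemcpy(y.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("y[0] = %f\n", y[0]);  // expected 5.0 with these inputs
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```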
[0389] Figure 42 illustrates compiling code to execute on one of programming platforms of Figures 37 - 40, in accordance with at least one embodiment. In at least one embodiment, a compiler 4201 receives source code 4200 that includes both host code as well as device code. In at least one embodiment, compiler 4201 is configured to convert source code 4200 into host executable code 4202 for execution on a host and device executable code 4203 for execution on a device. In at least one embodiment, source code 4200 may either be compiled offline prior to execution of an application, or online during execution of an application.[0390] In at least one embodiment, source code 4200 may include code in any programming language supported by compiler 4201, such as C++, C, Fortran, etc. In at least one embodiment, source code 4200 may be included in a single-source file having a mixture of host code and device code, with locations of device code being indicated therein. In at least one embodiment, a single-source file may be a .cu file that includes CUDA code or a .hip.cpp file that includes HIP code. Alternatively, in at least one embodiment, source code 4200 may include multiple source code files, rather than a single-source file, into which host code and device code are separated.[0391] In at least one embodiment, compiler 4201 is configured to compile source code 4200 into host executable code 4202 for execution on a host and device executable code 4203 for execution on a device. In at least one embodiment, compiler 4201 performs operations including parsing source code 4200 into an abstract syntax tree (AST), performing optimizations, and generating executable code. In at least one embodiment in which source code 4200 includes a single-source file, compiler 4201 may separate device code from host code in such a single-source file, compile device code and host code into device executable code 4203 and host executable code 4202, respectively, and link device executable code 4203 and host executable code 4202 together in a single file, as discussed in greater detail below with respect to Figure 26.[0392] In at least one embodiment, host executable code 4202 and device executable code 4203 may be in any suitable format, such as binary code and/or IR code. In a case of CUDA, host executable code 4202 may include native object code and device executable code 4203 may include code in PTX intermediate representation, in at least one embodiment. In a case of ROCm, both host executable code 4202 and device executable code 4203 may include target binary code, in at least one embodiment.
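One hedged sketch of the CUDA case just described — hardware-independent PTX that is finalized by the driver at run time — is shown below using the CUDA driver API. The PTX file name supplied on the command line and the kernel name "my_kernel" are placeholders chosen for illustration; a real module would carry PTX emitted earlier by an offline compiler, the launch dimensions are arbitrary, and the kernel is assumed to take no arguments.

```cpp
#include <cuda.h>
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>

int main(int argc, char** argv) {
    if (argc < 2) { std::printf("usage: %s kernel.ptx\n", argv[0]); return 1; }

    // Read the previously generated, hardware-independent PTX text.
    std::ifstream in(argv[1]);
    std::stringstream ss;
    ss << in.rdbuf();
    std::string ptx = ss.str();

    cuInit(0);
    CUdevice dev;  cuDeviceGet(&dev, 0);
    CUcontext ctx; cuCtxCreate(&ctx, 0, dev);

    // The driver JIT-compiles ("finalizes") the PTX for whatever GPU is present.
    CUmodule mod;
    if (cuModuleLoadData(&mod, ptx.c_str()) != CUDA_SUCCESS) {
        std::printf("failed to finalize PTX\n");
        return 1;
    }
    CUfunction fn;
    if (cuModuleGetFunction(&fn, mod, "my_kernel") != CUDA_SUCCESS) {
        std::printf("kernel not found in module\n");
        return 1;
    }

    // Launch a 1x1x1 grid of 32-thread blocks with no kernel parameters.
    cuLaunchKernel(fn, 1, 1, 1, 32, 1, 1, 0, nullptr, nullptr, nullptr);
    cuCtxSynchronize();

    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}
```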
[0393] Other variations are within spirit of present disclosure. Thus, while disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in drawings and have been described above in detail. It should be understood, however, that there is no intention to limit disclosure to specific form or forms disclosed, but on contrary, intention is to cover all modifications, alternative constructions, and equivalents falling within spirit and scope of disclosure, as defined in appended claims.[0394] Use of terms “a” and “an” and “the” and similar referents in context of describing disclosed embodiments (especially in context of following claims) are to be construed to cover both singular and plural, unless otherwise indicated herein or clearly contradicted by context, and not as a definition of a term. Terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (meaning “including, but not limited to,”) unless otherwise noted, term “connected,” when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within range, unless otherwise indicated herein and each separate value is incorporated into specification as if it were individually recited herein. In at least one embodiment, use of term “set” (e.g., “a set of items”) or “subset” unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, term “subset” of a corresponding set does not necessarily denote a proper subset of corresponding set, but subset and corresponding set may be equal.[0395] Conjunctive language, such as phrases of form “at least one of A, B, and C,” or “at least one of A, B and C,” unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of set of A and B and C. In at least one embodiment of a set having three members, conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present. In addition, unless otherwise noted or contradicted by context, term “plurality” indicates
a state of being plural (e.g., “a plurality of items” indicates multiple items). In at least one embodiment, a number of items in a plurality is at least two, but can be more when so indicated either explicitly or by context. Further, unless stated otherwise or otherwise clear from context, phrase “based on” means “based at least in part on” and not “based solely on.”[0396] Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In at least one embodiment, a process such as those processes described herein (or variations and/or combinations thereof) is performed under control of one or more computer systems configured with executable instructions and is implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. In at least one embodiment, code is stored on a computer-readable storage medium. In at least one embodiment, in form of a computer program comprising a plurality of instructions executable by one or more processors. In at least one embodiment, a computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., a propagating transient electric or electromagnetic transmission) but includes non-transitory data storage circuitry (e.g., buffers, cache, and queues) within transceivers of transitory signals. In at least one embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer- readable storage media having stored thereon executable instructions (or other memory to store executable instructions) that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause computer system to perform operations described herein. A set of non-transitory computer-readable storage media, in at least one embodiment, comprises multiple non-transitory computer-readable storage media and one or more of individual non- transitory storage media of multiple non-transitory computer-readable storage media lack all of code while multiple non-transitory computer-readable storage media collectively store all of code. In at least one embodiment, executable instructions are executed such that different instructions are executed by different processors — in at least one embodiment, a non-transitory computer-readable storage medium store instructions and a main central processing unit (“CPU”) executes some of instructions while a graphics processing unit (“GPU”) executes other instructions. In at least one embodiment, different components of a computer system have separate processors and different processors execute different subsets of instructions.
[0397] Accordingly, in at least one embodiment, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein and such computer systems are configured with applicable hardware and/or software that enable performance of operations. Further, a computer system that implements at least one embodiment of present disclosure is a single device and, in another embodiment, is a distributed computer system comprising multiple devices that operate differently such that distributed computer system performs operations described herein and such that a single device does not perform all operations.[0398] Use of any and all of the at least one embodiments, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of disclosure and does not pose a limitation on scope of disclosure unless otherwise claimed. No language in specification should be construed as indicating any non-claimed element as essential to practice of disclosure.[0399] All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.[0400] In description and claims, terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms may be not intended as synonyms for each other. Rather, in ones of at least one embodiments, “connected” or “coupled” may be used to indicate that two or more elements are in direct or indirect physical or electrical contact with each other. “Coupled” may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.[0401] Unless specifically stated otherwise, it may be appreciated that throughout specification terms such as “processing,” “computing,” “calculating,” “determining,” or like, refer to action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within computing system’s registers and/or memories into other data similarly represented as physical quantities within computing system’s memories, registers or other such information storage, transmission or display devices.
[0402] In a similar manner, term “processor” may refer to any device or portion of a device that processes electronic data from registers and/or memory and transform that electronic data into other electronic data that may be stored in registers and/or memory. As non-limiting ones of the at least one embodiments, “processor” may be a CPU or a GPU. A “computing platform” may comprise one or more processors. As used herein, “software” processes may include, in at least one embodiment, software and/or hardware entities that perform work over time, such as tasks, threads, and intelligent agents. Also, each process may refer to multiple processes, for carrying out instructions in sequence or in parallel, continuously or intermittently. Terms “system” and “method” are used herein interchangeably insofar as system may embody one or more methods and methods may be considered a system.[0403] In present document, references may be made to obtaining, acquiring, receiving, or inputting analog or digital data into a subsystem, computer system, or computer-implemented machine. In at least one embodiment, process of obtaining, acquiring, receiving, or inputting analog and digital data can be accomplished in a variety of ways such as by receiving data as a parameter of a function call or a call to an application programming interface. In some implementations, process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a serial or parallel interface. In another implementation, process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a computer network from providing entity to acquiring entity. References may also be made to providing, outputting, transmitting, sending, or presenting analog or digital data. In various ones of the at least one embodiments, process of providing, outputting, transmitting, sending, or presenting analog or digital data can be accomplished by transferring data as an input or output parameter of a function call, a parameter of an application programming interface or interprocess communication mechanism.[0404] Although discussion above sets forth ones of the at least one embodiments having implementations of described techniques, other architectures may be used to implement described functionality, and are intended to be within scope of this disclosure. Furthermore, although specific distributions of responsibilities are defined above for purposes of discussion, various functions and responsibilities might be distributed and divided in different ways, depending on circumstances.
[0405] Furthermore, although subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that subject matter claimed in appended claims is not necessarily limited to specific features or acts described. Rather, specific features and acts are disclosed as exemplary forms of implementing the claims. |
When threshold values for the capacitive sensors in a touch pad are periodically updated to allow for drift in these values, the updating process may be suspended while a nearby radio antenna is transmitting. Such transmissions from an antenna that is located next to the touch pad could otherwise significantly alter the effective capacitance in these sensors and thereby make the touch pad unreliable for registering a touch. Even though the capacitance may return to normal fairly quickly after the transmission stops, the moving average technique typically used to smooth out short term variation may incorporate the period of changed capacitance and thereby extend the period of unreliability, but suspending the update process during a transmission can avoid this problem. |
What is claimed is: 1. A wireless communications device having a processor, a memory, a radio antenna for Near Field Communications (NFC), and a touch pad having a plurality of touch sensors, the device configured to perform these operations for each of multiple ones of the sensors: a) periodically read a sensor value from the touch sensor; b) determine a moving average value for a quantity of the periodically read sensor values, and store the moving average value as a threshold value; c) repetitively update the threshold value by repetitively performing operations a) and b) over time; d) compare a next sensor value with the threshold value to determine whether to register a touch on the touch pad; and e) repeat operations a) through d) while no signal is transmitted from the antenna; wherein the device is further configured to start and stop transmitting a signal from the antenna, and to halt at least one of operations a) through c) during said transmission of the signal. 2. The device of claim 1, wherein the device is configured to halt operation c) during the transmission. 3. The device of claim 1, wherein the device is configured to halt operation b) during the transmission. 4. The device of claim 1, wherein the device is configured to halt operation a) during the transmission. 5. The device of claim 1, wherein the device is configured to resume updating the threshold value after determining the transmission has stopped. 6. The device of claim 1, wherein the device is configured to halt operations a), b), and c) by disabling the touch pad during the transmission. 7. The device of claim 6, wherein the device is configured to resume operations a), b), and c) by enabling the touch pad after the transmission stops. 8. The device of claim 1, wherein a plane of the antenna is parallel to a plane of the touch pad. 9. A touch pad assembly for a wireless communications device using Near Field Communications, the touch pad assembly comprising: a touch surface having multiple capacitive sensors arranged to sense a touch on the touch surface; logic configured to periodically read a capacitance value for each of the multiple capacitive sensors and to determine a moving average of the values for a multiple number of previous readings for each of the multiple capacitive sensors; and an input to determine when a nearby Near Field Communications (NFC) antenna is transmitting; wherein the logic is to periodically update the moving average for each sensor, and is to halt said updating of the moving average when the antenna is transmitting. 10. The touch pad assembly of claim 9, wherein the logic is configured to halt the updating by stopping operation of the touch pad. 11. The touch pad assembly of claim 9, wherein the logic is configured to halt the updating by stopping the periodic updating of the moving average. 12. The touch pad assembly of claim 9, wherein the logic is configured to resume updating the moving average after the antenna stops transmitting. 13. A method of reducing interference of a touch pad by a co-located radio antenna, comprising: a) periodically read a sensor value from a sensor in the touch pad; b) determine a moving average value for a quantity of the periodically read sensor values,
and store the moving average value as a threshold value; c) repetitively update the threshold value by repetitively performing operations a) and b) over time; d) compare a next sensor value with the threshold value to determine whether to register a touch on the touch pad; and e) repeat operations a) through d) while no signal is being transmitted from the antenna; f) start and stop transmissions from the radio antenna; g) halt at least one of operations a) through c) during said transmission of the signal. 14. The method of claim 13, further comprising halting operation c) during the transmission. 15. The method of claim 13, further comprising halting operation b) during the transmission. 16. The method of claim 13, further comprising halting operation a) during the transmission. 17. The method of claim 13, further comprising resuming updating the threshold value after determining the transmission has stopped. 18. A computer-readable non-transitory storage medium that contains instructions, which when executed by one or more processors result in performing operations comprising: determining a transmission from a radio antenna is about to start; disabling a touch pad after said determining the transmission is about to start; determining the transmission has stopped; and enabling the touch pad after said determining the transmission has stopped. 19. The medium of claim 18, wherein the operations further comprise causing the transmission to start and stop. 20. A wireless communications device having a processor, a memory, a radio antenna for Near Field Communications (NFC), and a touch pad having a plurality of touch sensors, the device configured to perform these operations for each of multiple ones of the sensors: a) periodically read a sensor value from the touch sensor; b) determine a moving average value for a quantity of the periodically read sensor values, and store the moving average value as a threshold value; c) repetitively update the threshold value by repetitively performing operations a) and b) over time; d) compare a next sensor value with the threshold value to determine whether to register a touch on the touch pad; e) determine whether a transmission from an NFC antenna is transitioning between a transmit status and a non-transmit status; and f) if the transmission status is transitioning, temporarily replace operation b) with an operation of storing a most recent sensor value as the threshold value; wherein operation f) is limited to a specific number of consecutive updates when the transmission status is determined to transition. 21.
A computer-readable non-transitory storage medium that contains instructions, which when executed by one or more processors result in performing operations comprising: a) periodically reading a sensor value from the touch sensor; b) determining a moving average value for a quantity of the periodically read sensor values, and storing the moving average value as a threshold value; c) repetitively updating the threshold value by repetitively performing operations a) and b) over time; d) comparing a next sensor value with the threshold value to determine whether to register a touch on the touch pad; e) determining whether a transmission from an NFC antenna is transitioning between a transmit status and a non-transmit status; and f) if the transmission status is transitioning, temporarily replacing operation b) with an operation of storing a most recent sensor value as the threshold value; wherein operation f) is limited to a specific number of consecutive updates when the transmission status is determined to transition.
CO-EXISTENCE OF TOUCH SENSOR AND NFC ANTENNA BACKGROUND Very thin notebook computers frequently have a chassis made of metal because the structural strength of the metal tends to reduce damage caused by flexing of the thin chassis in everyday use. However, most computer devices now include at least one radio and its associated internal antenna. Placing a radio antenna under a cutout in the metal may be desirable because the metal might otherwise interfere with transmissions from the antenna, especially in the case of Near Field Communications (NFC) radios, which primarily use the magnetic portion of the electromagnetic radio waves. The cutout made to house a touch pad may be used for this purpose because it is approximately the right size. A modern touch pad generally consists of an array of capacitive touch sensors. A threshold value for each sensor may be set and stored in the touch pad's memory during system boot up. The readings on the capacitive sensor array may then be continuously compared to these threshold values to determine whether a touch event on the touch pad has occurred. The threshold values for the capacitive sensors may also be periodically updated in a moving average manner to capture longer term wander of the baseline values due to environmental changes (such as temperature, humidity, surroundings, etc.). A transmission from a nearby radio antenna may significantly change the capacitive characteristic detected by the sensors. Even if the charge returns to normal fairly soon after the transmission stops, the moving average technique for updating the threshold values may cause the recorded threshold values to return to normal more slowly, and be out of balance with the actual charge values. In addition, the higher sensor reading during a transmission might be falsely interpreted as a touch (either a palm touch if all the sensors have a sufficiently high reading, or a finger touch if only a small subset of the sensors have a sufficiently high reading). Any of these conditions can cause the touch pad to be unusable for its intended function during the transmission and/or for a period of time after a transmission. In addition to trackpad devices commonly placed near the keyboard of a notebook computer, touch screens such as those used in tablet computers and smart phones may also suffer from this same problem, since they typically use capacitive sensors. BRIEF DESCRIPTION OF THE DRAWINGS Some embodiments of the invention may be better understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings: Fig. 1A shows a communications device, according to an embodiment of the invention. Fig. 1B shows functional components within a wireless communications device, according to an embodiment of the invention. Fig. 2 shows a touch pad with a radio antenna located beneath it, according to an embodiment of the invention. Fig. 3 shows a flow diagram of a method of disabling sensor values during a transmission, according to an embodiment of the invention. Fig. 4 shows a flow diagram of a method of ignoring sensor values during a transmission, according to an embodiment of the invention. Fig. 5 shows a flow diagram of a method of raising threshold values during a transmission, according to an embodiment of the invention. Fig. 6 shows a block diagram of a touch pad, radio, antenna, and touch pad controller, according to an embodiment of the invention. Fig.
7 shows a flow diagram of a method of enabling/disabling a touch pad during transmissions, according to an embodiment of the invention. DETAILED DESCRIPTION In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. References to "one embodiment", "an embodiment", "example embodiment", "various embodiments", etc., indicate that the embodiment(s) of the invention so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. Further, some embodiments may have some, all, or none of the features described for other embodiments. In the following description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" is used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" is used to indicate that two or more elements co-operate or interact with each other, but they may or may not have intervening physical or electrical components between them. As used in the claims, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common element, merely indicates that different instances of like elements are being referred to, and is not intended to imply that the elements so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. Discussions herein utilizing terms such as, for example, "processing", "computing", "calculating", "determining", "establishing", "analyzing", "checking", or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulate and/or transform data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information storage medium that may store instructions to perform operations and/or processes. Various embodiments of the invention may be implemented fully or partially in software and/or firmware. This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable performance of the operations described herein. The instructions may be in any suitable form, such as but not limited to source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. Such a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; a flash memory, etc.
The term "wireless" may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that communicate data by using modulated electromagnetic radiation through a non-solid medium. A wireless device may comprise at least one antenna, at least one radio, at least one memory, and at least one processor, where the radio(s) transmits signals through the antenna that represent data and receives signals through the antenna that represent data, while the processor(s) may process the data to be transmitted and the data that has been received. The processor(s) may also process other data which is neither transmitted nor received. As used within this document, the term "communicate" is intended to include transmitting and/or receiving. This may be particularly useful in claims when describing the organization of data that is being transmitted by one device and received by another, but only the functionality of one of those devices is required to infringe the claim. Similarly, the exchange of data between a network controller and a mobile device (both devices transmit and receive during the exchange) may be described as 'communicating', when only the functionality of one of those devices is being claimed. Fig. 1A shows a wireless communications device, according to an embodiment of the invention. Device 100 is shown as a typical notebook computer, with a keyboard 110, a display 120, and a touch pad 130, but device 100 may be any device, with any shape and configuration, that utilizes NFC wireless communications and has a touch pad input device. Although the touch pad 130 shown in Fig. 1A is an example of a trackpad (i.e., a small touch-sensitive area traditionally located near the keyboard and used as a replacement for a computer mouse in notebook computers), the term 'touch pad', as used herein, may also include a touch screen (i.e., a display screen whose surface is sensitive to localized touch), or any other capacitive sensor input device that is sensitive to touch. Fig. 1B shows functional components within a wireless communications device, according to an embodiment of the invention. In addition to keyboard 110, display 120, and touch pad 130 as shown in Fig. 1A, wireless communications device 100 is also shown with processor 150, memory 160, radio 170, and radio antenna 180. Although device 100 is shown with one each of these items, more than one of any of these items may be included in wireless device 100. Fig. 2 shows a touch pad with a radio antenna located beneath it, according to an embodiment of the invention. Touch pad 130 is shown with two buttons 133 and 135 for additional inputs, as is common with many touch pads located near computer keyboards such as shown in Fig. 1A, although buttons and a similar location should not be seen as limitations on the various embodiments of the invention. In some embodiments, touch pad 130 may be a touch pad assembly containing logic to perform many of the operations described herein. The logic may assume any feasible form, such as but not limited to: 1) discrete circuitry, 2) a state machine, 3) programmable instructions, 4) etc. Antenna 220 is shown as a loop antenna with multiple loops, although multiple loops and/or the loop configuration should not be considered limitations on the various antenna configurations. Antenna 220 is also shown with an
overall planar rectangular shape, although some antennas may be non-planar, and even planar antennas may have other shapes, such as but not limited to square, circular, oval, etc. The plane of antenna 220 is shown parallel to the plane of the touch pad 130, which would be considered a best mode for antennas with a planar shape that are expected to transmit through the opening in the chassis that was created for the touch pad, but other configurations may also be used. In some embodiments, the antenna and touch pad may be built into a single structure. One way to accomplish this may be to create the antenna as a trace on a circuit board which forms one of the layers of a multi-layer touch pad assembly, although various embodiments of the invention are not limited in this respect. The touch area of the touch pad may be comprised of an array of capacitive sensors. When a human finger touches the touch pad, the proximity of the biological material in the finger may change the sensed capacitance of the sensors within the touched area, and this change may then be interpreted as a touch. Because small variations in capacitance may be normal even without a touch, the amount of change may need to exceed a specified minimum amount before a touch is to be interpreted. For each sensor, a value that is presumed to be the 'non-touch' reference value may be previously recorded and then compared with the current value to determine if the change is significant enough to be reliably interpreted as a touch. Because the steady state value of capacitance in each sensor may drift due to temperature, humidity, age, proximity of the user's hand, etc., the current value of capacitance at each sensor may be periodically measured and recorded, and the reference value updated. To eliminate short term error in this process, a moving average of such measurements, taken over a period of time, may be used to determine the current reference value for each sensor. For the purposes of this document, such reference values may be referred to as 'threshold values'. In some embodiments, at system startup a slightly different sequence may be used to create the initial threshold values, since there may be no prior history of values to depend upon. In one embodiment, a series of sensor values may be measured for each capacitive sensor over a period of time, and all those sensor values for all the sensors may be stored in a table, with the oldest sensor values being removed to make room for the newest sensor values. The table may contain 'x' times 'y' sensor readings, where 'x' is the number of readings being stored, and 'y' is the number of sensors whose readings are being stored. For each sensor, the average may be calculated by adding up the stored sensor values and dividing the sum by 'x'. In another embodiment, only the threshold values may be stored. Each time a new sensor reading is measured, the associated threshold value may be updated by assuming the threshold value represents the average of the past number of readings and doing an incremental adjustment to it. For example, if the threshold value is based on an average of the last eight sensor values, the new threshold value may be calculated as: (7/8 of the old threshold value) + (1/8 of the new sensor value). This approach avoids having to maintain an 'x' by 'y' table. It may also simplify startup calculations, since the initial sensor value can be assumed as the threshold value, and all subsequent calculations can then follow the same algorithm.
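A minimal sketch of the incremental update just described is shown below; the window of eight readings matches the 7/8 + 1/8 example in the text, while the 16-bit sample type and integer scaling are illustrative assumptions rather than details from the disclosure.

```cpp
#include <cstdint>

// Incremental moving-average update: the stored threshold is treated as the
// average of the last kWindow readings and nudged toward each new sample.
constexpr uint32_t kWindow = 8;

uint16_t update_threshold(uint16_t old_threshold, uint16_t new_sample) {
    // new = (7/8) * old + (1/8) * sample, done in integer arithmetic.
    uint32_t scaled = (uint32_t)old_threshold * (kWindow - 1) + new_sample;
    return (uint16_t)(scaled / kWindow);
}

// At startup there is no history, so the first reading seeds the threshold and
// every later reading uses the same incremental formula.
uint16_t seed_threshold(uint16_t first_sample) { return first_sample; }
```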
These two methods of calculating the threshold values are examples. Other techniques may be used instead. The frequency of reading the sensor values, and the number of sensor values used to calculate the moving average, may be any feasible numbers. For example, a reading of all the sensor values might be taken four times per second, and a moving average of the most recent eight sensor readings might be used to calculate a threshold value. Note that these numbers are only an example used to illustrate the process, and should not be seen as a limitation on the various embodiments of the invention. As previously mentioned, to avoid inaccuracies in detecting a touch, the amount of change determined by the comparison should exceed a specified minimum amount before a touch is to be registered. This may be handled in various ways. In one embodiment, the recorded threshold values should include the minimum difference (i.e., the moving average plus the minimum difference), so that a direct comparison with the current value can be used to determine a touch. In another embodiment, the actual moving average value may be recorded as a threshold value, and the comparison process itself detects a difference of more than the minimum difference before a touch is to be registered. A touch may typically be simultaneously sensed by multiple adjacent sensors (a 'cluster' of sensors) due to the width of a human finger and the spacing of the sensors. A sliding touch may be registered when the location of a cluster moves across the touch surface over time, without an intervening absence of touch. A palm touch may be registered if the number of sensors in a cluster is significantly more than the number expected from a finger touch. In general, a palm touch is inadvertently caused when the user accidentally lays his or her palm across the touch pad. It may generally be considered an error by the user, and ignored by the system. In general, a finger touch may be inferred when a suitably small subset of all the sensors simultaneously register a touch, while a palm touch may be inferred when a suitably large subset of all of the sensors simultaneously register a touch. The percentage of sensors that distinguish a finger touch (e.g., < m %) and a palm touch (e.g., > n %) may be a design choice, and in general does not affect the determination of 'threshold values', as that term is used in this document. In many embodiments, the spacing between the touch pad 130 and antenna 220 in Fig. 2 may be small (e.g., less than 3 millimeters). Because of this proximity, a transmission from the antenna may significantly change the charge in the sensors during the transmission, and any sensor reading taken during that time may result in false threshold values that can persist until the moving average calculations no longer include the effects of the transmission period. In addition, the increased sensor values at the beginning of a transmission may be falsely interpreted as a touch.
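The following sketch illustrates one way the finger/palm distinction described above could be coded, using the embodiment in which the comparison itself applies the minimum difference. The sensor count, minimum delta, and the small/large cluster cut-offs stand in for the unspecified m% and n% design choices and are purely illustrative.

```cpp
#include <cstddef>
#include <cstdint>

enum class TouchClass { None, Finger, Palm };

// Illustrative constants: a 64-sensor array, a minimum delta above the stored
// threshold, and cluster-size cut-offs approximating the m% / n% design choices.
constexpr std::size_t kNumSensors = 64;
constexpr uint16_t kMinDelta = 50;
constexpr std::size_t kFingerMaxCount = 6;   // "suitably small" subset of sensors
constexpr std::size_t kPalmMinCount = 32;    // "suitably large" subset of sensors

TouchClass classify(const uint16_t sample[kNumSensors],
                    const uint16_t threshold[kNumSensors]) {
    std::size_t touched = 0;
    for (std::size_t i = 0; i < kNumSensors; ++i) {
        // A sensor counts toward the cluster only if it exceeds its threshold
        // by at least the minimum difference.
        if (sample[i] > threshold[i] + kMinDelta) {
            ++touched;
        }
    }
    if (touched == 0) return TouchClass::None;
    if (touched >= kPalmMinCount) return TouchClass::Palm;     // palm (or an NFC burst)
    if (touched <= kFingerMaxCount) return TouchClass::Finger; // small cluster: finger
    return TouchClass::None;                                   // ambiguous cluster size
}
```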
To avoid these issues, various techniques may be used to prevent the altered capacitive values from creating problems with the touch function, such as but not limited to one or more of these techniques: 1) any sensor values that are read during a transmission may be ignored, and not used to calculate the moving average values, 2) the reading of sensor values may be disabled during a transmission, so that the periodic values are not even read during that time, 3) the threshold values may be immediately raised at the beginning of a transmission to reflect the increased sensor values, and lowered to the pre-transmission values immediately after the transmission. Techniques 1) and 2) assume the touch pad will be unusable during the transmission, while technique 3) attempts to make the touch pad usable during the transmission. All these techniques rely on knowing when a transmission is being made from the antenna. If transmissions from the antenna are being controlled or monitored by a device that also controls the touch pad, this device may provide the necessary knowledge of when transmissions occur and use that knowledge to suppress or ignore periodic readings of the capacitive sensor values. Alternatively, if the touch pad detects a sudden and significant change in the values of all or nearly all of the sensors, this may be interpreted as a transmission period, and any of the previously discussed techniques may be used. When the readings return to the pre-transmission range, the transmission may be assumed to be over and normal processing of the touch pad inputs may resume. This sequence may also be used to ignore a palm touch, since both a palm touch and a transmission may have a similar effect on the touch sensors. Figs. 3, 4, and 5 show flow diagrams for various ways of handling threshold values when a transmission from a nearby NFC antenna may affect sensor inputs from a touch pad. These figures each focus on when to update the threshold values, rather than when to register a touch on the touch pad. It may be assumed that a touch on the touch pad is registered when a) the touch pad is enabled, and b) a suitably small subset of the sensors in the touch pad sense a large enough increase in capacitance over the threshold value that a touch may be inferred. Fig. 3 shows a flow diagram of a method of disabling sensor values during a transmission, according to an embodiment of the invention. In flow chart 300, sensor values for the capacitive sensors in a touch pad are read at 310. Depending on the embodiment, these may be stored for future use and/or may immediately be made available for further use. Threshold values may then be computed at 320, using some form of moving average computation incorporating the results of several sets of previous sensor values, and the current threshold values may be stored for subsequent comparison. At 330, it may be determined whether a transmission from the NFC antenna is either imminent or is actively in progress, or whether no such transmission is imminent/active. If a transmission is not active or imminent, the device may continue to read sensor values at 310, and compute and update new threshold values at 320. The loop through 310, 320, and 330 may continue as long as no NFC transmissions are determined to be taking place or anticipated to take place immediately.
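A sketch of techniques 1) and 2) — halting reads and threshold updates while the antenna is active — might look like the loop below. The platform hooks (read_sensor, nfc_is_transmitting, report_touch, sleep_until_next_tick) are hypothetical names standing in for whatever the touch pad controller actually provides, and the constants are illustrative; this is a sketch of the general idea, not a definitive implementation of flow chart 300.

```cpp
#include <array>
#include <cstdint>

constexpr std::size_t kNumSensors = 64;
constexpr uint16_t kMinDelta = 50;

// Hypothetical platform hooks, supplied by the touch pad controller firmware.
extern uint16_t read_sensor(std::size_t index);
extern bool nfc_is_transmitting();
extern void report_touch();
extern void sleep_until_next_tick();   // e.g., four ticks per second

void touch_pad_task() {
    std::array<uint16_t, kNumSensors> threshold{};
    for (std::size_t i = 0; i < kNumSensors; ++i) {
        threshold[i] = read_sensor(i);          // seed the thresholds at startup
    }
    for (;;) {
        sleep_until_next_tick();
        if (nfc_is_transmitting()) {
            continue;                            // halt reads and updates during a burst
        }
        std::size_t touched = 0;
        for (std::size_t i = 0; i < kNumSensors; ++i) {
            uint16_t sample = read_sensor(i);                       // a) read
            if (sample > threshold[i] + kMinDelta) ++touched;       // d) compare
            threshold[i] =                                          // b)/c) update average
                (uint16_t)(((uint32_t)threshold[i] * 7 + sample) / 8);
        }
        if (touched > 0) report_touch();
    }
}
```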
Figs. 3, 4, and 5 show flow diagrams for various ways of handling threshold values when a transmission from a nearby NFC antenna may affect sensor inputs from a touch pad. These figures each focus on when to update the threshold values, rather than when to register a touch on the touch pad. It may be assumed that a touch on the touch pad is registered when a) the touch pad is enabled, and b) a suitably small subset of the sensors in the touch pad sense a large enough increase in capacitance over the threshold value that a touch may be inferred. Fig. 3 shows a flow diagram of a method of disabling sensor values during a transmission, according to an embodiment of the invention. In flow chart 300, sensor values for the capacitive sensors in a touch pad are read at 310. Depending on the embodiment, these may be stored for future use and/or may immediately be made available for further use. Threshold values may then be computed at 320, using some form of moving average computation incorporating the results of several sets of previous sensor values, and the current threshold values may be stored for subsequent comparison. Various ways of calculating the threshold values have been previously discussed in this document. At 330, it may be determined whether a transmission from the NFC antenna is either imminent or actively in progress, or whether no such transmission is imminent/active. If a transmission is not active or imminent, the device may continue to read sensor values at 310, and compute and update new threshold values at 320. The loop through 310, 320, and 330 may continue as long as no NFC transmissions are determined to be taking place or anticipated to take place immediately. The determination of an NFC transmission may be made in any of several ways, including but not limited to: 1) A module in the device may control NFC transmission and therefore have advance knowledge that a transmission is about to take place, or current knowledge that a transmission is in progress. This module may either control the subsequent actions described in flow diagram 300, or trigger another module to control them. In some embodiments, a peripheral control hub (PCH) may be used to control transmissions and also control the touch pad. 2) A module in the device may monitor for NFC transmission and therefore obtain current knowledge that a transmission is in progress. This module may either control the subsequent actions described in flow diagram 300, or trigger another module to control them. This module may monitor for a transmission in several ways, such as but not limited to: a) monitor a signal that indicates whether a transmission is in progress, b) examine an indicator in a register or memory location that indicates whether a transmission is in progress, c) receive an interrupt that indicates whether a transmission is being started and/or terminated. 3) Readings from most or all of the sensors may suddenly become large enough to indicate either a transmission from the nearby antenna or a palm touch. If both events cause similar readings from the capacitive sensors, and both events are responded to in the same way, it may not matter whether the system can distinguish between the two events. If it is determined at 330 that a transmission is imminent or in progress, further sensor readings may be disabled at 340. This may be accomplished in several ways, such as but not limited to: 1) disabling the entire touch pad, 2) stopping the read function, 3) performing the reads but not keeping or using the sensor values, 4) etc. Sensor readings may remain disabled as long as the NFC transmission continues, that is, until it is determined at 350 that the NFC transmission has stopped. This determination may be made in various ways, such as but not limited to using the same techniques listed above to determine if a transmission is imminent or in progress. Once it has been determined that the NFC transmission has stopped, sensor reading may be re-enabled at 360, and the read/compute/update sequence may be restarted at 310-320. Since no readings were taken and/or used during the transmission, the set of recent sensor values that affect the threshold calculations (i.e., the readings used to calculate the moving average) may naturally incorporate some values read before the transmission and some values read after the transmission, until sufficient updates have occurred to effectively exclude the sensor values read before the transmission. 
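A minimal sketch of the flow of Fig. 3 is given below, reusing the hypothetical SensorChannel class from the earlier sketch. The helper names read_sensors and transmission_active, and the 250 ms period, are assumptions for the example rather than elements of the described device.

```python
import time

READ_INTERVAL = 0.25  # four readings per second, per the earlier example (assumed)

def fig3_loop(channels, read_sensors, transmission_active):
    """Sketch of flow chart 300: read (310), update thresholds (320), check for a
    transmission (330), and suspend reading while a transmission is active (340-360)."""
    while True:
        if transmission_active():
            # 340/350: reading stays disabled until the transmission ends.
            time.sleep(READ_INTERVAL)
            continue
        values = read_sensors()                      # 310
        for channel, reading in zip(channels, values):
            channel.update_threshold(reading)        # 320
        time.sleep(READ_INTERVAL)                    # then back to the check at 330
```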
Fig. 4 shows a flow diagram of a method of ignoring sensor values during a transmission, according to an embodiment of the invention. In flow chart 400, sensor values for the capacitive sensors in a touch pad are read at 410, and may be stored for further use. At 420, it may be determined whether a transmission from the NFC antenna is either imminent or actively in progress, or whether no such transmission is imminent/active. In some embodiments, the criteria for such a determination may be the same as described above for Fig. 3. If a transmission is not imminent or active, at 430 the stored sensor values may be used to calculate threshold values, and the newly calculated threshold values may be used to update (i.e., replace) the previous threshold values. The flow may then return to 410 to repeat the periodic process of reading and updating in the loop 410-420-430. This cycle may continue as long as there are no transmissions from the antenna. However, once a transmission from the antenna is detected or determined to be imminent, the calculation and updating of threshold values may be halted at 440. As long as the NFC transmission continues, the flow may continue to loop through 410-420-440. Even though the reading of sensor values at 410 may be continued during this loop, these new sensor values may not be used to compute/update the threshold values. Once the transmission stops, as determined at 420, the device may resume using new sensor values to compute and update the threshold values. In one embodiment, new threshold values are calculated based on the most recently stored threshold value (which was based only on pre-transmission sensor readings) and the most recent sensor value, in which case any readings taken during the transmission are automatically ignored. Fig. 5 shows a flow diagram of a method of raising threshold values during a transmission, according to an embodiment of the invention. In flow diagram 500, sensor values from the multiple capacitive sensors may be read at 510. At 520, new threshold values may be calculated and the threshold values may be updated by replacing the previous threshold values with the newly calculated threshold values. This is the standard way of updating, in which any change to the threshold values is incremental in nature due to the moving average method of calculating new threshold values. At 530, it may be determined whether the transmission status of a nearby NFC antenna has changed between transmitting and not transmitting (i.e., either from transmitting to not transmitting, or from not transmitting to transmitting). For example, if the transmission was previously off but is now on, the effect of radiation from the antenna may be assumed to have immediately and significantly increased the value of capacitance sensed by all, or at least most, of the sensors. Since this condition may be expected to continue as long as the transmission is active, this may be interpreted as a reason at 540 to replace the previously calculated threshold values with threshold values equal to the most recent sensor values. When the flow moves from 540 to 510, the next sensor values that are read at 510 may be assumed to be high if the transmission is still active. In such a case, new sensor values may be processed as usual at 520, with the newly calculated threshold values remaining fairly close to the values generated at 540. In this way, if the new sensor values are close to the currently high threshold values, no touch is inferred, even though the sensor values may be significantly higher than their long-term average values. On the other hand, if a subset of the sensors now indicates sensor values that are higher than the already-high threshold values, a touch may be inferred, even though the ongoing transmission has significantly affected the steady-state value of the sensors. 
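The threshold-snapping behavior of operations 510-540 might be sketched as follows, again reusing the hypothetical SensorChannel class from the earlier sketch. This is only an illustration of the idea: how the status change is detected, and the decision to refill the averaging window with the latest reading, are assumptions made for the sketch.

```python
def fig5_step(channels, readings, status_changed):
    """Sketch of one pass through operations 510-540 of Fig. 5: on a change in the
    antenna's on/off transmission status, snap each threshold to the latest reading;
    otherwise update thresholds incrementally and report any inferred touches."""
    touched = []
    for channel, reading in zip(channels, readings):
        if status_changed:
            # 540: jump the threshold to the most recent sensor value by refilling
            # the moving-average window with that value.
            channel.history.clear()
            channel.history.extend([reading] * channel.history.maxlen)
            touched.append(False)
        else:
            touched.append(channel.is_touched(reading))  # compare against the threshold
            channel.update_threshold(reading)            # 520: incremental update
    return touched
```

Running the snap on two consecutive reading cycles, as discussed below for sensors that have not finished settling, simply refills the window again with the newer, more settled readings.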
As long as the on/off transmission status of the antenna remains unchanged, control may loop through 510-520-530-510 in the usual manner, with a touch being inferred whenever a subset of the sensors indicates sensor values that are sufficiently higher than the threshold values. However, if the determination at 530 indicates a change in the transmission status, control may move to 540, where again the threshold values may be set to the most recent sensor values rather than being calculated in the usual incremental manner. For example, if the antenna was previously transmitting, but is now not transmitting, the threshold values may be immediately set to their most recent post-transmission values, which would be the values normally seen when the antenna was not transmitting and no touch was occurring. Control may then return to 510, and the processing at 510-520-530-510 may resume as it was before the transmission started, with both sensor values and threshold values at their normal no-transmission levels. Depending on the timing, it is possible that sensor values may not have completed their transition when a change in the transmission on/off status is detected. In this case, the most recent sensor values may not be an accurate indication of what the new threshold values should be. To address this possibility, in some embodiments the change process of operations 530-540 may be implemented over two (or more) consecutive sensor reading cycles. In that way, if the first pass through 540 produces incorrect threshold values, the subsequent pass through 540 will correct it. The flow of Fig. 5 may also be useful when rejecting a palm touch. Since a palm touch may affect many of the sensors (too many to be interpreted as a finger touch), the resulting sudden increase in this many sensor values may be handled in the same manner as the sudden activation of a transmission from the antenna. In some designs, radiation from the NFC antenna may interfere with the interface between the touch pad and the controller that controls the touch pad. For example, an I2C interface may be subject to disruption by transmissions from the antenna. In such conditions, the touch pad may be disabled during the transmissions. Fig. 6 shows a block diagram of a touch pad, radio, antenna, and touch pad controller, according to an embodiment of the invention. In module 600, a single controller 610 is shown to interface with both the touch pad 620 and with the radio 630. Controller 610 is labeled as a peripheral control hub (PCH), but this should not be seen as a limitation on either the name or the functionality of the controller. In some embodiments, controller 610 may be configured to control when touch pad 620 is enabled or disabled, and also to control when radio 630 does and does not transmit through antenna 640. In this manner, a single module may be able to disable the touch pad when the radio is transmitting, and enable the touch pad when the radio is not transmitting. Fig. 7 shows a flow diagram of a method of enabling/disabling a touch pad during transmissions, according to an embodiment of the invention. In flow chart 700, sensor values for the capacitive sensors in a touch pad are read at 710. Depending on the embodiment, these may be stored for future use and/or may immediately be made available for further use. 
"Threshold values may then be computed at 720, using some form of moving average computation incorporating the results of several sets of previous sensor values, and the current threshold values may be stored for subsequent comparison. Various ways of calculating the threshold values have been previously discussed in this document. At 730, it may be determined whether a transmission from the NFC antenna is either imminent or is actively in progress, or whether no such transmission is ininuneni/active. If it is neither, the device may continue to read sensor values at 710, and compote and update new threshold values at 720, The loop through 710, 720, and 730 may continue as long as no NFC transmissions are determined to be taking place or anticipated to take place immediately. The determination of an NFC transmission may be made in any of several ways, including but not limited to the techniques described for Fig. 3. If it's determined at 730 that a transmission is imminent or is in progress, the touch pad may be disabled at 740, By disabling the touch pad, no inputs from the touch pad may be received by the touch pad controller, so no corruption of those inputs may occur. The touch pad. may remain disabled until it. is determined at 750 that the NFC transmission Π has stopped. This determination may be made in various ways, such as but not limited to using the same techniques listed above to determine if a transmission is imminent or in progress. Once il has been determined that the NFC transmission has been completed, the touch pad may be re- enabled at 760, and the read/ca!eoiaie/opdate sequence may be restarted at 710-720. Since the touch pad was disabled during the transmission, no corrupted commands to the touch pad will have been received by the touch pad, and no corrupted inputs from the touch pad to the controller will have been received by the controller, during the transmission. The foregoing description is intended to be illustrative and not limiting. Variations will occur to those of skill in the art. Those variations are intended to be included in the various embodiments of the invention, which are limited only by the scope of the following claims. |
Capacitors, apparatus including a capacitor, and methods for forming a capacitor are provided. One such capacitor may include a first conductor, a second conductor above the first conductor, and a dielectric between the first conductor and the second conductor. The dielectric does not cover a portion of the first conductor, and the second conductor does not cover the portion of the first conductor not covered by the dielectric. |
WHAT IS CLAIMED IS: 1. A capacitor, comprising: a first conductor; a second conductor above the first conductor; and a dielectric between the first conductor and the second conductor, wherein the dielectric does not cover a portion of the first conductor, and the second conductor does not cover the portion of the first conductor not covered by the dielectric. 2. A capacitor, comprising N planar conductors disposed one above the other, each of the planar conductors including a respective portion not covered by the planar conductors disposed above thereof, wherein N is a natural number equal to or greater than two. 3. The capacitor of Claim 2, wherein each of the portions is an end portion of a respective one of the planar conductors, and the end portion together with another of the planar conductors form a stair step. 4. The capacitor of Claim 2, wherein each of the planar conductors including a respective portion not covered by the planar conductors disposed above thereof comprises each of the planar conductors including at least two respective portions not covered by the planar conductors disposed above thereof. 5. The capacitor of Claim 2, wherein the planar conductors respectively extend in two dimensions, and wherein each of the planar conductors is smaller than its underlying planar conductors in at least one of the two dimensions. 6. The capacitor of Claim 2, further comprising one or more dielectrics, each of the one or more dielectrics interposed between a respective two adjacent ones of the planar conductors. 7. The capacitor of Claim 6, wherein the planar conductors and the one or more dielectrics respectively extend in two dimensions, and wherein each of the planar conductors is equal to or smaller than its underlying planar conductors in at least one of the two dimensions. 8. The capacitor of Clam 2, wherein alternate ones of the planar conductors constitutes a first set, and remaining ones of the planar conductors constitutes a second set, and wherein the capacitor further comprises a respective contact coupled to each of the portions of the planar conductors of the first set, each of the contacts extending in a direction substantially perpendicular to a surface of the planar conductors to which the corresponding contact is coupled. 9. The capacitor of Clam 8, wherein the contacts comprise first contacts and further comprising a respective second contact coupled to each of the portions of the planar conductors of the second set, each of the second contacts extending in a direction substantially perpendicular to a surface of the planar conductors to which the corresponding second contact is coupled. 10. The capacitor of Claim 9, wherein the planar conductors collectively form a stacked body, wherein the portions collectively form a stair step at a first side of the stacked body and a stair step at a second side of the stacked body, wherein each of the first contacts are disposed at the first side of the stacked body, andwherein each of the second contacts are disposed at the second side of the stacked body. 11. The capacitor of Claim 9, wherein the portions collectively form a stair step, and wherein each of the first contacts are disposed on the stair step adjacent to a first lateral side of the stair step, and wherein each of the second contacts are disposed on the stair step adjacent to an opposing second lateral side of the stair step. 12. 
The capacitor of Claim 9, further comprising first and second contact electrodes respectively coupled to at least some of the first contacts and at least some of the second contacts. 13. The capacitor of Claim 12, wherein the planar conductors collectively form a stacked body, wherein at least some of the portions collectively form a stair step at a first side of the stacked body and a stair step at a second side of the stacked body, and wherein the first and second contact electrodes are respectively disposed above the first and second sides of the stacked body. 14. The capacitor of Claim 12, wherein at least some of the portions collectively form a stair step at a side of the capacitor, and wherein the first contact electrode is disposed above the stair step adjacent to a first lateral side of the stair step, and wherein the second contact electrode is disposed above the stair step adjacent to an opposing second lateral side of the stair step. 15. The capacitor of Claim 12, further comprising third and fourth contact electrodes respectively coupled to at least some of the first contacts and at least some of the second contacts, respectively. 16. An apparatus comprising, a memory cell region; and a peripheral region adjacent to the memory cell region, the peripheral region comprising a capacitor including N planar conductors disposed one above the other, each of the planar conductors including a respective portion not covered by the planar conductors disposed above thereof, wherein N is a natural number equal to or greater than two. 17. The apparatus of Claim 16, wherein the memory cell region comprises a stacked body, the stacked body comprising a plurality of word line layers made of a conductive material and respectively disposed one above the other, and a plurality of dielectric layers, each of the dielectric layers being interposed between a respective two adjacent word line layers of the plurality of word line layers. 18. The apparatus of Claim 16, wherein the memory cell region comprises one or more strings of memory cells formed through the stacked body. 19. The apparatus of Claim 16, wherein the apparatus comprises a memory device. 20. The apparatus of Claim 16, wherein the apparatus comprises a system, wherein the system comprises a controller coupled to a memory device, wherein the memory device comprises the memory cell region and the peripheral region. 21. A method for fabricating a capacitor, the method comprising: alternately stacking N planar conductors with N- 1 dielectrics on a substrate to form a stacked body thereon, wherein N is a natural number greater than or equal to 2; removing a portion of the stacked body to uncover a respective portion of each of the planar conductors. 22. The method for Claim 21, wherein removing a portion of the stacked body comprises: iteratively etching the stacked body with a mask having a width that decreases with each iteration to form a stair step at a side of the stacked body. 23. The method of Claim 21 , wherein alternate ones of the planar conductors constitutes a first set, and remaining ones of the planar conductors constitutes a second set, and wherein the method further comprises forming a first contact and a second contact, the first contact being coupled to the respective portion of a planar conductor of the first set, the second contact being coupled to the respective portion of a planar conductor of the second set. 24. 
The method of Claim 23, wherein forming a first contact and a second contact comprises: forming an interlayer dielectric layer over the stacked body; forming openings in the interlayer dielectric layer; and depositing a conductive material into the openings to form the first contact and the second contact therein. 25. The method of Claim 23, further comprising forming first and second contact electrodes respectively above the first and second contacts. 26. The method of Claim 25, wherein removing a portion of the stacked body comprises removing portions of the stacked body to form a stair step at a first side of the stacked body and a stair step at a second side of the stacked body, and wherein forming first and second contact electrodes comprises forming the first and second contact electrodes above the first and second sides of the stacked body, respectively. 27. The method of Claim 25, wherein removing a portion of the stacked body comprises removing portions of the stacked body to form a stair step at a side of the stacked body, and wherein forming first and second contact electrodes comprises forming the first and second contact electrodes above the stair step adjacent to first and second lateral sides of the stair step, respectively. 28. The method of Claim 25, wherein forming first and second contacts comprises forming a plurality of first contacts and a plurality of second contacts, and wherein forming first and second contact electrodes further comprises forming third and fourth contact electrodes respectively disposed on and coupled to at least some of the first contacts and at least some of the second contacts, respectively. 29. The method of Claim 21, wherein alternately stacking N planar conductors comprise alternately stacking N planar conductors with N-l dielectrics in a memory cell array region and a peripheral region to form the stacked body, and wherein removing a portion of the stacked body comprises removing portions of the stacked body in the memory cell array region to form a three dimensional memory cell array structure. |
CAPACITORS, APPARATUS INCLUDING A CAPACITOR AND METHODS FOR FORMING A CAPACITOR PRIORITY APPLICATION [0001] This application claims priority benefit from U.S. Application No. 13/214,902, filed 22 August 2011, which is incorporated herein by reference in its entirety. TECHNICAL FIELD [0002] The present disclosure relates generally to capacitors and, in a particular embodiment, to a parallel plate capacitor for storing and providing electrical energy to a variety of devices, including semiconductor devices. BACKGROUND [0003] Capacitors are a basic electrical element used in storing and providing electrical energy to other electrical elements. They are used in most of today's electrical and/or electronic devices and continue to expand its range of applications into new types of hi-tech devices, such as semiconductor devices, as technologies rapidly evolve. While there are a vast array of capacitors (e.g., metal oxide field effect transistor (MOSFET) capacitors) available to be used in such semiconductor devices, as the density of the semiconductor devices have exponentially and steadily increased over the years, there have been incessant and increasing demands for capacitors that are smaller in size but greater in storage capacity. BRIEF DESCRIPTION OF THE FIGURES [0004] FIG. 1 A shows a perspective view of one illustrative embodiment of a capacitor. [0005] FIG. IB shows a cross-sectional view of the capacitor shown in FIG. 1A taken along line A- A'. [0006] FIG. 1C shows a planar view of the capacitor shown in FIG. 1A.[0007] FIG. 2 A shows a perspective view of another illustrative embodiment of a capacitor. [0008] FIG. 2B shows a cross-sectional view of the capacitor shown in FIG. 2 A taken along line B-B'. [0009] FIG. 2C shows a planar view of the capacitor shown in FIG. 2A. [0010] FIG. 2D shows a schematic circuit diagram of the capacitor shown in FIG. 2A. [0011] FIG. 3A shows a perspective view of yet another illustrative embodiment of a capacitor. [0012] FIG. 3B shows a cross-sectional view of the capacitor shown in FIG. 3A taken along line C-C. [0013] FIG. 3C shows a cross-sectional view of the capacitor shown in FIG. 3A taken along line D-D'. [0014] FIG. 3D shows a planar view of the capacitor shown in FIG. 3A. [0015] FIGS. 4A and 4B show a planar view of illustrative embodiments of capacitors including four contact electrodes. [0016] FIG. 5 shows a cross-sectional view of an illustrative embodiment of a flash memory device including a capacitor. [0017] FIG. 6 shows an illustrative embodiment of a charge pump including multiple capacitors. [0018] FIG. 7 shows an example flow diagram of an illustrative embodiment of a method for fabricating a capacitor. [0019] FIGS. 8A-8G are a series of diagrams illustrating the example method shown in FIG. 7 and the structures fabricated by the example method. [0020] FIG. 9 shows a schematic diagram of an illustrative embodiment of a system including a non-volatile memory device. DETAILED DESCRIPTION [0021] Techniques relating to a capacitor are provided. In one embodiment, the capacitor may include a first conductor, a second conductor above the first conductor, and a dielectric between the first conductor and the second conductor. 
The dielectric does not cover a portion of the first conductor; and the second conductor does not cover the portion of the first conductor not covered by the dielectric.[0022] In another embodiment, a capacitor may include N planar conductors disposed one above the other, each of the planar conductors including at least one first portion not covered by the planar conductor disposed above thereof, wherein N is a natural number equal to or greater than two. [0023] The foregoing embodiments are illustrative only and are not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description. [0024] In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the Figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein. [0025] FIG. 1 A shows a perspective view of one illustrative embodiment of a capacitor. FIG. IB shows a cross-sectional view of the capacitor shown in FIG. 1A taken along line A- A'. FIG. 1C shows a planar view of the capacitor shown in FIG. 1A. Referring to FIGS. 1A-1C, a capacitor 100 may include a substrate 1 10, and a stacked body 120 provided on substrate 110. While not expressly illustrated in FIGS. 1A-1C for the sake of simplicity, a dielectric material, such as silicon oxide, may be interposed between substrate 1 10 and stacked body 120. Stacked body 120 may include first and second planar conductors (e.g., first and second planar conductive layers 121a and 121b; hereinafter may be collectively referred to as planar conductive layers 121) disposed one above the other and substantially parallel to each other, and a dielectric (e.g., a dielectric layer 122) interposed therebetween. In one embodiment, second planar conductive layer 121b and dielectric layer 122 may be disposed on first conductive layer 121a to respectively cover only a portion(s) of the upper surface of underlying first conductive layer 121a, such that first conductive layer 121a may include one or more upper surface portions that are not covered by overlying second planar conductive layer 121band dielectric layer 122 (e.g., first and second upper surface portions 12a and 12b). By way of a non-limiting example, one or more upper surface portions 12a and 12b may be the end portions of first planar conductive layer 121a, such that they, together with second planar conductive layer 121b and first dielectric layer 122, may form one or more stair steps at one or more sides of capacitor 100. [0026] In one embodiment, as shown in FIGS. 1A-1C, second planar conductive layer 121b may be smaller than its underlying first planar conductive layer 121a in at least one of its two extending dimensions. For convenience of description, an xyz coordinate system is shown in FIG. 1A. 
The x and y axes in this coordinate system respectively indicate the two orthogonal directions along which planar conductive layers 121 may extend, and the z axis indicates the direction that is orthogonal to the x and y axes. In the above coordinate system, second planar conductive layer 121b may be smaller than its underlying first planar conductive layer 121a in at least one of the two dimensions in the directions of the x and y axes shown in FIG. 1A. Further, in the above embodiment, second planar conductive layer 121b may be equal to or smaller than its underlying dielectric layer 122 in at least one of its two extending dimensions (e.g., the two dimensions in the direction of the x and y axes shown in FIG. 1A). However, it should be appreciated that a stacked body in accordance with the present disclosure is not limited thereto, and depending on particular embodiments, the second planar conductive layer may be equal to or greater in size in at least one of its two extending dimensions than at least some of its underlying first planar conductive layer and/or dielectric layer, and still may be arranged in a manner that covers only a portion(s) of the upper surface of the first conductive layer. [0027] In one embodiment, capacitor 100 may further include first and second contacts 130a and 130b (which, hereinafter, may be collectively referred to by example as metal lines 130) (e.g., metal contacts, poly-silicon contacts, etc.) respectively coupled to the upper surfaces of first and second planar conductive layers 121a and 121b, and first and second contact electrodes 140a and 140b (hereinafter may be collectively referred to as contact electrodes 140) respectively disposed on and coupled to first and second metal lines 130a and 130b. First and second metal lines 130a and 130b may be buried inside an interlayer dielectric layer 150, and first and second contact electrodes 140a and 140b may be disposed on interlayer dielectric layer 150 to be respectively coupled to first and second metal lines 130a and 130b. [0028] By way of a non-limiting example, metal lines 130 may respectively extend in the direction substantially perpendicular to the upper surfaces of planar conductive layers 121 (e.g., in the direction indicated by the z axis shown in FIG. 1A). It should be appreciated that the term "substantially perpendicular," as used herein, includes, but is not limited to, a range of from -30 to +30 degrees around the perpendicular direction. While metal lines 130 are illustrated as an elongated rectangular structure, they may take one of a variety of different shapes. For example, metal lines 130 may be, but are not limited to, a cylinder or a tapered column. [0029] The aforementioned elements of capacitor 100 may respectively be made of various different materials. For example, substrate 110 may be fabricated from one or more materials, which include, but are not limited to, sapphire, glass, or semiconductor materials (e.g., silicon (Si), germanium (Ge), and gallium arsenide (GaAs)). Planar conductive layers 121 and dielectric layer 122 may be respectively made of a conductive material (e.g., polysilicon) and an oxide of the conductive material (e.g., silicon oxide) in any available manner. Metal lines 130 and contact electrodes 140 may respectively be made of tungsten and aluminum. Interlayer dielectric layer 150 may be made of borophosphosilicate glass (BPSG). 
It should be appreciated, however, that the aforementioned materials are given for illustrative purposes only, and other materials may be used as appropriate depending on each implementation. [0030] In the embodiment described with reference to FIGS. 1A-1C, planar conductive layers 121 and dielectric layer 122 are arranged in a way such that first conductive layer 121a includes one or more portions (e.g., first and second upper surface portions 12a and 12b) not covered by overlying second conductive layer 121b and dielectric layer 122, so as to allow forming of one or more contacts (e.g., first metal line 130a) and one or more contact electrodes (e.g., first contact electrode 140a) from top-down. The aforementioned arrangement not only allows its easy fabrication using one or more of existing semiconductor fabrication techniques (e.g., concurrently with other semiconductor devices, such as an array of three-dimensional non-volatile memory cells), but also allows attaining highcapacitance, to a level hitherto not possible, by, for example, stacking similarly configured additional planar conductive layers and dielectric layers therein. [0031] In this regard, FIGS. 2A-2D show another illustrative embodiment of a capacitor. FIG. 2A shows a perspective view of another illustrative embodiment of a capacitor. FIG. 2B shows a cross-sectional view of the capacitor shown in FIG. 2 A taken along line B-B'. FIG. 2C shows a planar view of the capacitor shown in FIG. 2 A. FIG. 2D shows a schematic circuit diagram of the capacitor shown in FIG. 2A. Referring to FIGS. 2A-2D, a capacitor 200 may include a substrate 210, a stacked body 220, a plurality of contacts 230a-230d (which, hereinafter, may be collectively referred to by example as metal lines 230), first and second contact electrodes 240a and 240b (which hereinafter may be collectively referred to as contact electrodes 240), and an interlayer dielectric layer 250. While not expressly illustrated in FIGS. 2A-2D for the sake of simplicity, a dielectric material, such as silicon oxide, may be interposed between substrate 210 and stacked body 220. Stacked body 220 may include N planar conductors (e.g. N planar conductive layers 221a-221d, which hereinafter may be collectively referred to as planar conductive layers 221) respectively disposed one above the other and N-l dielectrics (e.g. N-l dielectric layers 222a-222c, which hereinafter may be collectively referred to as dielectric layers 222) respectively interposed between two adjacent conductive layers 221. While stacked body 220 of the illustrative embodiment shown in FIGS. 2A-2D include four planar conductive layers 221 and three dielectric layers 222 (i.e., N is equal to 4), a stacked body in accordance with the present disclosure is not limited thereto, and may include any number of planar conductive layers and dielectric layers (e.g., N may be equal to a natural number equal to or greater than 2). It should be noted that numerals in FIGS. 2A-2D that are similar to those in FIGS. 1A-1C generally identify similar components, and unless context dictates otherwise, the descriptions provided with reference to FIGS. 1A-1C generally apply to corresponding components in FIGS. 2A-2D. [0032] Planar conductive layers 221 and dielectric layers 222 may be alternately disposed one above the other to respectively cover only a portion(s) of the upper surfaces of its underlying planar conductive layers 221 , such that each of planar conductive layers 221 may include at least one portion (e.g. 
upper surface portions 22a-22f) not covered by its overlying planar conductive layers 221 and dielectriclayers 222. By way of a non-limiting example, the at least one upper surface portion may be an end portion of one of planar conductive layers 221, such that the upper surface portions collectively form one or more stair steps at one or more sides of stacked body 220 of capacitor 200 (e.g., the two steps respectively at the right and left sides of stacked body 220 viewed in the direction of the y axis shown in FIG. 2A). [0033] In one embodiment, as shown in FIGS. 2A-2C, each of planar conductive layers 221 may be smaller than all of its underlying planar conductive layers 221 in at least one of its two extending dimensions (e.g., the two dimensions in the direction of x and y axes shown in FIG. 2A). Further, in the above embodiment, each of planar conductive layers 221 may be equal to or smaller than all of its underlying planar dielectric layers 222 in at least one of its two extending dimensions (e.g., the two dimensions in the direction of x and y axes shown in FIG. 2A). However, it should be appreciated that a stacked body in accordance with the present disclosure is not limited thereto, and depending on particular embodiments, may include one or more planar conductive layers that are equal to or greater in size than all or some of its underlying planar dielectric layers and/or dielectric layers in at least one of its two extending dimensions. [0034] One set of the plurality of contacts 230 (e.g., metal lines 230a and 230c) may be disposed at one side of stacked body 220 of capacitor 200 (e.g., the left side of stacked body 220 viewed in the direction of the y axis shown in FIG. 2A) and may be respectively coupled to the upper surface portions of alternate ones of planar conductive layers 221 (e.g., odd-numbered planar conductive layers 221) that are not covered by respective overlying planar conductive layers 221 and dielectric layers 222 (e.g., upper surface portions 22a and 22e of planar conductive layers 221a and 221c). Further, another set of the plurality of contacts 230 (e.g., metal lines 230b and 230d) may be disposed at another side of stacked body 220 of capacitor 200 (e.g., the right side of stacked body 220 viewed in the direction of the y axis shown in FIG. 2A) and may be respectively coupled to one of the upper surface portions of the remaining ones of the planar conductive layers (e.g., even-numbered planar conductive layers 221), some of which are not covered by respective overlying planar conductive layers 221 and dielectric layers 222 (e.g., upper surface portions 22d of planar conductive layers 221b). First contact electrode 240a may be disposed on and coupled to the one set of theplurality of contacts lines 230, whereas second contact electrode 240b may be disposed on and coupled to the another set of the plurality of contacts 230. Contacts 230 may be buried inside interlayer dielectric layer 250, and contact electrodes 240 may be disposed on interlayer dielectric layer 250 to be coupled to contacts 230. [0035] As can be seen from FIG. 2D, the aforementioned arrangement of stacked body 220 is the equivalent of N-l capacitors connected in parallel between two nodes A and B respectively corresponding to contact electrodes 240a and 240b. In the example shown in FIG. 2D, CI, C2, and C3 respectively represent capacitances provided between planar conductive layers 221a and 221b, planar conductive layers 221b and 221c, and planar conductive layers 221c and 221d. 
Between each pair of conductive layers, one capacitance is provided at the center portions of the planar conductive layer pair and another capacitance is provided at the end portions of the planar conductive layer pair. In some embodiments, the total capacitance provided by stacked body 220 may be expressed by Equation 1 shown below. [Equation 1] C_total = C_center + C_side = β × (W × L × P_n + W × a × (S − 2) × (S − 1)), where C_total is the total capacitance provided by stacked body 220, C_center is the capacitance provided between the center portions of planar conductive layers 221 in stacked body 220, C_side is the capacitance provided between the end portions of planar conductive layers 221 in stacked body 220, β is the unit capacitance between a planar conductive layer pair, W is the length of planar conductive layers 221 along the y axis shown in FIG. 2A, L is the length of planar conductive layer 221d along the x axis shown in FIG. 2A, P_n is the number of dielectric layers 222 (i.e., N-1), S is the number of planar conductive layers 221 (i.e., N), and a is the length of an upper surface portion 22a-22d along the x axis shown in FIG. 2A. As can be appreciated from FIG. 2D and Equation 1 shown above, the total capacitance provided by capacitor 200 is proportional to the number of stacked layers and thus may be increased by stacking similarly configured additional planar conductive layers and dielectric layers in its stacked body 220. [0036] In the illustrative embodiment shown in FIGS. 2A-2D, contact electrodes 240a and 240b are disposed above two stair steps located at two opposing ends of capacitor 200. This is the same for the illustrative embodiment shown in FIGS. 1A-1C. It should be appreciated, however, that contact electrodes in accordance with the present disclosure (and contacts to be coupled thereto) may be disposed in a variety of different ways. [0037] In this regard, FIGS. 3A-3D show another illustrative embodiment of a capacitor. FIG. 3A shows a perspective view of yet another illustrative embodiment of a capacitor. FIG. 3B shows a cross-sectional view of the capacitor shown in FIG. 3A taken along line C-C'. FIG. 3C shows a cross-sectional view of the capacitor shown in FIG. 3A taken along line D-D'. FIG. 3D shows a planar view of the capacitor shown in FIG. 3A. Referring to FIGS. 3A-3D, a capacitor 300 may include a substrate 310, a stacked body 320, a plurality of contacts 330a-330d (which, hereinafter, may be collectively referred to by example as metal lines 330), first and second contact electrodes 340a and 340b (which hereinafter may be collectively referred to as contact electrodes 340), and an interlayer dielectric layer 350. Stacked body 320 may include N planar conductors (e.g., N planar conductive layers 321a-321d, which hereinafter may be collectively referred to as planar conductive layers 321, where N is a natural number greater than or equal to 2) respectively disposed one above the other and N-1 dielectrics (e.g., N-1 dielectric layers 322a-322c, which hereinafter may be collectively referred to as dielectric layers 322) respectively interposed between two adjacent conductive layers 321a-321d. While not expressly illustrated in FIGS. 3A-3D for the sake of simplicity, a dielectric material, such as silicon oxide, may be interposed between substrate 310 and stacked body 320. Numerals in FIGS. 3A-3D similar to those in FIGS. 1A-1C and 2A-2D generally identify similar components, and unless context dictates otherwise, the descriptions provided with reference to FIGS. 
1A-1C and 2A-2D generally apply to corresponding components in FIGS. 3A-3D. For the sake of simplicity, some of the features of capacitor 300 that are similar to those of capacitors 100 and 200 may not be described in the ensuing descriptions. [0038] In one embodiment, planar conductive layers 321 and dielectric layers 322 may be alternately disposed one above the other to respectively cover end portions at one side of each upper surface of its underlying planar conductive layers 321,such that each of planar conductive layers 321 may include an end portion at the opposing side of its upper surface (e.g. upper surface portions 32a-32c) that is not covered by its overlying planar conductive layers 221 and dielectric layers 322, while the end portion on the one side of its upper surface is completely covered by its overlying planar conductive layers 321 and dielectric layers 322. In the above embodiment, planar conductive layers 321 and dielectric layers 322 collectively form only one stair step at the one side of stacked body 320 of capacitor 300 (e.g., the step at the left side of capacitor 300 viewed in the direction of the y axis shown in FIG. 3A), as opposed to two stair steps along the both sides of each of stacked bodies 120 and 220 in capacitors 100 and 200 of FIGS. 1A-1C and 2A- 2D. [0039] In the above embodiment, one set of the plurality of contacts 330 (e.g., metal lines 330a and 330c) may be disposed adjacent to one lateral side of the step formed by stacked body 320 (e.g., the direction along the x axis shown in FIG. 3A) and coupled to the upper surface portions of a first set of conductors, such as odd-numbered planar conductive layers 321 (e.g., upper surface portions 32a and 32c of planar conductive layers 321a and 321c). Further, another set of the plurality of contacts 330 (e.g., metal lines 330b and 330d) may be disposed adjacent to the other lateral side of the step formed by stacked body 320 (e.g., the direction along the x axis shown in FIG. 3A) and coupled to the upper surface portions of a second set of conductors, such as even-numbered planar conductive layers 321, some of which are not covered by its overlying planar conductive layers 321 and dielectric layers 322 (e.g., upper surface portions 32d of planar conductive layers 321b). [0040] Further, first and second contact electrodes 340a and 340b may both be disposed along a direction substantially parallel to the lateral sides of the step formed on stacked body 320 (e.g., the direction along the x axis shown in FIG. 3A) and be spaced apart from each other by a prescribed distance in a direction perpendicular to the lateral sides of the step formed on stacked body 320 (e.g., the direction along the y axis shown in FIG. 3A), such that first contact electrode 340a may be coupled to a set of contacts coupled to the upper surface portions of odd-numbered planar conductive layers 321, while second contact electrode 340b may be coupled to another set of contacts coupled to the upper surface portions of even-numbered planar conductive layers 321. Contacts 330 may be buried insideinterlayer dielectric layer 350, and contact electrodes 340 may be disposed on interlayer dielectric layer 350 to be coupled to contacts 330. [0041] In the illustrative embodiment shown in FIGS. 1A-1C, 2A-2D and 3A-3D, capacitors 100-300 each include two contact electrodes. It should be appreciated, however, that capacitors in accordance with the present disclosure may include three or more contact electrodes. [0042] In this regard, FIGS. 
4A and 4B show a planar view of illustrative embodiments of capacitors including four contact electrodes. Referring to FIGS. 4A and 4B, each of capacitors 401 and 402 include four contact electrodes (i.e., contact electrodes 441a-441d and 442a-442d) that are disposed in a direction perpendicular to the lateral sides of the steps formed in capacitors 401 and 402 (e.g., the direction along the x axis shown in FIGS. 4A and 4B). Two contact electrodes that are disposed on contacts coupled to a first set of planar conductors, such as odd-numbered planar conductive layers (e.g., contact electrodes 441a and 441c coupled to contacts 431a in FIG. 4A, and contact electrodes 442a and 442c coupled to contacts 432a in FIG. 4B) are alternately arranged with the other two contact electrodes that are disposed on contacts coupled to a second set of planar conductors, such as even-numbered planar conductive layers (e.g., contact electrodes 441b and 44 Id coupled to contacts 43 lb in FIG. 4A, and contact electrodes 442b and 442d coupled to contacts 432b in FIG. 4B). In FIG. 4A, contacts 43 la and 43 lb are disposed on only one of the two side of capacitor 401 viewed in the direction perpendicular to the lateral sides of the steps thereon (e.g., each of the left and right sides of capacitor 401 viewed in the direction of the y axis shown in FIG. 4A). In FIG. 4B, however, contacts 432a and 432b are disposed on both sides of capacitor 402 viewed in the direction perpendicular to the lateral sides of the steps thereon (e.g., both sides of capacitor 402 viewed in the direction of the y axis shown in FIG. 4B). The arrangements of contact electrodes shown in FIGS. 4A and 4B may provide, further to the capacitance provided by their stacked bodies, additional capacitance between the contact electrodes. [0043] The capacitors described in conjunction with the preceding figures may be fabricated into a variety of semiconductor devices to be used as a passive circuit element therein. Especially, by virtue of their structural configurations hitherto described, capacitors in accordance with the present disclosure may be fabricatedconcurrently with other semiconductor elements, such as a three-dimensional memory cell array structure of a flash memory device. In this regard, FIG. 5 shows a cross-sectional view of an illustrative embodiment of a flash memory device including a capacitor in accordance with the present disclosure. Referring to FIG. 5, a flash memory device 500 may include a memory cell array region 51 and a periphery region 52. [0044] Memory cell array region 51 may include a three-dimensional memory cell array structure 501. Three-dimensional memory cell array structure 501 may include a substrate 560, a dielectric layer 561 located on substrate 560, and a stacked body 570 located on dielectric layer 561 and alternately stacked with planar conductors (e.g. planar conductive layers 571a-571d) and dielectrics (e.g. dielectric layers 572a-572c). Stacked body 570 may include one or more pillar- shaped semiconductor structures (e.g., a pillar-shaped semiconductor structure 56) that may respectively function as a string of three dimensional flash memory cells. Each pillar-shaped memory structure may include, for example, a silicon pillar (e.g., a silicon pillar 57) and an oxide-nitride-oxide (ONO) film (e.g., an ONO film 58) encircling the silicon pillar. Each planar conductive layer 571a-571d functions as a word line for controlling the portion of pillar-shaped memory structure 56 it encircles. 
For example, the portion of ONO film 58 surrounded by planar conductive layer 571a may function as a transistor that turns on and off depending on the voltage applied by planar conductive layer 571a functioning as a word line thereto. Each planar conductive layer 571a-571c is respectively connected to contact electrodes 590a-590c through contacts 580a-580c formed in a dielectric layer 599 to be supplied with program and other types of voltages. The concrete configurations of a three-dimensional memory cell array structure are well known in the pertinent art, and are not further described for the sake of simplicity. [0045] Periphery region 52 may be formed with a variety of structures/circuits for operating three-dimensional memory cell array structure 501. For example, periphery region 52 may include one or more capacitors in accordance with the present disclosure to supply necessary voltages to three-dimensional memory cell array structure 501 and/or other parts of flash memory device 500. In this regard, FIG. 5 shows a portion 502 of such a capacitor. The capacitor may include a substrate 510, a dielectric layer 511 located on substrate 510, and a stacked body 520 located on dielectric layer 511 and alternately stacked with planar conductors (e.g., planar conductive layers 521a-521d) and dielectrics (e.g., dielectric layers 522a-522c). Each planar conductive layer 521a-521c is respectively connected to one of multiple contact electrodes (e.g., a contact electrode 540) through one of contacts 530 (e.g., contacts 530a and 530b) formed in a dielectric layer 550 to be supplied with a charging voltage. As can be appreciated from FIG. 5, the three-dimensional memory cell array structure and the capacitor have similar structural configurations; thus, the capacitor may be fabricated in conjunction with and/or concurrently with the three-dimensional memory cell array structure in the memory cell array region. Further, the above similarity allows fabrication of the capacitor, for example, by using the structure(s) that are naturally formed in the peripheral region during the fabrication of the three-dimensional memory cell array structure in the memory cell array region. This will become clearer as we describe an example fabrication process of a capacitor in accordance with this disclosure with regard to FIGS. 7 and 8A-8G. [0046] The capacitors described in conjunction with the preceding figures may be used for a variety of devices formed in a peripheral region of a semiconductor device. By way of a non-limiting example, capacitors in accordance with the present disclosure may be used as capacitive elements in a charge pump for providing voltages, for example, to contact electrodes 590a-590c of three-dimensional memory cell array structure 501 shown in FIG. 5. In this regard, FIG. 6 shows an illustrative embodiment of a charge pump including multiple capacitors in accordance with the present disclosure. Referring to FIG. 6, a charge pump 600 may include a plurality of pump stages 610-630 respectively coupled to capacitors 611 and 612, 621 and 622, and 631 and 632. Capacitors 611, 621, and 631 may be provided with a clock pulse CLKa, while capacitors 612, 622, and 632 may be provided with a clock signal CLKb, which is of the same magnitude as clock signal CLKa but shifted in phase by 180 degrees. The above capacitors may store energy when clock pulse CLKa or CLKb is at Vcc [V], and discharge the energy stored therein when clock pulse CLKa or CLKb is at 0 [V]. 
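As a rough illustration of how such alternately clocked capacitors can build up voltage stage by stage, the following sketch gives the textbook idealization of a Dickson-style pump. The (N + 1) × Vcc estimate, the function name, and the 3.3 V example are assumptions made for the sketch and are not values taken from this description.

```python
def ideal_charge_pump_output(vcc, num_stages):
    """Back-of-the-envelope estimate for an ideal Dickson-style charge pump: with
    ideal switches and no load, each stage can add up to one Vcc of boost, so an
    N-stage pump approaches (N + 1) * Vcc. Real pumps deliver less because of
    switch voltage drops and load current."""
    return (num_stages + 1) * vcc

# Hypothetical example: three stages (such as stages 610-630) from a 3.3 V supply.
print(ideal_charge_pump_output(3.3, 3))  # 13.2 V upper bound, ignoring losses
```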
Each pump stage 610-630 is made of one or more transistors that, when provided with voltage signals discharged from the capacitors, turns on and conveys the voltage signals provided thereto as output. The capacitors in FIG. 6 may providegreater capacitance and store more energy than conventional capacitors (e.g., MOSFET capacitors), while being smaller in size. This allows increasing the voltage output of charge pump 600 without adding an additional pump stage(s) thereto (i.e., without increasing the size and cost of charge pump 600). [0047] A method for fabricating a capacitor is explained hereafter with reference to FIGS. 7 and 8A-8G. FIG. 7 shows an example flow diagram of an illustrative embodiment of a method for fabricating a capacitor. Referring to FIG. 7, a substrate may be prepared (block 710). The substrate, for example, may be prepared by using any of the materials described above with reference to FIGS. 1A-1C (e.g., materials described in the paragraph [0028]). In one embodiment, the substrate may be a substrate for a flash memory device including a memory cell array region and a peripheral region. In block 720, N planar conductors (e.g. planar conductive layers) are alternately stacked with -1 dielectrics (e.g. N-l dielectric layers) on the substrate to form a stacked body thereon. In the embodiment where the substrate is a substrate for a flash memory device, the planar conductive lavers and the dielectric layers may be alternately stacked in both the memory cell array region and the peripheral region. In this regard, FIG. 8A shows a cross-sectional view of an illustrative embodiment of portions 819 and 869 of a stacked body respectively formed in a peripheral region 82 and a memory cell array region 81 located on substrates 860 and 810. In FIG. 8A, the planar conductive layers and the dielectric layers are respectively referenced with numerals 821a-821d and 822a-822c (for stacked body portion 820) and 871a-871d and 872a-872c (for stacked body portion 870). Further, in some embodiments, as shown in FIG. 8A, dielectric layers 861 and 811 may be located on substrates 860 and 810, respectively. [0048] In block 730, one or more portions of the stacked body are removed to uncover one or more portions of each of the planar conductive layers that were previously covered by their overlying planar conductive and/or dielectric layers. By way of a non-limiting example, the stacked body may be etched with a mask having a width that decreases with each iteration (i.e., a mask slimmed with each iteration) to form a stair step at one or more sides of the stacked body. In the embodiment related to a flash memory device, the portions of the stacked body in both the memory cell array region and the peripheral region may be etched, for example, concurrently, so as to provide two separate stacked bodies in therespective regions. FIG. 8B shows a cross-sectional view of an illustrative embodiment of portions 820 and 870 of the stacked body respectively iteratively etched with a mask (not shown) having a width that decreases with each iteration (i.e., Wl, W2, and W3) to be formed into two separate stacked bodies 820 and 860 each having a stair step. There are various techniques known in the art, including the aforementioned mask slimming technique, for fabricating the aforementioned stair-stepped structure in a stacked body, all of which may be applied to a stacked body of the present disclosure. The technical details thereon are not further described for the sake of simplicity. 
[0049] In the embodiment related to a flash memory device, before or after block 730, the stacked body in the memory cell array region may be processed to form therein one or more pillar-shaped semiconductor structures that may respectively function as a string of three dimensional flash memory cells. Each pillar-shaped memory structure may include, for example, a silicon pillar (e.g., epitaxial silicon or polysilicon) and an oxide-nitride-oxide (ONO) film encircling the silicon pillar. In this regard, FIG. 8C shows a cross-sectional view of an illustrative embodiment of a pillar-shaped semiconductor structure 86 including a silicon pillar 87 and an ONO film 88. The techniques for fabricating the aforementioned pillar-shaped semiconductor structure are well known in the pertinent art, and are not further described for the sake of simplicity. [0050] In block 740, one or more contacts are formed on the stacked body. The one or more contacts may be substantially perpendicular to the upper surfaces of the stacked body. One set of contacts may be coupled to the uncovered portions of a first set of planar conductors (e.g. odd-numbered planar conductive layers), whereas another set of contacts may be coupled to the uncovered portions of a second set of planar conductors (e.g. even-numbered planar conductive layers). [0051] In one embodiment, the contacts may be formed by forming an interlay er dielectric layer over the stacked body, and removing one or portions of the interlay er dielectric layer above at least some of the one or more second portions of planar conductive layers to define one or more openings (e.g., holes) therethrough, and depositing conductive materials into the one or more openings to form the contacts therein. In the embodiment related to a flash memory device, one or more contacts may also be formed in the memory cell region. For example, the one or more contacts in the memory cell region may be formed concurrentlywith the contacts in the periphery region, by also depositing an interlayer dielectric layer in the memory cell region, forming one or more openings in the interlayer dielectric layer, and depositing a conductive material into the openings to form one or more contacts therein. In this regard, FIG. 8D shows a cross- sectional view of an illustrative embodiment of interlayer dielectric layers 850 and 899 respectively formed in periphery and memory cell array regions 82 and 81. FIG. 8E shows a cross-sectional view of an illustrative embodiment of openings 829a and 829b formed in periphery region 82 located on substrate 810, and openings 879a-879c formed in memory cell array region 81 located on substrate 860. Further, FIG. 8F shows a cross-sectional view of an illustrative embodiment of contacts 830a and 830b and contacts 880a-880c respectively formed in periphery and memory cell array regions 82 and 81. [0052] In block 750, two or more contact electrodes are formed on the contacts. For example, a first contact electrode may be formed on a first set of contacts coupled to the odd-numbered planar conductive layers and a second contact electrode may be coupled to a second set of contacts coupled to even-numbered planar conductive layers. In one embodiment, the first and second electrodes may be formed above the first and second sides of the stacked body at which stair steps are respectively formed. In another embodiment, the first and second contact electrodes may be formed above a stair step adjacent to first and second lateral sides of the stair step, respectively. 
Further, in yet another embodiment, in addition to the first and second electrodes, additional contact electrodes may be formed. For example, third and fourth contact electrodes may be respectively formed to be disposed on and coupled to at least some of the first set of contacts and at least some of the second set of contacts. The third contact electrode may be interposed between and adjacent to the second and fourth contact electrodes to provide capacitance between at least the second and third contact electrodes or the third and fourth contact electrodes.
[0053] In the embodiment related to a flash memory device, one or more contact electrodes may also be formed in the memory cell region. In this regard, FIG. 8G shows a cross-sectional view of an illustrative embodiment of a contact electrode 840 formed in peripheral region 82 for a capacitor in accordance with the present disclosure and contact electrodes 890a-890c for the three-dimensional memory cell array formed in memory cell array region 81.
[0054] As can be appreciated from FIGS. 8A-8G, a capacitor may be formed by using a portion of a stacked body of alternating planar conductive layers and dielectric layers that is naturally formed in the peripheral region as well as the memory cell array region during the process of fabricating a flash memory device. Further, by virtue of its structural configuration, a capacitor in accordance with the present disclosure may be formed in conjunction with and/or concurrently with the memory cell array structure (e.g., a three-dimensional memory cell array structure) in the memory cell array region.
[0055] FIG. 9 shows a schematic diagram of an illustrative embodiment of a system including a non-volatile memory device (e.g., a flash memory device 500 of FIG. 5). A system 900 may be used in devices such as, for example, a personal digital assistant (PDA), a laptop or portable computer with wireless capability, a web tablet, a wireless telephone, a pager, an instant messaging device, a digital music player, a digital camera, or other devices that may be adapted to transmit and/or receive information either wirelessly or over a wire connection. The system 900 may be used in any of the following systems: a wireless local area network (WLAN) system, a wireless personal area network (WPAN) system, or a cellular network.
[0056] The system 900 may include a controller 910, an input/output (I/O) device 920 (e.g., a keypad, display), the flash memory device 500 of FIG. 5, a wireless interface 940, and a static random access memory (SRAM) 960, coupled to each other via a bus 950. A battery 980 may supply power to the system 900 in one embodiment. The memory device may include a NAND memory, a flash memory, a NOR memory, or the like.
[0057] The controller 910 may include, for example, one or more microprocessors, digital signal processors, micro-controllers, or the like. The flash memory device 500 may be used to store messages transmitted to or by the system 900. The flash memory device 500 may also optionally be used to store instructions that are executed by controller 910 during the operation of the system 900, and may be used to store user data either generated, collected or received by the system 900 (such as image data). The instructions may be stored as digital information and the user data, as disclosed herein, may be stored in one section of the memory as digital data and in another section as analog information.
As another example, a given section at one time may be labeled as such and store digital information, and then later may be relabeled and reconfigured to store analog information.
[0058] The I/O device 920 may be used to generate a message. The system 900 may use the wireless interface 940 to transmit and receive messages to and from a wireless communication network with a radio frequency (RF) signal. Examples of the wireless interface 940 may include an antenna, or a wireless transceiver, such as a dipole antenna, although the scope of the present disclosure is not limited in this respect. Also, the I/O device 920 may deliver a voltage reflecting what is stored as either a digital output (if digital information was stored), or as analog information (if analog information was stored). While an example in a wireless application is provided above, embodiments of the present invention may be used in non-wireless applications as well.
[0059] It should be appreciated that the structural and functional configurations of a capacitor, a semiconductor device, and/or a system and their elements described in conjunction with FIGS. 1A-9 are indicative of a few ways in which a capacitor, a semiconductor device, and/or a system may be implemented. It should be appreciated that a capacitor in accordance with this disclosure may be applied to any type of devices and systems, including types of memories other than flash memory.
[0060] One skilled in the art will appreciate that, for this and other processes and methods disclosed herein, the functions performed in the processes and methods may be implemented in differing order. Furthermore, the outlined steps and operations are only provided as examples, and some of the steps and operations may be optional, combined into fewer steps and operations, or expanded into additional steps and operations without detracting from the essence of the disclosed embodiments.
[0061] The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations can be made without departing from its spirit and scope, as will be apparent to those skilled in the art. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims. The present disclosure is to be limited only by the terms of the appended claims, along with the full scope of equivalents to which such claims are entitled. It is to be understood that this disclosure is not limited to particular methods, reagents, compounds, compositions or biological systems, which can, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
[0062] With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
[0063] It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" will be understood to include the possibilities of "A" or "B" or "A and B."
[0064] In addition, where features or aspects of the disclosure are described in terms of Markush groups, those skilled in the art will recognize that the disclosure is also thereby described in terms of any individual member or subgroup of members of the Markush group.
[0065] As will be understood by one skilled in the art, for any and all purposes, such as in terms of providing a written description, all ranges disclosed herein also encompass any and all possible subranges and combinations of subranges thereof. Any listed range can be easily recognized as sufficiently describing and enabling the same range being broken down into at least equal halves, thirds, quarters, fifths, tenths, etc. As a non-limiting example, each range discussed herein can be readily broken down into a lower third, middle third, and upper third, etc. As will also be understood by one skilled in the art, all language such as "up to," "at least," and the like include the number recited and refer to ranges which can be subsequently broken down into subranges as discussed above. Finally, as will be understood by one skilled in the art, a range includes each individual member. Thus, for example, a group having 1-3 cells refers to groups having 1, 2, or 3 cells. Similarly, a group having 1-5 cells refers to groups having 1, 2, 3, 4, or 5 cells, and so forth.
[0066] From the foregoing, it will be appreciated that various embodiments of the present disclosure have been described herein for purposes of illustration, and that various modifications may be made without departing from the scope of the present disclosure. Accordingly, the various embodiments disclosed herein are not intended to be limiting, with the true scope and spirit being indicated by the following claims. |
In-circuit emulation of an integrated circuit permits location and identification of optional emulation resources. Each emulation resource is assigned a memory address. The in-circuit emulation generates a special memory access to memory addresses. If the special memory access corresponds to the address of an emulation resource, the emulation resource responds with an acknowledgement and a corresponding identification number. Nonemulation circuits do not respond to the special memory access. This technique permits manufacture of plural integrated circuits with corresponding sets of emulation resources, where an emulation program can determine the available resources for the particular integrated circuit. The emulation resources preferably include a set of emulation resources common to all integrated circuits, with predetermined memory addresses and predetermined identification numbers, as well as optional emulation resources. The emulation program can locate and identify all emulation resources by generating the special memory access to each address of a range of addresses. |
What is claimed is: 1. A method of locating and identifying an emulation resource in an integrated circuit comprising the steps of: assigning a unique memory address within a device data memory map to at least one nonemulator resource; assigning a unique memory address within said device data memory map to at least one emulation resource; assigning a unique identification number to each of said at least one emulation resource; generating a special memory access to memory addresses within said device data memory map; each of said at least one nonemulator resource not responding upon receipt of said special memory access at said assigned memory address; and each of said at least one emulation resource responding with an acknowledgement of said memory access and said identification number corresponding to the emulation resource upon receipt of said special memory access at said assigned memory address. 2. The method of claim 1, further comprising: generating said special memory access to each address of a plurality of addresses in order to locate and identify all emulation resources within said plurality of addresses. |
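Purely as an illustration of the claimed method, and not as any actual tool or library interface, the short host-side model below walks a range of the device data memory map with the special memory access: nonemulator resources never answer, while each emulation resource answers with an acknowledgement and its identification number. The addresses, identification numbers and function names used here are invented for the example.

```c
#include <stdint.h>
#include <stdio.h>

/* Host-side model only: what the special memory access returns. */
typedef struct {
    int      ack;  /* 1 if an emulation resource answered             */
    uint32_t id;   /* identification number, valid only when ack == 1 */
} probe_t;

/* Hypothetical memory map with two emulation resources; every other
 * (nonemulator) address ignores the special memory access entirely. */
static probe_t special_memory_access(uint32_t addr)
{
    probe_t r = { 0, 0 };
    if (addr == 0x1000u) { r.ack = 1; r.id = 0; }
    if (addr == 0x1010u) { r.ack = 1; r.id = 1; }
    return r;
}

int main(void)
{
    /* Per claim 2: generate the special access to each address of a range. */
    for (uint32_t addr = 0x1000u; addr <= 0x10FFu; addr++) {
        probe_t r = special_memory_access(addr);
        if (r.ack)
            printf("emulation resource id %u located at 0x%04X\n",
                   (unsigned)r.id, (unsigned)addr);
    }
    return 0;
}
```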
CITATION OF RELATED APPLICATIONS
This application claims priority under 35 USC §119(e)(1) of Provisional Application No. 60/120,960, filed Feb. 19, 1999.
This application is related to co-assigned applications, all of which are incorporated herein by reference:
Ser. No. 09/154,385 entitled "METHOD OF INITIALIZING A CPU CORE FOR EMULATION" filed Sep. 16, 1998, now U.S. Pat. No. 6,167,385 issued Dec. 26, 2000;
Ser. No. 09/483,367, entitled "EMULATION SUSPEND MODE WITH DIFFERING RESPONSE TO DIFFERING CLASSES OF INTERRUPTS" claiming priority from U.S. Provisional Application No. 60/120,809 filed Feb. 19, 1999, now U.S. Pat. No. 6,553,513;
Ser. No. 09/481,852, entitled "EMULATION SUSPENSION MODE WITH STOP MODE EXTENSION" claiming priority from U.S. Provisional Application No. 60/120,809 filed Feb. 19, 1999, now U.S. Pat. No. 6,567,933;
Ser. No. 09/483,568, entitled "EMULATION SUSPEND MODE HANDLING MULTIPLE STOPS AND STARTS" claiming priority from U.S. Provisional Application No. 60/120,809 filed Feb. 19, 1999, now U.S. Pat. No. 6,564,339;
Ser. No. 09/483,697, entitled "EMULATION SUSPEND MODE WITH FRAME CONTROLLED RESOURCE ACCESS" claiming priority from U.S. Provisional Application No. 60/120,809 filed Feb. 19, 1999, now U.S. Pat. No. 6,557,116;
Ser. No. 09/482,902, entitled "EMULATION SUSPEND MODE WITH INSTRUCTION JAMMING" claiming priority from U.S. Provisional Application No. 60/120,809 filed Feb. 19, 1999;
Ser. No. 09/483,570, entitled "SOFTWARE EMULATION MONITOR EMPLOYED WITH HARDWARE SUSPEND MODE" claiming priority from U.S. Provisional Application No. 60/120,683 filed Feb. 19, 1999;
Ser. No. 09/483,783, entitled "EMULATION SYSTEM WITH ADDRESS COMPARISON UNIT AND DATA COMPARISON UNIT OWNERSHIP ARBITRATION" claiming priority from U.S. Provisional Application No. 60/120,791 filed Feb. 19, 1999;
Ser. No. 09/481,853, entitled "EMULATION SYSTEM WITH PERIPHERALS RECORDING EMULATION FRAME WHEN STOP GENERATED" claiming priority from U.S. Provisional Application No. 60/120,810 filed Feb. 19, 1999; and
Ser. No. 09/483,321 entitled "EMULATION SYSTEM EMPLOYING SERIAL TEST PORT AND ALTERNATIVE DATA TRANSFER PROTOCOL" claiming priority from U.S. Provisional Application No. 60/120,667 filed Feb. 19, 1999.
TECHNICAL FIELD OF THE INVENTION
The technical field of this invention is complex integrated circuits including embedded digital processor cores, and more particularly in-circuit emulation of integrated circuits with embedded digital processor cores.
BACKGROUND OF THE INVENTION
Programmable digital processors such as microprocessors and digital signal processors have become very complex machines. Testing these programmable digital processors has also become a complex task. It is now common for semiconductor manufacturers to build single integrated circuit programmable digital processors with millions of transistors. The current trend is to devote many of these transistors to on-chip cache memories. Even so, the number of logic circuits and their complex relationships makes testing such integrated circuits an increasingly difficult task.
A trend in electronics makes this testing problem more difficult. Single integrated circuit programmable digital processors are providing more and more of the electronics of many end products. A single integrated circuit used in this way typically includes a programmable digital processor, read only memory storing the base program, read/write memory for operation and a set of peripherals selected for the particular product. This trend is known as system level integration.
In the ultimate system level integration, all the electronics are embodied in a single integrated circuit. This level of integration is now achieved in electronic calculators. Many electronic calculators consist of a single integrated circuit, a keyboard, a display, the battery or solar panel power source and a plastic case. Such integration provides less "visibility" into the operation of the programmable digital signal processor. Because the address and data busses of the digital processor are no longer brought out to the device pins, it is more difficult to determine the behavior of the embedded processor from external connections.
Another trend in electronics makes this testing problem more difficult. Many new product applications require differing types of processing. Often control processes and user interface processes are better handled with a different programmable digital processor than digital signal processes. An example is wireless telephones. Many coding/decoding and filtering tasks are best handled by a digital signal processor (DSP). Other tasks such as dialing and controlling user inputs and outputs are best handled by microprocessors such as a RISC (Reduced Instruction Set Computer) processor. There is a trend for a system integrated circuit to include both a RISC processor and a DSP. These two processors will typically operate independently and employ differing instruction set architectures. Thus there may be more than one programmable digital processor on a single integrated circuit, each having limited visibility via the device pins.
Another problem is product emulation when employing these programmable digital processors. Product development and debugging is best handled with an emulation circuit closely corresponding to the actual integrated circuit to be employed in the final product. In-circuit emulation (ICE) was developed in response to this need. An integrated circuit with ICE includes auxiliary circuits, not needed in the operating product, that are included solely to enhance emulation visibility. In the typical system level integration circuit, these emulation circuits use only a very small fraction of the number of transistors employed in operating circuits. Thus it is feasible to include ICE circuits in all integrated circuits manufactured. Since every integrated circuit can be used for emulation, inventory and manufacturing need not differ between a normal product and an emulation enhanced product.
As a result of these trends there is a need in the art for integrated circuits which are easier to test and easier to emulate.
SUMMARY OF THE INVENTION
This invention permits location and identification of optional emulation resources in in-circuit emulation of an integrated circuit. Each emulation resource is assigned a memory address. The in-circuit emulation generates a special memory access to memory addresses. If the special memory access corresponds to the address of an emulation resource, the emulation resource responds with an acknowledgement and a corresponding identification number. Nonemulation circuits do not respond to the special memory access. This technique permits manufacture of plural integrated circuits with corresponding sets of emulation resources, where an emulation program can determine the available resources for the particular integrated circuit. The emulation resources preferably include a set of emulation resources common to all integrated circuits, with predetermined memory addresses and predetermined identification numbers, as well as optional emulation resources.
The emulation program can locate and identify all emulation resources by generating the special memory access to each address of a range of addresses.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of this invention are illustrated in the drawings, in which:
FIG. 1 illustrates the environment of the debugging system of this invention which is known in the art;
FIG. 2 illustrates the known 14-pin JTAG header used to interface the target system to the access adapter;
FIG. 3 illustrates an emulation level view of the target system;
FIG. 4 illustrates an electrical connection view of the coupling between the access adapter and the target system;
FIG. 5 illustrates the possible operation states in the debugging environment of the preferred embodiment of this invention;
FIG. 6 illustrates the inputs and outputs of the debug frame counter; and
FIG. 7 illustrates in greater detail circuits located on each megamodule concerned with emulation.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
FIG. 1 illustrates the environment of the debugging system of this invention. This environment connects high level debugging software executing on a debug host computer 1 to a low level debug interface supported by the target system 3. In this invention the target system 3 may include more than one programmable digital processor and possibly more than one such programmable digital processor on a single integrated circuit. In this application the term programmable digital processor is meant to encompass devices commonly known as microprocessors, microcontrollers and digital signal processors. The target system 3 provides a standard interface to the access adapter 2.
Debug host computer 1 consists of a computer, for example a PC, running a CPU core specific software debugger as one of its tasks. The debug host computer 1 allows the user to issue high level commands such as setting a breakpoint, single stepping the programmable digital processor in target system 3 and displaying the contents of a memory range.
Access adapter 2 is a combination of hardware and software that connects the debug host computer 1 to target system 3. Access adapter 2 utilizes one or more hardware interfaces and/or protocols to convert messages created by user interface commands of debug host computer 1 into debug commands operable on target system 3. Access adapter 2 can be either loosely coupled or tightly coupled to the debug host computer 1. In the case of a PC host computer, access adapter 2 can be an XDS 510 scan controller attached directly to the PC bus. This implements a tightly coupled configuration requiring the PC to perform even the lowest level actions necessary to manage debug activity. In loosely coupled configurations, the debug host computer 1 CPU communicates with another processor over a high bandwidth interface such as a SCSI, LAN or other interface. An example of this is an XDS 510WS controller connected to the target system debug interface and to the PC through a SCSI port. In this case access adapter 2 is a debug subsystem manager and handles the detailed manipulation of the target debug capability, and debug host computer 1 sends high level commands to the debug subsystem. Access adapter 2 returns data and error conditions to debug host computer 1. Current PC operating systems may not be able to service the low level debug requirements continuously.
Thus it may be advantageous to partition the problem into display and user interface sections and control sections.
The target system 3 contains one or more programmable digital processor cores. The programmable digital processor core(s) contain hardware designed explicitly to ease debugging. This special hardware of target system 3 is the lowest element of the system debug environment 10. The programmable digital processor core debug facilities allow the user to control the program execution, and to examine or change system memory and core CPU resources in real time.
The interface of access adapter 2 to target system 3 is preferably an extension to the IEEE 1149.1 (JTAG) test standard. The JTAG standard includes 5 primary signals known as nTRST, TCK, TMS, TDI, and TDO. The JTAG standard typically employs three additional signals: Test Clock Out (TCKO), the target supply (Vdd) and ground (GND). The preferred embodiment of this invention also employs the two extension signals nET1 and nET0. Table 1 lists these signals, states whether the signal is an input, an output or both, and gives the descriptive name of the signal.

TABLE 1
  Pin        Input/Output   Description
  nTRST      I              Test Logic Reset Not
  TCK        I              Test Clock
  TMS        I              Test Mode Select
  TDI        I              Test Data Input
  TDO        O              Test Data Output
  TCKO       O              Test Clock Out
  PD(Vdd)    I              Target Power Supply
  GND        I/O            Ground
  nET1       I/O            Emulation and Test 1 Not
  nET0       I/O            Emulation and Test 0 Not

The signal nTRST is called Test Logic Reset Not. A low applied to this pin causes all test and debug logic in the target device to be reset along with the IEEE 1149.1 interface.
The signal TCK is called Test Clock. This signal is used to drive the IEEE 1149.1 state machine and logic. The same TCK supplied to the target device is supplied to this pin.
The signal TMS is called Test Mode Select. This signal directs the next state of the IEEE 1149.1 test access port state machine.
The signal TDI is called Test Data Input. This signal is the scan data input to the target device.
The signal TDO is called Test Data Output. This signal is the scan data output of the target device.
FIG. 2 illustrates a 14-pin JTAG header used to interface target system 3 to access adapter 2. The JTAG header requires three additional pins and further includes two extension pins. The signal TCKO is called Test Clock Out. This signal is a test clock supplied by the scan controller to the target system. This test clock can be used as the system TCK source, or it can be ignored with the TCK source being generated by the target system. In many target systems, TCKO is simply connected to TCK and used as the test clock. The PD(Vdd) signal is called the Target Power Supply. This is used as a power detect input to access adapter 2. The JTAG header also includes ground connections.
The preferred embodiment of this invention includes an extension to the JTAG interface. Two pins nET0 and nET1 serve as a two pin trigger channel function. This function supplements the serial access capability of the standard interface with continuous monitoring of device activity. The two added pins create debug and test capabilities that cannot be created with the standard interface. The nET0 signal is called Emulation and Test 0 Not. This signal helps create a trigger to channel zero.
Similarly, the nET1 signal is called Emulation and Test 1 Not. This signal helps create a trigger to channel one. These channels will be further explained below.
FIG. 3 illustrates an emulation level view of target system 3. Target system 3 may include plural devices 11, 13 and 15. FIG. 3 illustrates details of example device 13, which includes plural megamodules 21, 23 and 25. FIG. 3 illustrates details of example megamodule 23. Example megamodule 23 includes debug and test control unit 30 and plural device domains. These device domains include central processing unit (CPU) core 31, analysis unit 33, memory 35 and debug/test direct memory access (DT_DMA) unit 37.
Debug and test control unit 30 contains the IEEE interface. It provides a bridge between the Extended IEEE Interface and the debug and test capability distributed through the domains. Debug and test control unit 30 can independently control the domains 31, 33, 35 and 37. In other words, one or more domains can be scanned or controlled while other domains continue to operate in their normal functional way.
FIG. 4 illustrates an electrical connection view of the coupling between access adapter 2 and target system 3. FIG. 4 shows the connections of the various signals of the JTAG header 5 illustrated in FIG. 2. All these signals are connected to scan controller 41. The signals nTRST, TCK and TMS are connected to two example megamodules 31 and 33. FIG. 4 illustrates the optional connection of TCKO to the target system clock SYSCLK. The scan input TDI connects to a scan input of megamodule 31. The scan output of megamodule 31 supplies the scan input of megamodule 33. The scan output of megamodule 33 supplies the scan output TDO. The two extension signals nET0 and nET1 control megamodules 31 and 33 via merge unit 32. These extension signals are monitored by test equipment 43.
The debugging environment illustrated in FIGS. 1 to 4 permits the user to control application execution by any programmable digital processor of target system 3. Typical control processes include: keyboard directives such as run, halt and step; software breakpoints using op-code replacement; internal analysis breakpoints specified by program counter or watchpoints specified by data accesses; and externally generated debug events.
Actions such as decoding a software breakpoint instruction (DSTOP), the occurrence of an analysis breakpoint or watchpoint (ASTOP), or the occurrence of a debug host computer event (HSTOP) are referred to as debug events. Debug events cause execution to suspend. Debug events tied to the execution of specific instructions are called breakpoints. Debug events generated by memory references are called watchpoints. External debug events can also suspend execution. Debug events cause entry into the Debug State.
FIG. 5 illustrates the possible operation states in the debugging environment of the preferred embodiment of this invention. These operation states are execute (EXE) 101, debug suspend (DSUSP) 102 and interrupt during debug suspend (IDS) 103.
Execute state 101 corresponds to the ordinary operation of target device 3. In the execute state 101 instructions are executed by the programmable digital processor in normal fashion. There are no outstanding debug suspend conditions. A low logic level applied to the nTRST terminal or a software directive requesting functional run forces the operational state to execute state 101.
In execute state 101 the pipeline fetches and executes instructions and processes interrupts in a normal way.
The operational state transits from execute state 101 to debug suspend state 102 upon a debug event. The debugging environment of the preferred embodiment of this invention allows the suspension of program execution at points defined by breakpoints, watchpoints, and debug software directives, provided the application is in an allowable debug suspend window. In general, debug events are allowed at an instruction boundary, when reset is inactive and no interrupts are active. Program execution suspends at an instruction boundary and the operational state changes to debug suspend state 102. When any debug condition is not met, the operational state remains in execute state 101 and no debug event processing occurs. The debugging environment permits debug event processing in the delayed slots of delayed branch instructions. Debug events occurring outside the debug suspend window create a debug pending condition. This condition suspends program execution when the application enables debug interrupts by opening the debug suspend window.
In the debug suspend state 102 background instruction execution is inactive. This state permits debug/emulation visibility into the state of target device 3 while background execution is suspended. In debug suspend state 102, the program counter (PC) and status bits are generally preserved at their values prior to the debug event. The PC points to the instruction to be executed next. When execution resumes, the instruction referenced by the PC and those following it are fetched for execution. This is facilitated by flushing the front end of the pipeline upon entry into debug suspend state 102 from execute state 101.
The operational state may return to execute state 101 by a debug run directive. This may be either an unconditional run directive or a single step run directive. A single step run directive enters execute state 101 long enough to advance the instruction pipeline one stage and then returns to debug suspend state 102.
It is important to note that debug suspend state 102 consumes no CPU bandwidth because no monitor code executes as a result of suspending execution. The debug architecture provides for complete register and memory accessibility without the aid of a monitor program. The user may change the operating mode at any time without restrictions.
Certain interrupts transit the operation state from debug suspend state 102 to interrupt during suspend (IDS) state 103. These interrupts are defined by a separate interrupt mask independent of the normal interrupt mask. Those interrupts defined as high priority interrupts (HPI) or foreground interrupts cause the operation state to enter the interrupt during suspend state 103 from the debug suspend state 102. The debug suspend state 102 enables high priority interrupts independent of the state of the global interrupt enable bit or of software run directives. This enables debugging of background tasks while the target device 3 continues to service a real time application via high priority interrupts.
The interrupt pipeline jam for such a high priority interrupt moves the operational state to interrupt during suspend state 103. This jam causes an extra word to be pushed on the stack containing the debug status describing the reason the debug suspend state 102 entry occurred.
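The transitions among the three operational states described so far can be condensed into a small state machine. The sketch below is only a behavioral summary under the simplifying assumptions stated in the comments; the enum and function names are invented, and the exits from the interrupt during suspend state, which depend on the IDS bit and the return from interrupt instruction, are covered in the next passage.

```c
/* Behavioral summary (names invented) of the operational states of
 * FIG. 5 and the transitions described above.  Exits from the IDS
 * state via RTI or ABORTI are handled in a later sketch. */
typedef enum { EXE, DSUSP, IDS } op_state_t;

typedef enum {
    EV_DEBUG_EVENT,   /* breakpoint, watchpoint or host stop            */
    EV_RUN_DIRECTIVE, /* unconditional run issued by debug software     */
    EV_SINGLE_STEP,   /* run for one pipeline stage, then suspend again */
    EV_HPI_TAKEN,     /* high priority (foreground) interrupt taken     */
    EV_NTRST_LOW      /* nTRST asserted or functional run requested     */
} op_event_t;

op_state_t next_state(op_state_t s, op_event_t e)
{
    if (e == EV_NTRST_LOW)                   return EXE;   /* forced run   */
    if (s == EXE   && e == EV_DEBUG_EVENT)   return DSUSP; /* suspend      */
    if (s == DSUSP && e == EV_RUN_DIRECTIVE) return EXE;   /* resume       */
    if (s == DSUSP && e == EV_SINGLE_STEP)   return DSUSP; /* via EXE once */
    if (s == DSUSP && e == EV_HPI_TAKEN)     return IDS;   /* foreground   */
    return s;   /* every other combination leaves the state unchanged */
}
```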
Interrupt during suspend state 103 differs from the execute state 101 in that the interrupt processing creates a thread, linking the interrupt execution to the debug suspend state 102 as described above. A digital frame counter (DFC) is incremented upon each high priority interrupt taken. The high priority interrupt sets an interrupt during debug state bit (IDS), which is part of the CPU status. The IDS bit sets after the context save stores the previous value on the stack with the status word. When returning from an interrupt the IDS bit indicates whether to re-enter debug suspend state 102. If the IDS bit is set, the interrupt occurred during a debug suspend state 102 and the operational state should return to the debug suspend state 102. If the IDS bit is not set, the interrupt occurred during the execute state 101 and the operational state should return to execute state 101. Upon returning from the interrupt, the PC and status return to their state before the interrupt unless the interrupt service routine has purposely modified values on the stack. This is required because it is possible for multiple interrupts to occur and be serviced while the device is in debug suspend state 102.
The digital frame counter is decremented upon each return from interrupt. This count permits the debug environment to track the status of the suspended background task. For example, a taken high priority interrupt may change the machine state and thus the current machine state would not reflect the suspended background task. However, if the digital frame counter were zero, then the debug environment is assured that no interrupts have temporarily changed the machine state.
The interrupt during suspend state 103 is exited at the end of the interrupt service routine. A normal end of an interrupt involves a return from interrupt instruction (RTI). Upon execution of a return from interrupt instruction, the machine status is popped from the stack. As noted above, the interrupt during debug state bit indicates whether the interrupt occurred during execute state 101 or debug suspend state 102. The operational state returns to the former state as indicated by the interrupt during debug state bit. The prior value of the program counter is reloaded to recover the prior machine status. Execution of a return from interrupt instruction also decrements the digital frame counter. Because it is possible to receive a higher priority interrupt while servicing a prior interrupt, more than one interrupt level may be pending. The digital frame counter indicates the current interrupt level. This is useful to debug processing as the machine status may be changed by the multiple interrupts. The debug software can read the digital frame counter and supply a debug level identity to identify the currently targeted interrupt level. Execution and register operations target a specific debug level. Memory operations can target a specific debug level or bypass the level comparison. In particular, the status of the background task suspended on initial entry into debug suspend state 102 can only be reliably determined if the digital frame counter is zero. The maximum number of levels of the digital frame counter corresponds to the size of the interrupt hierarchy.
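The bookkeeping just described - the digital frame counter counting up on each taken high priority interrupt and down on each return from interrupt, with the saved IDS bit deciding where execution resumes - can be modeled as shown below. The structure and function names are invented for illustration; only interrupts taken while the core sits in debug suspend state 102 are modeled, since interrupts taken in the execute state are serviced in the normal way.

```c
/* Illustrative model (names invented) of the digital frame counter and
 * IDS bit bookkeeping for high priority interrupts taken while the
 * core is in debug suspend state 102. */
typedef enum { STATE_EXE, STATE_DSUSP, STATE_IDS } core_state_t;

typedef struct {
    core_state_t state;
    unsigned     dfc;    /* digital frame counter (FIG. 6)          */
} core_t;

typedef struct {
    int ids_bit;         /* saved on the stack with the status word */
} frame_t;

/* High priority interrupt taken while suspended: set IDS, count up. */
frame_t hpi_taken_while_suspended(core_t *c)
{
    frame_t f = { 1 };          /* interrupt arrived in debug suspend */
    c->dfc++;
    c->state = STATE_IDS;
    return f;
}

/* Return from interrupt (RTI): count down, resume the recorded state. */
void return_from_interrupt(core_t *c, frame_t f)
{
    if (c->dfc > 0)
        c->dfc--;
    c->state = f.ids_bit ? STATE_DSUSP : STATE_EXE;
}

/* Abort interrupt (ABORTI): an unrecoverable error in the service
 * routine breaks the path back to debug suspend. */
void abort_interrupt(core_t *c)
{
    c->dfc   = 0;               /* counter forced to zero, IDS cleared */
    c->state = STATE_EXE;
}
```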
The IDS bit and the digital frame counter preserve a path back to the debug suspend state 102 when the application concludes the interrupt service routine with a return from interrupt instruction.
The interrupt during suspend state 103 transits to the execute state 101 upon execution of an abort interrupt (ABORTI) instruction. The abort interrupt instruction would ordinarily be used only on detection of an unrecoverable error in the interrupt service routine. The path back to the debug suspend state is broken upon execution of the abort interrupt instruction. The status of the interrupt during debug state bit and the digital frame counter are ignored in this case. In particular, the interrupt during debug state bit is cleared and the digital frame counter is set to zero. This mechanism enables recovery to the background task when a high priority interrupt service routine has an unrecoverable error.
Interrupts can be serviced while the debug software views or modifies the CPU state. The debug state shown to the programmer reflects the machine state when there is no interrupt service routine active. The debug software works with on-chip debug support to ensure the high priority interrupts are transparent to a debug session. Likewise this hardware and software combination works to make debug activity transparent to high priority interrupt service routines. Note that program execution can actually be suspended in multiple locations if it is desired to break within a time critical interrupt while still allowing others to be serviced.
FIG. 6 illustrates the inputs and outputs of debug frame counter 200. Debug frame counter 200 is reset to zero by either entry into execute state 101 or occurrence of an ABORTI abort interrupt instruction. Debug frame counter 200 counts up on each taken interrupt. Debug frame counter 200 counts down on each return from interrupt.
FIG. 7 illustrates in greater detail circuits located on each megamodule concerned with emulation. These include address comparison unit (ACU) 310, data comparison unit (DCU) 320 and external comparison unit (ECU) 330. The address comparison unit 310 provides breakpoint, counter, parallel signature analysis and data logging support. The data comparison unit 320 provides breakpoint and parallel signature analysis support. The external comparison unit 330 controls external inputs to the event functions. Interaction with the programmable digital processor within the megamodule is handled by the memory unit 301. The application and debug software share access to address comparison unit 310, data comparison unit 320 and external comparison unit 330 through their registers.
Memory unit 301 provides the ability to read and write memory. For reads, it sources an address (AUXA) as selected by multiplexer 303 and receives either program read data (PD) or memory read data (MD) as selected by multiplexer 315. For writes it sources an address (AUXA) selected by multiplexer 303 and data (AUXD) selected by multiplexer 305.
Address comparison unit 310 contains two 32 bit registers AREF and AMSK and one 16 bit register ACNTL. The AREF and AMSK registers are preferably 32 bit data registers that can be addressed as sixteen bit registers in 16 bit architectures. Their function is defined by the ACNTL register described in Table 2.
The ACNTL register configures the AREF and AMSK registers in a number of modes, including: DMA reads and writes for data logging, downloads and uploads; event generation such as breakpoints, watchpoints and nET0 and nET1 triggers; counts for benchmarking, watchdog timing and period counters; parallel signature analysis functions for test; off, performing no function while ownership by the application or debug is unchanged; and unclaimed, performing no function while either the application or debug can obtain ownership. Address comparison unit 310 is responsive to the bus input selected by multiplexer 311.
The address comparison unit 310 configures for event generation where the AMSK register serves as an address mask register and the AREF register serves as an address reference. The address comparison unit 310 generates a debug suspend request when the ACNTL register ASTOP and AFEN bits are TRUE. The AMSK field defines the address comparison unit 310 debug suspend request rudeness level. The ability to generate an event without generating a debug suspend request allows the address comparison unit 310 event to be used as a trigger generator through the nET0 and nET1 pins without altering core execution. This function supports breakpoints, watchpoints, and trigger generation. Table 2 shows the function specific mode bit definitions of register ACNTL for event generation.

TABLE 2
  Function      Bit(s)   Description
  ASELA [1:0]   08:07    Select Address - Select address for event comparison
                         00 - Select no address
                         01 - Select program address
                         10 - Select memory address
                         11 - Reserved
  AMSKON        06       Mask On - Logically OR the AMSK register contents with
                         the address selection
  AREVT         05       Read Event - Generate event on read cycles (watchpoint)
  AWEVT         04       Write Event - Generate event on write cycles (watchpoint)
  AIEVT         03       Instruction Event - Generate event on instruction cycles,
                         break only if the instruction executes (breakpoint)
  AEXTQ         02       External Qualifier - When a one, the external qualifier
                         input qualifies ACU event generation at the point the
                         address comparison is made
  AJOIN         01       Join - The event for the ACU is qualified by the DCU
                         event output. Both the ACU and DCU comparisons must be
                         TRUE to declare an ACU event. For cases where an ACU
                         address comparison is joined to a DCU data comparison,
                         the ACU comparison is delayed to align in time with the
                         DCU data comparison

Specific alignment of address, data, and cycle qualifiers is architecture specific to the particular programmable digital processor.
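Table 2 lends itself to a small encoding helper. The sketch below assembles an ACNTL event-generation value from the bit positions listed in the table, here for a hypothetical instruction breakpoint on the program address. Only the fields shown in Table 2 are encoded; the ownership, AFEN and ASTOP control bits are not defined by the table and are therefore omitted, and the macro names are invented rather than taken from any vendor header.

```c
#include <stdint.h>

/* Bit positions taken from Table 2 (ACNTL, event generation mode).
 * Macro names are illustrative, not the vendor's register definitions. */
#define ACNTL_ASELA_NONE  (0u << 7)   /* bits 08:07 - select no address    */
#define ACNTL_ASELA_PROG  (1u << 7)   /* bits 08:07 - program address      */
#define ACNTL_ASELA_MEM   (2u << 7)   /* bits 08:07 - memory address       */
#define ACNTL_AMSKON      (1u << 6)   /* OR AMSK with the selected address */
#define ACNTL_AREVT       (1u << 5)   /* event on read cycles              */
#define ACNTL_AWEVT       (1u << 4)   /* event on write cycles             */
#define ACNTL_AIEVT       (1u << 3)   /* event on instruction cycles       */
#define ACNTL_AEXTQ       (1u << 2)   /* qualify with the external input   */
#define ACNTL_AJOIN       (1u << 1)   /* join with the DCU event           */

/* Hypothetical example: break when the program address matches AREF,
 * with no masking, no external qualifier and no join with the DCU. */
static inline uint16_t acntl_program_breakpoint(void)
{
    return (uint16_t)(ACNTL_ASELA_PROG | ACNTL_AIEVT);
}
```

A data watchpoint would instead select the memory address and set AWEVT or AREVT, subject to the architecture-specific alignment noted above.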
Breakpoint events are processed if and only if the instruction referenced by the breakpoint tag reaches the point in the instruction decode where the instruction would execute had the break event not been generated.
The address comparison unit 310 configures for counter functions where the AMSK register serves as a counter and the AREF register either configures an eight bit counter that extends the reach of the AMSK counter or serves as a compare value that identifies the reload point for the AMSK counter. The counter configurations are a 40 bit benchmarking counter, a 32 bit period counter (reloadable when the count reaches zero), or two sixteen bit reloadable counters. These counter functions support benchmarking, watchdog, period, and external event counting in addition to supporting execution pauses in anticipation of externally generated debug suspend requests. Table 3 shows the function specific mode bit definitions of register ACNTL for counter functions.

TABLE 3
  Function      Bit(s)   Description
  ACM [1:0]     08:07    Count Mode
                         00 - Pause Period (8 bits)/Period (24 bits)
                         01 - Period (2-16 bit)
                         10 - Period (32 bits)
                         11 - Benchmark (40 bits)
  ACEN1 [1:0]   06:05    Count Enable
                         00 - Continuous count
                         01 - Count when DCU event output is TRUE, else no count
                         10 - Count when nET0 is TRUE, else no count
                         11 - Count when nET1 is TRUE, else no count
  ACEN0 [1:0]   04:03    Count Enable
                         00 - Continuous count
                         01 - Count when DCU event output is TRUE, else no count
                         10 - Count when nET0 is TRUE, else no count
                         11 - Count when nET1 is TRUE, else no count
  ARL [1:0]     02:01    Count Reload
                         00 - No reload, roll over at zero
                         01 - Reload at zero and count
                         10 - No reload, generate debug suspend at zero, stay at
                              zero
                         11 - Wait at zero for external start, count up to reload,
                              reload to zero, wait for external start
                         Reload occurs when the count value equals the reference
                         value and a count condition occurs
  AFREE         00       Free
                         0 - count free of the CPU execution state
                         1 - if the debug enable bit is FALSE, count;
                             if the debug enable bit is TRUE, don't count

The address comparison unit 310 configures for parallel signature analysis functions where the AMSK and AREF registers serve as a parallel signature analysis generator. Either the program address or the memory address can be configured as the parallel signature analysis input.
The parallel signature analysis calculation begins when the parallel signature analysis function is enabled and terminates when the address comparison unit 310 function is specified as OFF or the function is changed. Table 4 shows the function specific bit definitions of register ACNTL for parallel signature analysis functions.

TABLE 4
  Function      Bit(s)   Description
  ASELA [1:0]   08:07    Select Address - Select address for event comparison
                         00 - Select no address
                         01 - Select program address
                         10 - Select memory address
                         11 - Reserved
  Reserved      06       Reserved
  Don't Care    05:00    These bits are a don't care for the parallel signature
                         analysis function

The address comparison unit 310 configures to an off mode where either the debug software or the application retains ownership but the address comparison unit 310 block is off. In this off configuration, the current owner retains ownership. For the unclaimed mode, neither the debug software nor the application retains ownership and the address comparison unit 310 block is off.
The data comparison unit 320 contains two 32 bit registers DREF and DMSK and one 16 bit register DCNTL. The DREF and DMSK registers are merely 32 bit data registers that can be addressed as sixteen bit registers in 16 bit architectures. Their function is defined by the DCNTL register described in Table 5. The DCNTL register configures the DREF and DMSK registers in a number of modes, including: event generation such as breakpoints, watchpoints and nET0 and nET1 triggers; parallel signature analysis functions for test; reloadable period counts; off, performing no function while ownership by the application or debug is unchanged; and unclaimed, performing no function while either the application or debug can obtain ownership. Data comparison unit 320 is responsive to the bus input selected by multiplexer 321.
The data comparison unit 320 configures for event generation where the DMSK register serves as a mask register and the DREF register serves as a comparison reference. The data comparison unit 320 generates a debug suspend request when the DCNTL register DSTOP and DFEN bits are TRUE. The DMSK field defines the data comparison unit 320 debug suspend request rudeness level. Generation of an event without generating a debug suspend request allows the data comparison unit 320 event to be used as a trigger generator through the nET0 and nET1 pins without altering core execution. This function supports break and watchpoints, execution pause, and event counting.
The data comparison unit 320 event generation works in tandem with the address comparison unit 310 event generation to provide address and data breakpoints. This feature requires that the two units be joined. The address comparison unit 310 event detects the address match while the data comparison unit 320 detects the read data or write data match associated with an access. The address comparison unit 310 address comparison is delayed to align with the data comparison unit 320 event processing.
The data comparison unit 320 provides a unique ability to compare up to 32 user supplied inputs to a reference. The user inputs supplied to the megamodule core can be parallel signature analyzed or used as events.
The selection of the data comparison unit 320 parallel signature analysis mode is made available to the logic outside the CPU megamodule. Table 5 shows the function specific mode bit definitions for data comparison unit 320 event generation.

TABLE 5
  Function      Bit(s)   Description
  DMSK [1:0]    10:09    Debug Suspend Request Mask - Generate one of four debug
                         suspend requests provided the DSTOP field specifies a
                         debug suspend request
                         00 - debug enable bit and not high priority interrupt
                         01 - not high priority interrupt
                         10 - debug enable bit
                         11 - any; debug enable bit and high priority interrupt
                              are don't cares
  DSEL [2:0]    08:06    Select Comparison Input - Select the input used for
                         event comparison
                         000 - Program Address
                         001 - Memory Address
                         010 - Program Read Data
                         011 - Memory Read Data
                         100 - Program Write Data
                         101 - Memory Write Data
                         110 - External parallel signature analysis inputs
                         111 - Reserved, no selection
  DREVT         05       Read Event - Generate event on read cycles (watchpoint)
  DWEVT         04       Write Event - Generate event on write cycles (watchpoint)
  DIEVT         03       Instruction Event - Generate event on instruction cycles,
                         break only if the instruction executes (breakpoint)
  DEXTQ         02       External Qualifier - When a one, the external qualifier
                         input qualifies DCU event generation at the time of the
                         comparison
  DJOIN         01       Join - The event for the DCU is qualified by the ACU pre
                         event output; both must be true to declare an event
  DSTOP         00       Debug Suspend
                         0 - qualify the event generation but generate no debug
                             suspend action
                         1 - generate a debug suspend request defined by DMSK [1:0]

The data comparison unit 320 counter functions, when implemented, are identical to those for the address comparison unit 310. Please refer to Table 3 for a description of the counter modes. The input from the address comparison unit is named DAEVT instead of ADEVT.
The data comparison unit 320 configures for parallel signature analysis functions where the DMSK and DREF registers serve as a parallel signature analysis generator. The data comparison unit 320 parallel signature analysis function provides for the selection of any of the six sources shown in Table 6 as the parallel signature analysis input. The parallel signature analysis calculation begins when the parallel signature analysis function is enabled and terminates when the data comparison unit 320 function is specified as OFF. Changing the function to another function has undetermined results.
Table 6 shows the function specific bit mode bit definition for parallel signature analysis functions.<tb> <sep> <sep>TABLE 6<tb> <sep> <sep>Function<sep>Bit(s)<sep>Description<tb> <sep> <sep>Don't Care<sep>10:09<sep>These bits are a don't care for the<tb> <sep> <sep> <sep> <sep>parallel signature analysis function.<tb> <sep> <sep>DSEL [2:0]<sep>08:06<sep>Select Comparison Input - Select<tb> <sep> <sep> <sep> <sep>program address for event comparison<tb> <sep> <sep> <sep> <sep>000 - Program Address<tb> <sep> <sep> <sep> <sep>001 - Memory Address<tb> <sep> <sep> <sep> <sep>010 - Program Read Data<tb> <sep> <sep> <sep> <sep>011 - Memory Read Data<tb> <sep> <sep> <sep> <sep>100 - Program Write Data<tb> <sep> <sep> <sep> <sep>101 - Memory Write Data<tb> <sep> <sep> <sep> <sep>110 - External parallel signature<tb> <sep> <sep> <sep> <sep>analysis inputs<tb> <sep> <sep> <sep> <sep>111 - Reserved, no selection<tb> <sep> <sep>Don't Care<sep>05:00<sep>These bits are a don't care for the<tb> <sep> <sep> <sep> <sep>parallel signature analysis functionThe data comparison unit 320 configures in off and unclaimed modes identical to that of the address comparison unit 310.The external comparison unit 330 includes a register ECNTL that manages external events that can generate debug suspend requests. The ECNTL register manages emulation and test pin zero and one inputs as well as external input used by the logic for external hardware triggering. Refer to Table 7 for a description of this function.<tb> <sep> <sep>TABLE 7<tb> <sep> <sep>Function<sep>Bit(s)<sep>Description<tb> <sep> <sep>EDBGO<sep>15<sep>External Debug Ownership<tb> <sep> <sep> <sep> <sep>0 - if any of bits 14-11 are 1, the<tb> <sep> <sep> <sep> <sep>application owns<tb> <sep> <sep> <sep> <sep>1 - if any of bits 14-11 are 1, debug<tb> <sep> <sep> <sep> <sep>owns<tb> <sep> <sep> <sep> <sep>If neither of these two conditions are<tb> <sep> <sep> <sep> <sep>true, the function is unclaimed<tb> <sep> <sep>EFEN<sep>14<sep>External Function Enable<tb> <sep> <sep> <sep> <sep>0 - external event function is<tb> <sep> <sep> <sep> <sep>disabled this function cannot<tb> <sep> <sep> <sep> <sep>generate debug suspend events<tb> <sep> <sep> <sep> <sep>1 - external event function is<tb> <sep> <sep> <sep> <sep>enabled and function can generate<tb> <sep> <sep> <sep> <sep>debug suspend events<tb> <sep> <sep>ET1EN<sep>13<sep>ET1 Input Enable<tb> <sep> <sep> <sep> <sep>0 - nET1 ignored<tb> <sep> <sep> <sep> <sep>1 - if EFEN is 1, synchronized nET1<tb> <sep> <sep> <sep> <sep>input can generate debug suspend<tb> <sep> <sep>ET0EN<sep>12<sep>ET0 Input Enable<tb> <sep> <sep> <sep> <sep>0 - nET0 ignored<tb> <sep> <sep> <sep> <sep>1 - if EFEN is 1, synchronized nET0<tb> <sep> <sep> <sep> <sep>input can generate debug suspend<tb> <sep> <sep>EXTEN<sep>11<sep>External Input Enable<tb> <sep> <sep> <sep> <sep>0 - external event ignored<tb> <sep> <sep> <sep> <sep>1 - if EFEN is 1, synchronized<tb> <sep> <sep> <sep> <sep>external event can generate debug<tb> <sep> <sep> <sep> <sep>suspend<tb> <sep> <sep>EMSK [1:0]<sep>10:09<sep>Debug suspend Request Mask - Generate<tb> <sep> <sep> <sep> <sep>one of four debug suspend requests<tb> <sep> <sep> <sep> <sep>provided EFEN true<tb> <sep> <sep> <sep> <sep>00 - debug enable bit and not high<tb> <sep> <sep> <sep> <sep>priority interrupt<tb> <sep> <sep> <sep> <sep>01 - not high priority interrupt<tb> <sep> <sep> <sep> <sep>10 - debug enable bit<tb> <sep> <sep> <sep> <sep>11 - any, debug enable bit and high<tb> <sep> <sep> <sep> 
The application and debug software share the registers controlling external trigger event inputs, breakpoints and watchpoints, data logging, parallel signature analysis, and count functions. The application and debug software cannot simultaneously own these resources, but establish and release ownership through memory-mapped control registers continuously visible to both the application and debug software. The debug software has the ability to seize any resource if necessary, or to negotiate with the application through software sequences.

Debug accesses generate a debug request signal (DBG_REQ) identifying the access as a debug request. A three-bit debug request type code (DBG_TYPE[2:0]) accompanies this request, defining eight different cycle types. These cycle types are listed in Table 8.

TABLE 8
Code   Cycle Type
000    Program Memory - Access to program memory
001    Data Memory Minimum - polite control access to data memory
010    CPU Registers - CPU register access
011    Memory Map Registers - Access to registers accessible through the memory map but not by the application
100    Software Breakpoint Memory - Access to software breakpoint memory
101    Maximum Data Memory - rude control access to data memory, used to override system resources owned by the application
110    Emulation Peripheral Search - Read looking for an emulation peripheral response and the associated emulation function ID
111    Execute Instruction

These codes appear static, as debug software establishes them with the instruction register op-code before any request for access is made. They are not valid unless accompanied by DBG_REQ. Separating the debug request from the normal processor request allows debug software to access certain resources without the processor having similar privileges. An example is a software breakpoint memory that responds only to the debug memory request strobe. The application and debug software share other resources, with the debug software determining whether the application has access permission. A request for any debug access is qualified by the high priority interrupt and debug enable bit status register flags, as explained below.

The debug unit connections to the processor allow debug software to execute any instruction in the instruction set while the CPU executes the application code. The execution of these instructions does not affect the application, as they do not change the condition code or status. This capability is also available when code execution is halted during debug suspend state 102. The debug software allows the user to restrict the use of this capability. It may be optionally excluded within high priority interrupt service routines by setting the high priority interrupt bit to one. Debug software may be excluded from areas that the application has defined as off limits by setting the debug enable bit to zero. The debug software and underlying debug hardware allow the user to qualify actions with either one or both of these restrictions.
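A compact way to view the cycle-type codes of Table 8 is as an enumeration; the sketch below mirrors the table, with the enumerator names being illustrative only.

```c
/* Illustrative encoding of the three-bit debug request type code
 * (DBG_TYPE[2:0]) of Table 8; enumerator names are hypothetical. */
enum dbg_type {
    DBG_PROG_MEM     = 0x0,  /* access to program memory                             */
    DBG_DATA_MEM_MIN = 0x1,  /* polite control access to data memory                 */
    DBG_CPU_REGS     = 0x2,  /* CPU register access                                  */
    DBG_MMAP_REGS    = 0x3,  /* memory-mapped registers not visible to the application */
    DBG_SWBP_MEM     = 0x4,  /* software breakpoint memory                           */
    DBG_DATA_MEM_MAX = 0x5,  /* rude control access overriding app-owned resources   */
    DBG_EMU_SEARCH   = 0x6,  /* emulation peripheral search read                     */
    DBG_EXEC_INSTR   = 0x7   /* execute instruction                                  */
};
```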
It is the responsibility of the debug software to not insert instructions in the execution stream that alter the application results or program flow.Address comparison unit 310, data comparison unit 320 and external comparison unit 330 are individual emulation peripherals that are installed in the device data memory map. Additional optional emulation peripherals can also be installed in the memory map. Debug must be able to locate these emulation peripherals in the memory map and use them. The mechanism used to locate these functions utilizes the debug/test direct memory access (DT_DMA) capabilities of address comparison unit 310. It is assumed that address comparison unit 310, data comparison unit 320 and external comparison unit 330 will always be part of the megamodule. A special memory access searches the data memory space for emulation peripherals. The emulation peripherals, including address comparison unit 310, data comparison unit 320 and external comparison unit 330 and any optional additional emulation peripherals, respond in a predetermined manner.A debug access including a three bit debug request type code identifying an emulation search cycle allows the memory space to be read with emulation peripherals responding to these accesses in a special way. Other memory and peripherals do not see or respond to this cycle type. This prevents the search from disturbing the system state in some way. The debug/test direct memory access (DT_DMA) capability of address comparison unit generates this type of access. The address comparison unit 310 is programmed to search memory with reads continuously until an emulation peripheral register responds by supplying a 1 on the CTOOLS_ACK megamodule input. In the preferred embodiment the last register in an emulation peripheral set responds. In the case of address comparison unit 310 this is ACNTL, for data comparison unit 320 this is DCNTL and for external comparison unit 330 this is ECNTL. When an emulation peripheral register responds to a query, it applies a logic 1 to the CTOOLS_ACK terminal and places its identification number on the memory read bus. The address comparison unit 310 debug/test direct memory access responds to the acknowledge by ending the DT_DMA burst read. When the DT_DMA suspends, the AMSK register of address comparison unit 310 contains the address of the emulation peripheral register acknowledging the query and the AREF register of address comparison unit 310 contains the identification number of the emulation peripheral. Debug software recognizes the DT_DMA search has halted by observing system status through the status register. Debug software reads these two address comparison unit 310 registers upon detecting the DMA state machine halt.Address comparison unit 310, data comparison unit 320 and external comparison unit 330 are given predetermined identification codes. An emulation peripheral search cycle code with the address of the ACNTL register yields the identification number for address comparison unit 310, which is zero. An emulation peripheral search cycle code with the address of the DCNTL yields the identification number of data comparison unit 320, which is one. An emulation peripheral search cycle code with the address of the ECNTL yields the identification number of the external comparison unit 330, which is two.The DMA state machine operates as follows when performing emulation peripheral searches. The destination of read data is the AREF register. 
The DMA makes accesses continuously without removing the read data from the AREF register. When the CTOOLS_ACK signal stops the block transfer, the AREF register contains the identification number of the last emulation peripheral responding.

It may be desirable to search the entire data memory space to detect emulation peripherals. An endless access mode of address comparison unit 310 is specified. The search is started at an address just past the ACNTL register. The search will find data comparison unit 320, external comparison unit 330 and, after the address wraps, address comparison unit 310. The debug software knows the search is complete when the address of the ACNTL register is detected following a DMA halt. For more limited searches, the word count or data comparison unit 320 directed-end options can be selected to limit the search range.

The identification number of address comparison unit 310 is zero, that of data comparison unit 320 is one and that of external comparison unit 330 is two. Emulation peripherals outside the processor megamodule can share these identification numbers only if they have identical capability. The identification number identifies capability, while the address identifies the location and establishes the existence of multiple copies of the same capability. Debug software uses the identification numbers found to determine the capabilities supported by the emulation peripherals in the device available at the debug user interface. After an emulation peripheral is found, the search may be restarted from the address identified plus one. The response of an emulation peripheral to an emulation peripheral query is independent of whether the application or debug software owns the emulation peripheral.
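The host-side flow of this peripheral enumeration can be sketched as follows. The helper functions (start_dtdma_search, wait_for_dma_halt, read_amsk, read_aref) are hypothetical stand-ins for debug-software primitives, not part of the disclosed interface; the loop simply follows the sequence described above, under the assumption that AMSK and AREF can be read back after each DMA halt.

```c
/* Sketch of the emulation peripheral search described above.  The ACU
 * DT_DMA reads data memory with the emulation-search cycle type until a
 * peripheral asserts CTOOLS_ACK; AMSK then holds the responding register
 * address and AREF its identification number. */
#include <stdint.h>
#include <stdio.h>

extern void     start_dtdma_search(uint32_t start_addr); /* hypothetical */
extern void     wait_for_dma_halt(void);                 /* hypothetical */
extern uint32_t read_amsk(void);                         /* hypothetical */
extern uint32_t read_aref(void);                         /* hypothetical */

void enumerate_emulation_peripherals(uint32_t acntl_addr)
{
    uint32_t next = acntl_addr + 1;   /* start just past ACNTL so the ACU is found last */

    for (;;) {
        start_dtdma_search(next);     /* endless-access read burst, search cycle type  */
        wait_for_dma_halt();          /* CTOOLS_ACK from a peripheral stops the burst  */

        uint32_t addr = read_amsk();  /* address of the acknowledging register */
        uint32_t id   = read_aref();  /* peripheral identification number      */
        printf("emulation peripheral id %u at address 0x%08x\n",
               (unsigned)id, (unsigned)addr);

        if (addr == acntl_addr)       /* address wrapped back to ACNTL: search complete */
            break;
        next = addr + 1;              /* restart past the peripheral just found */
    }
}
```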
A method of preparing a semiconductor die comprises pre-treating exposed silicon (101) to form an oxide (110) prior to silicide (105) formation; and depositing metal (120) on the oxide. In disclosed embodiments, a chemical oxide surface pre-treatment retards nickel (Ni) diffusion to provide controlled NiSi silicide formation. This addresses the problem of rapid diffusion of Ni and Ni decoration of Si defects.
CLAIMS 1. A method of preparing a die, comprising: prior to silicide formation, treating exposed silicon to form an oxide; and depositing a metal on the oxide. 2. The method of Claim 1, wherein depositing the metal on the oxide comprises depositing nickel on the oxide. 3. The method of Claim 2, wherein treating exposed silicon to form an oxide comprises forming an oxide that is less than or equal to about 15 angstroms thick. 4. The method of Claim 1, wherein treating exposed silicon to form an oxide comprises forming a non-thermal oxide. 5. The method of Claim 1 or 3, wherein treating the exposed silicon to form an oxide comprises treating the exposed silicon with a solution comprising ammonium hydroxide, hydrogen peroxide, and water; hydrochloric acid, hydrogen peroxide, and water; hydrogen peroxide; ozone; ozonated deionized water; or combinations thereof. 6. The method of Claim 2, further comprising: heating the die comprising metal on oxidized silicon to form a silicide. 7. The method of Claim 6, wherein treating the die to expose silicon comprises contacting the die with a hydrofluoric acid solution. 8. The method of Claim 1, wherein depositing metal on the oxide comprises depositing titanium, cobalt, nickel, platinum, palladium, tungsten, molybdenum, or combinations thereof. 9. A system for forming metal silicide comprising: a vessel wherein oxide is formed on a die comprising exposed silicon; and a processing chamber wherein a metal source is disposed to deposit metal on the oxide.
TREATMENT OF SILICON PRIOR TO NICKEL SILICIDE FORMATION

TECHNICAL FIELD

The subject matter disclosed herein relates generally to semiconductor processing and in particular to a method of preparing a die for silicide formation via treating silicon to form an oxide prior to metal deposition.

BACKGROUND

Integrated circuits are fabricated on the surface of a semiconductor wafer in layers, and later singulated into individual semiconductor devices, or "dies." Many fabrication processes are repeated numerous times, constructing layer after layer until fabrication is complete. Metal layers, which typically increase in number as device complexity increases, include patterns of conductive material that are vertically insulated from one another by alternating layers of insulating material. Conductive traces are also separated within each layer by an insulating, or dielectric, material. Vertical, conductive tunnels called "vias" typically pass through insulating layers to form conductive pathways between adjacent conductive patterns. Defects in semiconductor devices may result from, among other things, diffusion of mobile species and deficiencies in the layers of materials forming device structures.

Metals are commonly employed in fabrication of semiconductor devices. Certain metals, e.g., cobalt, nickel, titanium, and platinum, may be suitable for employment as a constituent in formation of a metal silicide (or "silicide"), which may act as a low resistance contact between metal layers and the silicon substrate in a device. The processes involved in preparing a die for pre-silicide metal deposition and silicide formation may affect silicide film integrity and the potential for undesirable diffusion of metal in the device. In application of nickel (Ni) to exposed silicon (Si), there may be problems associated with rapid diffusion of Ni and Ni decoration of Si defects.

SUMMARY

In some embodiments, a method for preparing a die comprises treating exposed silicon to form an oxide prior to silicide formation, and depositing metal on the oxide. The metal may comprise titanium, cobalt, nickel, platinum, palladium, tungsten, molybdenum, or combinations thereof. The oxide may be less than or equal to about 15 angstroms thick. In various embodiments, treating exposed silicon to form an oxide comprises forming a non-thermal oxide. Treating exposed silicon to form an oxide may also comprise treating the exposed silicon with an oxidizing plasma; alternatively, treating exposed silicon to form an oxide may comprise forming a chemical oxide. In certain other embodiments, treating exposed silicon to form an oxide comprises treating exposed silicon with a solution comprising ammonium hydroxide, hydrogen peroxide, and water; hydrochloric acid, hydrogen peroxide, and water; hydrogen peroxide; ozone; ozonated deionized water; or combinations thereof. The time between treating exposed silicon to form an oxide and depositing metal on the oxide may be in a range from about 0 to about 60 hours.

In embodiments, a method for improving silicide film integrity comprises oxidizing exposed silicon on a die, depositing metal on the oxidized silicon, and heating the die comprising metal on oxidized silicon to form a silicide. The method may further comprise treating the die to expose silicon prior to oxidizing the exposed silicon.
In other embodiments, a system for forming metal silicide comprises a vessel wherein oxide is formed on a die comprising exposed silicon; and a processing chamber wherein a metal source is disposed to deposit metal on the oxide. The vessel in which oxide is formed may comprise a wet chemical tank, a plasma reactor, or combinations thereof. The processing chamber may deposit metal via sputtering. In certain embodiments, the silicide is formed at a temperature in a range from about 250 to about 500 degrees Celsius.

For Ni deposition onto exposed Si, a chemical oxide surface pre-treatment retards nickel (Ni) diffusion to provide controlled NiSi silicide formation. This addresses the problem of rapid diffusion of Ni and Ni decoration of Si defects.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a cross-sectional view of materials in a semiconductor device that may be involved in preparing a die for metal silicide formation;

FIG. 2 illustrates another cross-sectional view of materials in a semiconductor device after a metal reacts with silicon to form a metal silicide; and

FIG. 3 illustrates a system for preparing a die and carrying out silicide formation.

NOTATION AND NOMENCLATURE

Certain terms are used throughout the following description and claims to refer to particular components. As one skilled in the art will appreciate, companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. Also employed throughout this document are the terms "including" and "comprising," which are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to . . . ."

The term "integrated circuit" or "IC" refers to a set of electronic components and their interconnections (internal electrical circuit elements, collectively) that are patterned on the surface of a microchip. The term "semiconductor device" refers generically to an integrated circuit (IC). The term "die" ("dies" for plural) refers generically to an integrated circuit or semiconductor device, which may be a portion of a wafer, in various stages of completion, including the underlying semiconductor substrate, insulating materials, and all circuitry patterned thereon.

DETAILED DESCRIPTION OF EMBODIMENTS

FIG. 1 illustrates an embodiment of a semiconductor structure 100 involved in preparing a die for silicide formation. The view of FIG. 1 is a cross-section of a semiconductor structure 100, e.g., a die, die sample, or portion of a wafer, at an intermediate point in the construction of an IC. In various embodiments, preparing for silicide formation comprises treating exposed silicon 101 to form an oxide 110, and depositing a metal 120 on the oxide 110. In certain embodiments, the metal 120 is nickel, which for silicide formation reacts with silicon 101 to form nickel silicide (not shown).

FIG. 2 illustrates an embodiment of a semiconductor structure 200 after formation of silicide 105, where nickel is employed as the metal 120. Formation of nickel silicide 105 is carried out after forming an oxide 110 on exposed silicon 101 and depositing nickel 120 on the oxide 110. FIG. 2 shows nickel silicide 105 formed between silicon 101 and nickel 120. After formation, the oxide 110 may primarily be at the interface of the nickel silicide 105 and any unreacted nickel 120. During formation, the nickel 120 diffuses through the oxide 110 and reacts with the silicon 101 to form nickel silicide 105.
Thus, some of the silicon 101 and some of the nickel 120 are consumed in formation of nickel silicide 105.

The metal 120 can be any metal effective for reacting with silicon to form a silicide that creates a low resistance contact between metal layers and the silicon substrate. As technology advances and the dimensions of structures within semiconductor devices shrink, some metals may be more suitable than others. In some embodiments, the metal 120 comprises titanium, cobalt, nickel, platinum, palladium, tungsten, molybdenum, or combinations thereof. Some examples of considerations when selecting a metal comprise silicon consumption during silicide formation, diffusion rate, thermal stability of the desired phase of silicide, and electrical characteristics, such as junction integrity, contact resistance, and sheet resistance.

The treatment employed to form the oxide affects the characteristics of the oxide, which in turn may impact diffusion of metal to the silicon and silicide integrity. Some pertinent characteristics of the oxide may comprise density, stoichiometry, thickness, hydrogen content, OH content, and uniformity. In some embodiments, treating exposed silicon forms a non-thermal oxide. Examples of non-thermal oxides comprise chemical oxides and oxides formed via treating exposed silicon with an oxidizing plasma source. Examples of oxidizing plasma sources for treating silicon prior to metal deposition comprise a direct energetic plasma of Ar+, He+, or Ne+ in combination with an oxygen source, such as O2 or H2O; and a remote plasma that contains reactive oxidizing radicals of O* or OH*. In certain embodiments, chemical oxides are oxides formed via treating exposed silicon with a solution comprising ammonium hydroxide, hydrogen peroxide, and water; hydrochloric acid, hydrogen peroxide, and water; hydrogen peroxide; ozonated deionized water; or combinations thereof. Effective treatments of exposed silicon to form an oxide typically control oxide thickness. In certain embodiments, as measured by an Opti-Probe(R) with a desorber manufactured by Therma-Wave of Fremont, California, the thickness of the oxide may be less than or equal to about 15 angstroms; alternatively, less than or equal to about 12 angstroms; alternatively, less than or equal to about 10 angstroms.

The oxide formed via treating exposed silicon may also serve as a protective layer for the silicon. Whereas the time window between exposing silicon and depositing pre-silicide metal may typically have been a concern, forming an oxide on the exposed silicon may reduce such concerns. Thus, the protective oxide layer may add manufacturing robustness. In some embodiments, the time window between exposing silicon and depositing pre-silicide metal is in a range from about 0 to about 24 hours; alternatively, the time window is in a range from about 0 to about 12 hours; alternatively, from about 0 to about 2 hours. In other embodiments, the time window between treating exposed silicon to form an oxide and depositing pre-silicide metal is in a range from about 0 to about 120 hours; alternatively, from about 0 to about 60 hours.

It is believed the oxide on the silicon provides a more desirable surface for the silicon prior to metal deposition. Exposed silicon may be substantially lacking in stability and uniformity, and susceptible to undesirable reactions and diffusion before and after metal deposition.
It is believed the oxide layer permits the metal diffusion necessary for silicide formation while also providing a more stable surface on the silicon. In at least some cases, the oxide layer may be associated with less variation in island diode leakage and breakdown, less variation in contact resistance, and less variation in transistor gain.

Silicides (or "metal silicides") are typically employed in semiconductor devices to create low resistance contact between metal layers and the silicon substrate. Silicide, which includes a self-aligned silicide or "salicide," is the product of a reaction between metal and silicon. The characteristics of the silicide and the silicide formation process may vary substantially depending on the type of metal employed. Silicides of cobalt, for example, may possess higher resistance and consume more silicon during formation than silicides of nickel. Also, the quality of the silicide formed with a single metal may vary depending on reaction conditions. For example, the silicide of nickel with silicon may comprise different phases, e.g., di-nickel silicide (Ni2Si), nickel di-silicide (NiSi2), and/or nickel silicide (NiSi). Reaction kinetics, e.g., time and temperature, typically determine which silicide phase is formed and/or the proportions of each silicide phase formed. In the case of nickel, NiSi possesses substantially lower resistance and is, thus, typically more desirable in the role of contact silicide. In at least some instances, the desired silicide is NiSi, and formation of an oxide on exposed silicon prior to metal deposition may minimize formation of unwanted NiSi2 and Ni2Si.

Also provided in the present application is a system 300 for forming silicide, an aspect of which is illustrated by FIG. 3. In some embodiments, the system 300 comprises a vessel 310 and a processing chamber 330. A die (not shown) including exposed silicon is placed in the vessel 310 for processing. The vessel 310 may treat the exposed silicon to form an oxide. Subsequently, the die is transferred from the vessel 310 to the processing chamber 330. A metal source 320 may be disposed in the processing chamber 330 at conditions effective for depositing metal on the oxide. In some embodiments, the processing chamber 330 effects both deposition and silicide formation. Thus, after metal deposition, silicide formation may also be accomplished in the processing chamber 330.

In some embodiments, forming an oxide on exposed silicon in the vessel 310 may be preceded by a process for exposing silicon. Such a process for exposing silicon would not typically be carried out in the vessel 310 where an oxide is formed. Effective techniques for exposing silicon may be those characteristically employed by one skilled in the art. In at least one case, the process for exposing silicon prior to forming an oxide comprises a hydrofluoric acid treatment. Such a hydrofluoric acid treatment may comprise hydrofluoric acid in the gas phase, liquid phase, or combinations thereof. As an example, in order to expose silicon the die may be treated in a chemical bath comprising a 0.2-1 percent by volume hydrofluoric acid solution in deionized water at 23 degrees Celsius for 1-10 minutes.

In various embodiments, the vessel 310 receives a die comprising exposed silicon and is set at conditions effective for forming an oxide on the exposed silicon as described above. In certain cases, the vessel 310 may comprise a wet chemical tank, a plasma reactor, or combinations thereof.
An example of an appropriate wet chemical tank is that found in the F-Wet FC-821L manufactured by DNS Electronics of Sunnyvale, California. An example of conditions appropriate for forming an oxide on exposed silicon in the F-Wet FC-821L may comprise a mixture of 0.1-10 weight percent hydrochloric acid, 0.3-10 weight percent hydrogen peroxide, and the remainder water; and a process time and temperature of 10 minutes at 25 degrees Celsius.

The processing chamber 330 may receive the die comprising oxide on silicon. In at least one embodiment, the processing chamber 330 first deposits metal on the oxide and then achieves conditions effective for forming the metal silicide. In some cases a protective layer may be deposited on the metal before silicide formation. An example of such a protective layer is a titanium nitride layer. Such a protective layer may help prevent oxidation of the metal. An example of an occasion presenting risk of oxidation is where there is a delay between metal deposition and silicide formation, such as where metal deposition and silicide formation occur in separate chambers as described below. The protective layer may remain on the die during silicide formation. In certain embodiments, the processing chamber 330 may accomplish all of deposition of a metal, deposition of a protective layer, and silicide formation. The deposition(s) may be accomplished via any means effective for putting down a metal and optionally a protective layer as described herein. In various embodiments, the deposition(s) occur via sputtering as would typically be effected by those skilled in the art. An example of a processing chamber suitable for depositing a metal, optionally depositing a protective layer, and forming silicide is the Endura 5500 manufactured by Applied Materials of Santa Clara, California. Conditions effective for silicide formation may comprise a temperature in a range from about 250 to about 500 degrees Celsius at a pressure in a range from about 10^-8 to about 760 torr.

Variations among components of the system 300 are practicable. In at least one instance, the system 300 is designed to form self-aligned silicide or "salicide." In some embodiments, a separate deposition chamber (not shown) may precede the processing chamber 330 and accomplish metal deposition following oxide formation in the vessel 310. In cases where a protective layer, e.g., titanium nitride, is needed as well, such a deposition chamber may accomplish deposition, e.g., via sputtering, of both the metal and the protective layer. Methods of executing such a deposition may be those methods employed by one skilled in the art. An example of such a deposition chamber is the Endura 5500 manufactured by Applied Materials of Santa Clara, California.

Yet additional components may optionally be employed in the system 300 for forming silicide. In some embodiments, the die having oxidized silicon may be treated in a plasma reactor (not shown) prior to metal deposition. Such a plasma reactor typically exposes the oxidized silicon to a plasma etch. An example of a process sequence including a plasma reactor may be: form oxide on exposed silicon, expose to plasma etch, deposit metal, and form silicide. Methods of contacting with a plasma etch prior to metal deposition may be those methods typically employed by one skilled in the art.
Examples of suitable plasma sources comprise the Pre-Sputter Etch Endura Chamber manufactured by Applied Materials of Santa Clara, California, or the Highlands Ash Chamber manufactured by Mattson of Fremont, California.

In additional embodiments, the system 300 may comprise one piece of equipment. An example of such a scheme would comprise treating exposed silicon to form an oxide, depositing a metal and optionally a protective layer, and forming silicide, all of which may be executed in the same chamber. In at least one embodiment, a processing chamber receives a die comprising exposed silicon, forms a non-thermal oxide on the exposed silicon, deposits nickel on the oxide, and executes silicide formation as described herein.

While various embodiments of the invention have been shown and described, modifications thereof can be made by one skilled in the art without departing from the scope of the invention.
A method of forming an elevationally extending conductor laterally between a pair of conductive lines comprises forming a pair of conductive lines spaced from one another in at least one vertical cross-section. Conductor material is formed to elevationally extend laterally between and cross elevationally over the pair of conductive lines in the at least one vertical cross-section. Sacrificial material is laterally between the elevationally extending conductor material and each of the conductive lines of the pair in the at least one vertical cross-section. The sacrificial material is removed from between the elevationally extending conductor material and each of the conductive lines of the pair while the conductor material is crossing elevationally over the pair of conductive lines to form a void space laterally between the elevationally extending conductor material and each of the conductive lines of the pair in the at least one vertical cross-section.
1.A method of laterally forming a vertically extending conductor between a pair of wires, comprising:Forming a pair of wires spaced apart from one another in at least one vertical section;Forming a conductor material extending laterally vertically between the pair of wires and vertically across the pair of wires in the at least one vertical section, the sacrificial material being laterally interposed in the at least one vertical section Between the vertically extending conductor material and each of the pair of wires; andRemoving the sacrificial material from between the vertically extending conductor material and each of the pair of wires when the conductor material vertically spans the pair of wires to be in the at least one vertical section A vertically extending conductor material forms a void laterally between each of the pair of wires.2.The method of claim 1 whereinThe pair of wires are formed to extend horizontally;The conductor material is formed to include a horizontally extending conductor material line that vertically spans the pair of horizontally extending wires and has the conductor material extending laterally inwardly between the pair of wires.3.The method of claim 2 wherein forming the conductor material line comprises subtractive patterning of the conductor material.4.The method of claim 2 wherein forming the conductor material line comprises:Forming a trench; andThe trench is filled with the conductor material.5.The method of claim 4 wherein said trench is formed in a dielectric material and said trench is filled with said conductor material comprises:Overfilling the trench with the conductor material and forming the conductor material vertically above the dielectric material;The conductor material is removed such that it is not vertically above the dielectric material.6.The method of claim 5 wherein forming the trench comprises:Forming and subtracting the patterned location material to have a longitudinal extent and location corresponding to a longitudinal extent and location of the conductor material line;Forming the dielectric material on top of the patterned reposition material and over sidewalls thereof;Removing the dielectric material vertically inward to expose a vertical outermost surface of the patterned reposition material and laterally over the sidewall of the patterned reposition material; andAfter exposing the vertical outermost surface of the patterned reposition material, the patterned reposition material is removed to form the trench.7.The method of claim 1 whereinForming the void to open it, and the method further includes sealing the void from opening prior to the removing of the conductor material, the removal of the conductor material reopening the void, Reopening to open the gap vertically;The reopened void is resealed after the removal of the conductor material so that it is not open vertically.8.A method of laterally forming a vertically extending conductor between a pair of wires, comprising:Forming a pair of wires spaced apart from one another in at least one vertical section;Forming a sacrificial material over the sidewalls of the pair of wires in the at least one vertical section;Forming a conductor material that extends laterally vertically between the pair of wires laterally above the sacrificial material and vertically across the pair of wires in the at least one vertical section, the vertically extending conductor Material extends in the at least one vertical section to be electrically coupled to a position of a node laterally between the 
pair of wires;Subtracting the conductor material to form a conductor material line having a conductor material extending vertically to a position laterally between the pair of wires, the conductor material line vertically spanning the pair wire;Removing the sacrificial material between the conductor material extending vertically to the node position and each of the pair of wires when the conductor material line vertically spans the pair of wires Forming a space laterally between the conductor material extending vertically to the node position and each of the pair of wires in at least one vertical section; andAfter forming the void, the conductor material is removed such that it does not traverse the pair of wires vertically, while leaving at least some of the conductor material extending vertically to the node location.9.The method of claim 8 including, after forming said void, on said opposite side of said vertically extending conductor material prior to said removing said conductor material from vertically across said pair of conductors A dielectric material is formed laterally above the wall.10.The method of claim 9 wherein said subtracting patterning said conductor material forms another conductor material line laterally spaced from said conductor material line and having a vertical inward extension to laterally A conductor material for another node location between the wires, the other conductor material line vertically spanning the pair of wires, the dielectric material filling a space between the conductor material line and the other wire.11.A method of forming a vertically extending conductor between a pair of wires, comprising:Forming a pair of wires spaced apart from one another in at least one vertical section;Forming a first sacrificial material line spanning the pair of wires in the at least one vertical cross-section, the first sacrificial material line comprising a first sacrificial material extending laterally vertically between the pair of wires;Forming a dielectric material on opposite sides of the first sacrificial material line;Forming a second sacrificial material over the sidewalls of the pair of wires in the at least one vertical section;Substituting the first sacrificial material line with a conductor material to form a conductor material line that vertically spans the pair of wires, the conductor material line having a vertical extension to the lateral direction in the at least one vertical section a conductor material for the position of the nodes between the wires;Removing the second sacrificial material between the conductor material extending vertically to the node position and each of the pair of wires when the conductor material line vertically spans the pair of wires Forming a void laterally between the conductor material extending vertically to the node location and each of the pair of wires in the at least one vertical section; andAfter forming the void, the conductor material is removed such that it does not span vertically across the pair of wires while leaving at least some of the conductor material extending vertically to the node location.12.The method of claim 11 including forming the second sacrificial material over the sidewalls of the pair of wires prior to forming the first sacrificial material line.13.The method of claim 11 including forming the second sacrificial material over the sidewalls of the pair of wires after forming the first sacrificial material line.14.The method of claim 11 including selectively etching the dielectric material 
vertically inwardly relative to the conductor material line prior to the removing of the conductor material after the replacing.15.The method of claim 14 wherein said etching causes a vertical outermost surface of said dielectric material to be vertically higher than a vertical outermost surface of said electrically conductive material of said pair of wires.16.The method of claim 15 wherein said pair of wires have a dielectric material on top of the composition that is different from the composition of said dielectric material.17.The method of claim 11 wherein said void comprises a first void and said method further comprises:Forming the second sacrificial material on opposite sides of the dielectric material prior to the replacing, the removal of the second sacrificial material also in another perpendicular cross-section orthogonal to the one vertical cross-section Removing the second sacrificial material between the conductor material extending vertically to the node location and the dielectric material on each of the opposite sides of the vertically extending conductor material to Forming a laterally between the conductor material vertically extending to the node position and the dielectric material on each of the opposite sides of the vertically extending conductor material in another vertical section The two voids, the first void and the second void are joined together to form a single void surrounding the conductor material that extends vertically to the node location.18.The method of claim 11 including wherein said first sacrificial material is predominantly carbon.19.The method of claim 18 wherein said removing said second sacrificial material to form said void comprises: selectively isotropically wet etching said second with respect to said conductor material and said dielectric material Sacrifice materials.20.The method of claim 18 wherein said second sacrificial material and said dielectric material are identical on a composition, said removing said second sacrificial material to form said void comprises opposing in said same etching step The conductor material and the dielectric material selectively isotropically wet etch the second sacrificial material and the dielectric material.21.The method of claim 18 wherein said second sacrificial material and said dielectric material are different in composition;The method further includes selectively etching the dielectric material vertically inwardly relative to the conductor material line prior to the removing, prior to the removing, the etching causing the dielectric material The vertical outermost surface is vertically higher than the vertical outermost surface of the electrically conductive material of the pair of wires.22.A method comprising:Forming first and second wires extending generally parallel to each other with a space therebetween, the first wire comprising a first side surface facing the second wire, the second wire comprising facing the first a second side surface of a wire;Forming a first sacrificial material such that the first sacrificial material includes a first portion covering a first portion of the first side surface of the first wire and forming a second sacrificial material such that the second sacrificial material comprises a cover a second portion of the second portion of the second side surface of the two wires;Forming a conductor material to continuously span the first wire and the second wire such that the conductor material comprises the first portion filling the first sacrificial material and the 
second portion of the second sacrificial material a portion of the space between the conductive portions; andRemoving the first portion of the first sacrificial material and the second portion of the second sacrificial material while maintaining the conductor material continuously across the first wire and the second wire to Forming a first air gap between the conductive portion of the conductor material and the first portion of the first side surface of the first wire, and the conductive portion and the second portion of the conductor material A second air gap is formed between the second portions of the second side surface of the wire.23.The method of claim 22,Wherein the first sacrificial material is elongated along the first side surface such that the first sacrificial material further comprises a third portion covering a third portion of the first side surface, and the second sacrificial material along Extending the second side surface such that the second sacrificial material further comprises a fourth portion covering a fourth portion of the second side surface;Wherein the forming the conductor material comprises:Forming a conductor layer over the entire surface; andPatterning the conductor layer such that the third portion of the first sacrificial material and the fourth portion of the second sacrificial material are exposed;The removing the first portion of the first sacrificial material and the second portion of the second sacrificial material includes exposing the third portion of the first sacrificial material and the second sacrificial material The exposed fourth portion is subjected to an etchant for the sacrificial material.24.The method of claim 22,Wherein the first sacrificial material is elongated along the first side surface such that the first sacrificial material further comprises a third portion covering a third portion of the first side surface and the second sacrificial material is along The second side surface is elongated such that the second sacrificial material further comprises a fourth portion covering a fourth portion of the second side surface;The method further includes:Forming a dielectric material to define a region in which the conductor material is to be formed before forming the conductive material; andAfter forming the conductive material, removing the dielectric material such that the third portion of the first sacrificial material and the fourth portion of the second sacrificial material are exposed;The removing the first portion of the first sacrificial material and the second portion of the second sacrificial material includes exposing the third portion of the first sacrificial material and the second sacrificial material The exposed fourth portion is subjected to an etchant for the sacrificial material.25.The method of claim 22 further comprising:The method further includes:Forming a first dielectric material and a second portion extending substantially parallel to each other to span each of the first wire and the second wire prior to forming the first sacrificial material and the second sacrificial material a dielectric material, the first dielectric material comprising a third side surface facing the second dielectric material, and the second dielectric material comprising a fourth side surface facing the first dielectric material;Forming a third sacrificial material covering a portion of the third side surface of the first dielectric material and a fourth portion covering a portion of the fourth side surface of the second dielectric material 
before forming the conductive material a sacrificial material, the third sacrificial material is combined with each of the first sacrificial material and the second sacrificial material, and the fourth sacrificial material and the first sacrificial material and the second sacrificial material Each of them merges; andAfter forming the conductor material, removing the third dielectric material and the fourth dielectric material such that the third sacrificial material and the fourth sacrificial material are exposed;The removing the first portion of the first sacrificial material and the second portion of the second sacrificial material includes subjecting the exposed third and fourth sacrificial materials to an etchant for a sacrificial material. |
Method of laterally forming a vertically extending conductor between a pair of wiresTechnical fieldEmbodiments disclosed herein relate to a method of laterally forming a vertically extending conductor between a pair of wires.Background techniqueA continuing goal of integrated circuit fabrication is to make smaller and more closely packed circuit components. As integrated circuits increase in density, the horizontal dimensions of circuit components tend to be more reduced than vertical dimensions. In many cases, the vertical dimension has increased. Vertically extending conductors are typically used to electrically couple circuit components at different heights relative to one another.In many cases, the conductor extends vertically between the two wires and has a very high aspect ratio (height to width). Historically, conductors have been separated from the wires only by solid dielectric materials. More recently, air gaps have been proposed as part of the dielectric material separating the sides of the vertically extending conductor from the immediately adjacent conductor. Maintaining a high conductor upright when forming and sealing such an air gap can be difficult.DRAWINGS1A is a schematic top plan view of a semiconductor substrate in a process in accordance with an embodiment of the present invention.Fig. 1B is a cross-sectional view taken through line B-B in Fig. 1A.Fig. 1C is a cross-sectional view taken through line C-C in Fig. 1A.Fig. 1D is a cross-sectional view taken through line D-D in Fig. 1A.Fig. 1E is a cross-sectional view taken through line E-E in Fig. 1A.2A is a view of the substrate of FIG. 1A in a processing step subsequent to the steps illustrated by FIG. 1A.Fig. 2B is a cross-sectional view taken through line 2B-2B of Fig. 2A.Figure 2E is a cross-sectional view taken through line 2E-2E of Figure 2A.FIG. 3A is a view of the substrate of FIG. 2A in a processing step subsequent to the steps illustrated by FIG. 2A.Fig. 3B is a cross-sectional view taken through line 3B-3B of Fig. 3A.4A is a view of the substrate of FIG. 3A in a processing step subsequent to the steps illustrated by FIG. 3A.Fig. 4B is a cross-sectional view taken through line 4B-4B in Fig. 4A.Figure 5A is a view of the substrate of Figure 4A in a processing step subsequent to the processing steps illustrated by Figure 4A.Fig. 5B is a cross-sectional view taken through line 5B-5B in Fig. 5A.Figure 6A is a view of the substrate of Figure 5A in a processing step subsequent to the steps illustrated by Figure 5A.Fig. 6B is a cross-sectional view taken through line 6B-6B in Fig. 6A.Fig. 6C is a cross-sectional view taken through line 6C-6C in Fig. 6A.Figure 7A is a view of the substrate of Figure 6A in a processing step subsequent to the steps illustrated by Figure 6A.Fig. 7B is a cross-sectional view taken through line 7B-7B in Fig. 7A.Fig. 7C is a cross-sectional view taken through line 7C-7C in Fig. 7A.Figure 8A is a view of the substrate of Figure 7A in a processing step subsequent to the steps illustrated by Figure 7A.Fig. 8B is a cross-sectional view taken through line 8B-8B in Fig. 8A.Figure 8C is a cross-sectional view taken through line 8C-8C of Figure 8A.Fig. 8D is a cross-sectional view taken through line 8D-8D in Fig. 8A.Figure 9A is a view of the substrate of Figure 8A in a processing step subsequent to the steps illustrated by Figure 8A.Fig. 9B is a cross-sectional view taken through line 9B-9B in Fig. 9A.Fig. 9C is a cross-sectional view taken through line 9C-9C in Fig. 
9A.Figure 9D is a cross-sectional view taken through line 9D-9D in Figure 9A.Figure 9E is a cross-sectional view taken through line 9E-9E in Figure 9A.Figure 10A is a view of the substrate of Figure 9A in a processing step subsequent to the steps illustrated by Figure 9A.Figure 10B is a cross-sectional view taken through line 10B-10B of Figure 10A.Figure 10C is a cross-sectional view taken through line 10C-10C of Figure 10A.Figure 10.1 is an enlarged cross-sectional view of a portion of the substrate of Figure 10A taken through line 10.1-10.1 of Figure 10B.Figure 11A is a view of the substrate of Figure 10A in a processing step subsequent to the steps illustrated by Figure 10A.Fig. 11B is a cross-sectional view taken through line 11B-11B in Fig. 11A.Figure 11D is a cross-sectional view taken through line 11D-11D in Figure 11A.Figure 12A is a view of the substrate of Figure 11A in a processing step subsequent to the steps illustrated by Figure 11A.Figure 12B is a cross-sectional view taken through line 12B-12B of Figure 12A.Figure 13A is a view of the substrate of Figure 12A in a processing step subsequent to the steps illustrated by Figure 12A.Figure 13B is a cross-sectional view taken through line 13B-13B of Figure 13A.Figure 14A is an enlarged view of the substrate of Figure 13A in a processing step subsequent to the steps illustrated by Figure 13A.Figure 14B is a cross-sectional view of a standard scale taken through line 14B-14B of Figure 14A.Figure 14E is a standard scale cross-sectional view taken through line 14E-14E of Figure 14A.Figure 104.1A is a schematic top plan view of a semiconductor substrate in the process in accordance with an embodiment of the present invention.Figure 104.1B is a cross-sectional view taken through line 104.1B-104.1B of Figure 104.1A.Figure 104.2A is a view of the substrate of Figure 104.1A in a processing step subsequent to the steps illustrated by Figure 104.1A.Figure 104.2B is a cross-sectional view taken through line 104.2B-104.2B of Figure 104.2A.Figure 104.2C is a cross-sectional view taken through line 104.2C-104.2C of Figure 104.2A.Figure 104.3A is a view of the substrate of Figure 104.2A in a processing step subsequent to the steps illustrated by Figure 104.2A.Figure 104.3C is a cross-sectional view taken through line 104.3C-104.3C of Figure 104.3A.Figure 104.3D is a cross-sectional view taken through line 104.3D-104.3D of Figure 104.3.Figure 104.4A is a view of the substrate of Figure 104.3A in a processing step subsequent to the steps illustrated by Figure 104.3A.Figure 104.4B is a cross-sectional view taken through line 104.4B-104.4B of Figure 104.4A.Figure 104.4D is a cross-sectional view taken through line 104.4D-104.4D of Figure 104.4A.Figure 106A is a view of the substrate of Figure 104.4A in a processing step subsequent to the steps illustrated by Figure 104.4A.Figure 106B is a cross-sectional view taken through line 106B-106B of Figure 106A.Figure 106C is a cross-sectional view taken through line 106C-106C of Figure 106A.Figure 106D is a cross-sectional view taken through line 106D-106D in Figure 106A.Figure 106.1A is a view of the substrate of Figure 106A in a processing step subsequent to the steps illustrated by Figure 106A.Figure 106.1C is a cross-sectional view taken through line 106.1C-106.1C of Figure 10.1A.Figure 106.1D is a cross-sectional view taken through line 106.1D-106.1D in Figure 10.1A.Figure 107A is a view of the substrate of Figure 106.1A in a processing step subsequent to the steps illustrated by Figure 106.1A.Figure 
107B is a cross-sectional view taken through line 107B-107B in Figure 107A.Figure 107C is a cross-sectional view taken through line 107C-107C in Figure 107A.Figure 108A is a view of the substrate of Figure 107A in a processing step subsequent to the steps illustrated by Figure 107A.Figure 108B is a cross-sectional view taken through line 108B-108B of Figure 108A.Figure 108C is a cross-sectional view taken through line 108C-108C of Figure 108A.Figure 108D is a cross-sectional view taken through line 108D-108D of Figure 108A.Figure 204.1A is a schematic top plan view of a semiconductor substrate in a process in accordance with an embodiment of the present invention.Figure 204.1B is a cross-sectional view taken through line 204.1B-204.1B of Figure 204.1A.Figure 204.2A is a view of the substrate of Figure 204.1A in a processing step subsequent to the steps illustrated by Figure 204.1A.Figure 204.2B is a cross-sectional view taken through line 204.2B-204.2B of Figure 204.2A.Figure 204.2C is a cross-sectional view taken through line 204.2C-204.2C of Figure 204.2A.Figure 204.3A is a view of the substrate of Figure 204.2A in a processing step subsequent to the steps illustrated by Figure 204.2A.Figure 204.3C is a cross-sectional view taken through line 204.3C-204.3C of Figure 204.3A.Figure 204.3D is a cross-sectional view taken through line 204.3D-204.3D of Figure 204.3A.Figure 204.4A is a view of the substrate of Figure 204.3A after the processing steps following the steps illustrated by Figure 204.3A.Figure 204.4B is a cross-sectional view taken through line 204.4B-204.4B of Figure 204.4A.Figure 204.4D is a cross-sectional view taken through line 204.4D-204.4D of Figure 204.4A.Figure 204.4E is a cross-sectional view taken through line 204.4E-204.4E of Figure 204.4A.Figure 204.5A is a view of the substrate of Figure 204.4A after the processing steps following the steps illustrated by Figure 204.4A.Figure 204.5B is a cross-sectional view taken through line 204.5B-204.5B of Figure 204.5A.Figure 204.5C is a cross-sectional view taken through line 204.5C-204.5C of Figure 204.5A.Figure 204.5D is a cross-sectional view taken through line 204.5D-204.5D of Figure 204.5A.Figure 204.5E is a cross-sectional view taken through line 204.5E-204.5E of Figure 204.5A.Figure 204.6A is a view of the substrate of Figure 204.5A in a processing step subsequent to the steps illustrated by Figure 204.5A.Figure 204.6B is a cross-sectional view taken through line 204.6B-204.6B of Figure 204.6A.Figure 204.6C is a cross-sectional view taken through line 204.6C-204.6C of Figure 204.6A.Figure 204.6D is a cross-sectional view taken through line 204.6D-204.6D of Figure 204.6A.Figure 204.6E is a cross-sectional view taken through line 204.6E-204.6E of Figure 204.6A.Figure 206A is a view of the substrate of Figure 204.6A in a processing step subsequent to the steps illustrated by Figure 204.6A.Figure 206B is a cross-sectional view taken through line 206B-206B of Figure 206A.Figure 206C is a cross-sectional view taken through line 206C-206C of Figure 206A.Figure 206D is a cross-sectional view taken through line 206D-206D of Figure 206A.Figure 206E is a cross-sectional view taken through line 206E-206E of Figure 206A.Figure 206.1A is a view of the substrate of Figure 206A in a processing step subsequent to the steps illustrated by Figure 206A.Figure 206.1C is a cross-sectional view taken through line 206.1C-206.1C of Figure 206.1A.Figure 206.1D is a cross-sectional view taken through line 206.1D-206.1D of Figure 206.1A.Figure 
206.1E is a cross-sectional view taken through line 206.1E-206.1E in Figure 206.1A.Figure 207A is a view of the substrate of Figure 206.1A in a processing step subsequent to the steps illustrated by Figure 206.1A.Figure 207B is a cross-sectional view taken through line 207B-207B in Figure 207A.Figure 207C is a cross-sectional view taken through line 207C-207C of Figure 207A.Figure 207D is a cross-sectional view taken through line 207D-207D in Figure 207A.Figure 207E is a cross-sectional view taken through line 207E-207E in Figure 207A.Figure 207.1 is an enlarged cross-sectional view of a portion of the substrate of Figure 207D taken through line 207.1-207.1 of Figure 207D.Figure 306.1A is a schematic top plan view of a semiconductor substrate in the process in accordance with an embodiment of the present invention.Figure 306.1B is a cross-sectional view taken through line 306.1B-306.1B of Figure 306.1A.Figure 306.1C is a cross-sectional view taken through line 306.1C-306.1C of Figure 306.1A.Figure 306.1D is a cross-sectional view taken through line 306.1D-306.1D of Figure 306.1A.Detailed waysEmbodiments of the present invention contemplate a method of laterally forming a vertically extending conductor between a pair of wires. In this document, "vertically extending" refers to a direction that is at least 45[deg.] away from the major surface, the substrate being processed relative to the major surface during manufacture and the major surface can be considered to define a generally horizontal direction. Further, "vertical" and "horizontal" as used herein are directions that are substantially perpendicular to each other independent of the orientation of the substrate in the three-dimensional space. In addition, in this document, unless otherwise stated, "vertical (ground)", "higher", "upper", "lower", "top", "on top of", "bottom", " Above "," "below", "below" and "below" are generally referred to in the vertical direction.In one embodiment, a memory circuit can be formed, for example, a dynamic random access memory (DRAM). In one such embodiment, the pair of wires are digital lines and the vertically extending conductor interconnects the transistor active area with the capacitor storage node of the capacitor of the memory cell. A first exemplary such embodiment is described with reference to Figures 1A through 14E. With respect to all of the figures herein, the drawings designated with the "A" suffix are schematic top plan views of a portion of a semiconductor substrate during fabrication. With respect to all of the figures herein, the figures with the suffixes "B", "C", "D", and "E" are cross-sectional views taken with respect to their corresponding numbered plan view "A" views as shown. Although primarily discussed with respect to the fabrication of DRAM circuits, the present invention contemplates methods of laterally forming any vertically extending conductor between any pair of wires, including for any memory circuit and/or non-memory circuit.Referring to Figures 1A through 1E, a portion of an example starting substrate 10 is shown and may include a semiconductor substrate. In the context of this document, the term "semiconductor substrate" or "or semiconducting substrate" is defined to mean any configuration comprising a semiconducting material, including but not limited to a bulk semiconducting material, such as a semiconducting material. 
wafer (either alone or in assemblies comprising other materials), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above. Materials may be beside, vertically inward of, or vertically outward of the materials depicted in Figures 1A through 1E. For example, other partially or wholly fabricated components of integrated circuitry may be provided somewhere above, about, or within substrate 10. Substrate 10 can include any one or more of conductive/conductor/conducting (i.e., herein electrically conductive), semiconductive, or insulative/insulator/insulating (i.e., herein electrically insulative) materials. In any event, any of the materials, regions, and structures described herein may be homogenous or non-homogenous, and regardless may be continuous or discontinuous over any material which such overlie. Further, unless otherwise stated, each material may be formed using any suitable existing or yet-to-be-developed technique, with atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion implanting being examples.

Substrate 10 includes a base substrate 12 comprising semiconductor material 13 (e.g., suitably doped monocrystalline silicon) within which trench isolation regions 14 (e.g., silicon dioxide and/or silicon nitride) have been formed. Perhaps as best seen in Figure 1A, substrate 10 may be considered as having longitudinally elongated active area islands 15 of semiconductor material 13 that are surrounded by, or are within, a plurality of interconnected trench isolation regions 14. A series of recessed access gate lines 16, having a gate insulator 17 (e.g., silicon dioxide) about their peripheries, is shown running horizontally within semiconductor material 13 and trench isolation regions 14. Any suitable electrically conductive material may be used for access gate lines 16, with an elemental metal, an alloy or mixture of two or more elemental metals, a conductive metal compound, and conductively doped semiconductive material being examples. Access gate lines 16 may be formed using any suitable existing or yet-to-be-developed technique, with or without pitch multiplication. Access gate lines 16 are capped with a dielectric material 20 (e.g., silicon dioxide and/or silicon nitride). For clarity, gate lines 16 are shown with hatching in Figure 1A, although as shown in Figures 1C through 1E the conductive material of gate lines 16 is buried within base substrate 12 and trench isolation regions 14, and beneath dielectric material 20.

The vertically outermost portions of semiconductor material 13 have been suitably doped with conductivity-enhancing impurity to be electrically conductive (e.g., a peak p-type or n-type dopant concentration of at least 1 x 10^20 atoms/cm^3), thereby forming three transistor source/drain regions 18/18.1/18 within individual active area islands 15. In an example embodiment, the longitudinally outer source/drain regions 18 in each island 15 will be electrically coupled (directly electrically coupled in one embodiment) to the storage node of a capacitor of an individual memory cell. The central source/drain region 18.1 will be electrically coupled (directly electrically coupled in one embodiment) to a digit line/bit line that passes vertically thereover.
In this document, regions/materials/components are "electrically coupled" relative to one another if, in normal operation, electric current is capable of continuously flowing from one to the other and does so predominately by movement of subatomic positive and/or negative charges when such are sufficiently generated. Another electronic component may be between, and electrically coupled to, such regions/materials/components. In contrast, when regions/materials/components are referred to as being "directly electrically coupled", no intervening electronic component (e.g., no diode, transistor, resistor, transducer, switch, fuse, etc.) is between the directly electrically coupled regions/materials/components. When a suitable voltage is applied to an access gate line 16, a conductive channel forms within semiconductor material 13 proximate gate insulator 17 such that current is capable of flowing, beneath that access line 16 within an individual active area island 15, between a longitudinally outer source/drain region 18 and the central source/drain region 18.1. Thus, in the example embodiment, each island 15 comprises two field effect transistors that share a central source/drain region 18.1.

Referring to Figures 2A/B/E, a dielectric material 21 (e.g., silicon dioxide and/or silicon nitride) has been deposited and patterned to form digit line contact openings 27 therethrough over central source/drain regions 18.1, with the outer source/drain regions 18 remaining covered by dielectric material 21. Next, wires 22, 23, 24, and 25 have been formed over dielectric material 21, with the wires being spaced from one another in at least one vertical cross-section (e.g., the vertical cross-section shown in Figure 2B). As with access gate lines 16, any suitable electrically conductive material may be used for wires 22 through 25, and such may be formed using any suitable technique. In one embodiment, wires 22 through 25 are formed to extend horizontally. Wires 22 through 25 are shown having a dielectric/insulator cap 26 (e.g., silicon nitride and/or silicon dioxide) formed thereover. For clarity in Figure 2A, the different materials beneath the conductive material of wires 22 through 25 that are shown in Figure 2B are not shown in Figure 2A or in most of the subsequent "A" figures. Formation of a vertically extending conductor (not shown in Figures 2A, 2B, and 2E) laterally between the pair of wires 23, 24 is primarily discussed. However, it will be apparent from the example embodiments that vertically extending conductors are also formed between other adjacent pairs of wires, and that additional such conductors are also formed between wires 23 and 24.

Referring to Figures 3A/B and in one embodiment, dielectric material 28 and sacrificial material 30 have been formed over the sidewalls of the pair of wires 23, 24 in the depicted vertical cross-section. In one embodiment and as shown, dielectric material 28 is of the same composition as that of dielectric material 26, as exemplified by a dashed interface between materials 26 and 28. Sacrificial material 30 may be completely removed from the substrate in subsequent processing and, if so, may comprise any of semiconductive, electrically conductive, and/or dielectric materials. Desirably, sacrificial material 30 is of different composition from that of material 28, with silicon nitride and silicon dioxide being examples for materials 28 and 30, respectively.
Another material (not shown) may be provided over sacrificial material 30, for example another non-sacrificial dielectric material of the same or different composition from that of dielectric materials 26 and/or 28. As used herein, if such materials are non-homogenous, "different composition" only requires that those portions of two stated materials that may be directly against one another be chemically and/or physically different. If such materials are non-homogenous and the two stated materials are not directly against one another, "different composition" only requires that those portions of the two materials that are closest to one another be chemically and/or physically different. In this document, a material or structure is "directly against" another material or structure when there is at least some physical touching contact of the stated materials or structures relative to one another. In contrast, "over", "on", "adjacent", "along", and "against" not preceded by "directly" encompass "directly against" as well as constructions where intervening material(s) or structure(s) result in the stated materials or structures not being in physical touching contact with one another.

Example thicknesses for materials 28 and 30 are 30 Angstroms and 50 Angstroms, respectively. In this document, "thickness" by itself (no preceding directional adjective) is defined as the mean linear distance through a given material or region perpendicularly from the closest surface of an immediately adjacent material of different composition or of an immediately adjacent region. Additionally, the various materials or regions described herein may be of substantially constant thickness or of variable thickness. If of variable thickness, thickness refers to average thickness unless otherwise indicated, and such material or region will have some minimum thickness and some maximum thickness due to the thickness being variable.

Referring to Figures 4A/4B, materials 21, 28, and 30 have been subjected to a suitable anisotropic etch to substantially remove such materials from being over horizontal surfaces, thus re-exposing source/drain regions 18.

Referring to Figures 5A/B, a conductor material 32 has been formed over substrate 12 to extend vertically between the pair of wires 23, 24, laterally over sacrificial material 30, in the depicted vertical cross-section (e.g., extending vertically along sacrificial material 30), and to span vertically across the pair of wires 23, 24. An example vertical thickness of conductor material 32 over materials 26, 28, and 30 is 500 Angstroms. Any suitable conductor material may be used, one example being conductively doped semiconductive material (e.g., conductively doped polysilicon). The vertically extending conductor material 32 extends to, and in one embodiment is directly electrically coupled to, a node location (e.g., one of source/drain regions 18) that is laterally between the pair of wires 23, 24 in the depicted vertical cross-section.

Referring to Figures 6A-C and in one embodiment, conductor material 32 has been subtractively patterned to form conductor material lines 34 (four such lines 34 being shown) having conductor material 32 that extends vertically to node locations 18 laterally between the pair of wires 23, 24, with the conductor material lines 34 spanning vertically across wires 23, 24. Any suitable subtractive patterning technique (e.g., lithographic patterning and etch) may be used, with or without pitch multiplication.
Regardless, and as shown, formation of lines 34 may expose vertically extending lateral ends/edges of sacrificial material 30 (Figure 6A).

The above processing is but one example technique of forming a conductor material (e.g., 32) that extends vertically laterally between a pair of wires (e.g., 23, 24) in at least one vertical cross-section and that spans vertically across the pair of wires (i.e., regardless of whether conductor material 32 is formed into the outline of a longitudinally extending line). A sacrificial material (e.g., 30) is laterally between the vertically extending conductor material and each of the pair of wires in the vertical cross-section (i.e., regardless of when the sacrificial material is formed). In one embodiment and as shown, the conductor material is formed to comprise a horizontally extending conductor material line (e.g., 34) that spans vertically across the pair of wires and that has its conductor material extending vertically laterally between the pair of wires.

Referring to Figures 7A-C, while conductor material lines 34 span vertically across wires 23, 24, sacrificial material 30 (not shown) has been removed from between the conductor material 32 that extends vertically to node locations 18 and each of wires 23, 24. This forms voids 35 laterally between the conductor material 32 that extends vertically to node locations 18 and each of wires 23, 24 in the depicted vertical cross-section. Such removal of the sacrificial material may occur by any suitable technique (e.g., selective wet isotropic etching of sacrificial material 30 (not shown) relative to other exposed materials). In this document, selective etching or removal is etching or removal where one material is removed relative to another stated material(s) at a ratio of at least 2.0:1. As shown, individual voids 35 are covered vertically by conductor material 32 and are open along their respective laterally opposite vertically extending ends/edges (Figure 7A). An example wet isotropic etch chemistry for selectively etching silicon dioxide (e.g., sacrificial material 30) relative to polysilicon (e.g., conductor material 32) and silicon nitride (e.g., materials 26 and 28) is dilute aqueous HF (100:1 by volume H2O:HF).

The processing described above is but one example technique of removing the sacrificial material from between the vertically extending conductor material and each of the pair of wires while the conductor material spans vertically across the pair of wires (i.e., regardless of whether the conductor material is in the form of a longitudinally extending line), thereby laterally forming a void between the vertically extending conductor material and each of the pair of wires in at least one vertical cross-section.

A dielectric material is formed laterally over (e.g., vertically along) opposing sidewalls of the vertically extending conductor material to seal voids 35 without completely filling voids 35 (voids 35 would no longer exist if completely filled). By way of example, Figures 8A through D show lateral formation of a dielectric liner 38 (e.g., silicon dioxide and/or silicon nitride) and a dielectric material 40 (e.g., silicon nitride and/or silicon dioxide) over opposing sidewalls of the vertically extending conductor material. In one embodiment and as shown, dielectric material 38/40 fills remaining space laterally between conductor material lines 34, and in one embodiment laterally between wires 23, 24.
For clarity in Figure 8A, materials 38 and 40 are shown as a combined/single material in Figure 8A. An example technique is to deposit materials 38, 40 to overfill such spaces, and to then planarize at least materials 38, 40 back to the vertically outermost surface of conductor material 32 of lines 34. Dielectric liner 38 may be deposited as a thin layer before dielectric material 40 to promote sealing and retention of voids 35 as compared to what might occur were dielectric material 40 deposited alone. For example, if material 40 were initially deposited as a spin-on liquid dielectric, such might undesirably fill all or most of voids 35.

Referring to Figures 9A-E, conductor material 32 has been removed so that it no longer spans vertically across the pair of wires 23, 24, while leaving at least some of the conductor material 32 that extends vertically to node locations 18. An example technique for doing so is a selective timed dry etch of conductor material 32 relative to other exposed materials. This may have the effect of re-exposing (unsealing) voids 35, as shown. Having the conductor material span vertically across the pair of wires, as exemplified by the processing of Figures 7A and 7B, can help prevent the conductor material that extends vertically between the pair of wires from tilting or toppling before the vertically spanning conductor material is removed. An example dry etch for selectively etching polysilicon (e.g., conductor material 32) relative to silicon nitride (e.g., materials 26 and 28) and silicon dioxide is 20 sccm SF6, 150 sccm Ar, 10 mTorr, 600 W transformer-coupled plasma (TCP) power, and 0 W bias.

Referring to Figures 10A through C and 10.1, the reopened voids 35 have been resealed with a dielectric material 42, for example by depositing dielectric material 42 (e.g., 35 Angstroms of silicon nitride and/or silicon dioxide) followed by anisotropically etching it to remove it from being generally over horizontal surfaces. Example dielectric material 42 is of the same composition as material 28, as shown by the dashed interface between materials 42 and 28. Voids 35 may be finally sealed (e.g., Figures 10B and 10.1) while exposed to a room-ambient environment, thereby forming voids 35 as air spaces or air gaps. Alternatively, such may ultimately be sealed under vacuum or in an environment comprising a gas other than air (e.g., an inert gas such as nitrogen or argon).

Referring to Figures 11A/B/D, a conductive material 46 (e.g., an elemental metal, a mixture or alloy of two or more elemental metals, and/or a conductive metal compound) has been deposited over the substrate, and in one embodiment directly against conductor material 32. A metal silicide (not shown) may be formed between materials 32 and 46 where one of such comprises silicon and the other comprises metal.

Referring to Figures 12A/B, conductive material 46 has been patterned (e.g., by photolithography and subtractive etch) back at least to the vertically outermost surfaces of materials 26, 28, and 42, as shown.

Referring to Figures 13A/B, a dielectric material 50 (e.g., silicon dioxide and/or silicon nitride) has been deposited, and openings 52 have been formed through dielectric material 50 to expose the vertically outermost surfaces of conductive material 46.

Referring to Figures 14A/B/E, a conductive storage node material 54 has been deposited to line openings 52 and has then been planarized back at least to the vertically outermost surface of dielectric material 50.
Next, a capacitor dielectric 56 and a conductive cell capacitor material 58 have been deposited, thereby forming example conventional DRAM cells of a DRAM array in accordance with but one example embodiment. For clarity, materials 54, 56, and 58 are not shown in Figure 14A.

Referring next to Figures 104.1A through 108D (100-series numbers being used to identify the figures), another example method of laterally forming a vertically extending conductor between a pair of wires is described with respect to an alternate embodiment substrate 10a. Like numerals from the above-described embodiment have been used where appropriate, with some construction differences being indicated with the suffix "a" or with different numerals. To assist the reader, a common continuing sequence of numbering has been used in the figures and description of all of the embodiments that follow the first-described embodiment of Figures 1A through 14E of substrate 10. Specifically, the last Arabic numeral (if any) immediately before a decimal point corresponds in processing sequence to that of the first-described embodiment. For example, Figures 106 and 206 correspond to the same processing sequence illustrated by Figure 6, and correspond to one another, since the last numeral in each of these before any decimal point is the numeral 6. A decimal point with Arabic numeral(s) thereafter designates alternative and sequential processing that does not correspond to processing shown in the first-described embodiment. For example, Figures 206.1, 206.2, 206.3, etc. designate processing occurring sequentially after that depicted by Figure 206, yet such does not correspond to the processing illustrated by Figure 6, or to processing occurring thereafter but before that of Figure 7, in the first-described embodiment. Thus, Figures 104.1A and 104.1B show processing immediately following that illustrated by Figures 4A and 4B in the first-described embodiment. Accordingly, sacrificial material 30 has been formed over the sidewalls of the pair of wires 23, 24, and materials 21, 28, and 30 have then been etched to substantially remove them from being over horizontal surfaces. Alternatively, and as explained with respect to additional embodiments below, sacrificial material 30 may be deposited without being etched to be substantially removed from over horizontal surfaces, or sacrificial material 30 may not be deposited at all at this point in some alternate embodiments. Regardless, Figures 104.1A and 104.1B show a first sacrificial material 62 that has been formed over substrate 12. In one embodiment, sacrificial material 30 may be considered as a second sacrificial material that has been formed over the sidewalls of the pair of wires 23, 24 in at least one vertical cross-section, regardless of when it is formed. References herein to "first" and "second" with respect to different components or materials are only for convenience of description in referring to different components, different materials, and/or same materials or components formed at different times. Accordingly, and unless otherwise indicated, "first" and "second" may be interchanged independent of relative position within the finished circuit construction and independent of sequence in fabrication. First sacrificial material 62 may be inorganic, for example comprising one of silicon dioxide or silicon nitride. Alternatively, it may be organic, for example comprising, or consisting essentially of, carbon and one or more inorganic antireflective materials.
In one embodiment, first sacrificial material 62 is predominantly carbon (i.e., at least 75 atomic percent carbon). One such example is a stack comprising, from bottom to top, an organic underlayer (900 Angstroms), elemental-form carbon (900 Angstroms), an inorganic silicon-rich antireflective coating (150 Angstroms), an organic underlayer (800 Angstroms), and an inorganic antireflective coating (200 Angstroms).

Referring to Figures 104.2A through C, first sacrificial material 62 has been subtractively patterned to form first sacrificial material lines 63 (four lines 63 being shown) that span vertically across the pair of wires 23, 24 in the depicted vertical cross-section. The first sacrificial material lines 63 comprise first sacrificial material 62 that extends vertically laterally between the pair of wires 23, 24. Lines 63 may be formed using any suitable technique, with or without pitch multiplication. In one embodiment, lines 63 have longitudinal outlines and locations corresponding to the longitudinal outlines and locations of the conductor material lines to be formed. In one embodiment, and as will be apparent from the continuing discussion of multiple different embodiments, sacrificial material 62 may also be considered as a placeholder material used at least in part in forming the conductor material lines.

Referring to Figures 104.3A/C/D, a dielectric material 64 (e.g., silicon nitride and/or silicon dioxide) has been formed over the opposing sides of the patterned sacrificial material lines 63. One technique for doing so is to initially deposit material 64 atop and over the sidewalls of patterned material 62 (e.g., lines 63), and to then remove dielectric material 64 vertically to expose the vertically outermost surfaces of patterned material 62 while leaving dielectric material 64 laterally over the sidewalls of patterned material 62.

Referring to Figures 104.4A/B/D, patterned material 62 (e.g., lines 63, neither of which is now shown) has been removed to form trenches 66. An example technique for doing so is etching. An example selective etch chemistry, where material 62 predominantly comprises carbon and material 64 is silicon nitride or silicon dioxide, is plasma O2 or plasma O2/SO2.

Referring to Figures 106A-D, trenches 66 have been filled with conductor material 32. An example technique for doing so comprises overfilling trenches 66 with conductor material 32, including forming conductor material 32 vertically over dielectric material 64 (such overfill not being shown). Thereafter, conductor material 32 may be removed vertically back from over dielectric material 64, resulting in the example construction as shown. Such processing is but one example technique of replacing first sacrificial material lines 63 (not shown) with conductor material 32 to form conductor material lines 34 that span vertically across the pair of wires 23, 24, with the conductor material lines 34 having conductor material 32 that extends vertically, in the depicted vertical cross-section, to node locations laterally between the pair of wires 23, 24.

Referring to Figures 106.1A/C/D, and in one embodiment, dielectric material 64 has been selectively etched vertically inward relative to conductor material lines 34, ideally, as shown, so as to leave the vertically outermost surface 67 of dielectric material 64 vertically above the surfaces 69 (Figure 106.1C) of the electrically conductive material of the pair of wires 23, 24 (e.g., by at least about 100 Angstroms).
In one embodiment and as shown, materials 26 and 28 are of different composition from that of dielectric material 64, and material 64 is etched selectively relative to materials 26 and 28. Alternatively, dielectric materials 26 and 64 (and perhaps 28) may be of the same composition relative to one another, with each desirably then being etched back such that dielectric material 26 remains at least about 100 Angstroms thick above wires 23 and 24 to keep their upper surfaces covered by dielectric 26 (such an alternate etch not being shown). Regardless, in one embodiment and as shown, second sacrificial material 30 is of different composition from that of dielectric material 64, and the illustrated etching of material 64 is conducted selectively relative to material 30. An example dry anisotropic etch chemistry for selectively etching silicon nitride (e.g., dielectric material 64) relative to polysilicon (e.g., conductor material 32) and silicon dioxide is plasma CH2F2/O2/Ar or plasma CH3F/O2/Ar. An example wet aqueous chemistry is 90 volume percent H3PO4.

Referring to Figures 107A-C, while conductor material lines 34 span vertically across the pair of wires, second sacrificial material 30 (not shown) has been removed (e.g., by selective wet isotropic etching relative to other exposed materials) from between the conductor material 32 that extends vertically to node locations 18 and each of wires 23, 24, thereby forming voids 35 laterally between the conductor material 32 that extends vertically to node locations 18 and each of wires 23, 24 in the depicted vertical cross-section. In one embodiment, the etching of dielectric material 64 described above, and perhaps as best seen in Figures 106.1B and 106.1C, may facilitate this removal by initially exposing more of second sacrificial material 30 (e.g., a greater vertical thickness thereof from the side) to the etching chemistry in the processing of Figures 107A-C, with second sacrificial material 30 being removed by the etching chemistry acting therethrough. Alternatively, and by way of example, dielectric material 64 may be etched in the processing depicted by Figures 106.1A/C/D only sufficiently to expose very little (not shown) of second sacrificial material 30, or only its vertically outermost surfaces, with a highly selective (i.e., at least a 10:1 removal ratio) wet isotropic etch chemistry/conditions then being used to etch material 30 selectively relative to other exposed materials.

Referring to Figures 108A-D, dielectric materials 38, 40 are then formed laterally over opposing sidewalls of the vertically extending conductor material and desirably seal voids 35. Processing may then proceed as described above or otherwise (not shown for substrate 10a), for example including removing conductor material 32 so that it no longer spans vertically across the pair of wires 23, 24 while leaving at least some of the conductor material 32 that extends vertically to the node locations.

Any other attributes or aspects as shown and/or described above may be used in the embodiments shown and described with respect to Figures 104.1A through 108D.

The above-described processing with respect to Figures 104.1A through 108D forms second sacrificial material 30 over the sidewalls of the pair of wires 23, 24 before forming first sacrificial material lines 63. Alternatively, second sacrificial material 30 may be formed over the sidewalls of the pair of wires 23, 24 after forming first sacrificial material lines 63.
In some embodiments, second sacrificial material 30 may be formed immediately before forming conductor material 32, for example as shown in the processing of a substrate 10b in Figures 204.1A through 207.1 (200-series numbers being used to identify the figures). Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "b" or with different numerals.

Figures 204.1A/B show processing immediately following that illustrated by Figures 2A and 2B, and correspond to the processing illustrated by Figures 104.1A and 104.1B, wherein a first sacrificial material 62 has been formed over substrate 12. However, substrate 10b differs from substrate 10a in that dielectric material 28 and second sacrificial material 30 have not been deposited (and therefore cannot have been etched) before deposition of material 62 in Figures 204.1A/B.

Figures 204.2A/B/C, 204.3A/C/D, and 204.4A/B/D/E next show subsequent processing corresponding, respectively, to that depicted by Figures 104.2A/B/C, 104.3A/C/D, and 104.4A/B/D. For additional clarity, an extra "E" cross-section is added at some points in the 200-series figures as compared to the 100-series figures, beginning with Figure 204.4E. For purposes of the continuing discussion, dielectric material 64 may be considered as having opposing sides 65 (e.g., laterally opposing) in Figures 204.4D and 204.4E.

Referring to Figures 204.5A-E, second sacrificial material 30 (and dielectric material 28) has been formed over the opposing sides 65 of dielectric material 64. This forms shallow openings 59 vertically above wires 22 through 25 and deep openings 61 laterally between wires 22 through 25. As shown, shallow openings 59 and deep openings 61 join/interconnect along the B-B section line at their respective longitudinal edges above dielectric material 26. Before forming materials 28 and 30, material 64 (and perhaps material 26) in the construction of Figures 204.3A/C/D may be selectively isotropically wet etched relative to other exposed materials to widen/enlarge (not shown) what become openings 59 and 61, particularly sideways in the lateral (e.g., y) direction.

Referring to Figures 204.6A through E, materials 28 and 30 have been subjected to a suitable anisotropic etch to substantially remove such materials from being over horizontal surfaces, analogous to the processing depicted by Figures 4A and 4B.

Referring to Figures 206A-E, conductor material 32 has been deposited to overfill openings 59 and 61 and has then been planarized back, thereby forming conductor material lines 34b, analogous to the processing shown and described above with respect to Figures 106A/B/D.

Referring to Figures 206.1A/C/D/E, and in one embodiment, dielectric materials 64 and 28 have been selectively anisotropically etched relative to conductor material lines 34b and second sacrificial material 30 such that, for example, the vertically outermost surface 67 of dielectric material 64 is left higher than the corresponding surfaces 69 of wires 23 and 24, as described above with respect to the processing of Figures 106.1A/C/D.

Referring to Figures 207A-E and 207.1, second sacrificial material 30 (not shown) has been removed (e.g., by selective wet isotropic etching relative to other exposed materials) to form voids 35 as described above. Moreover, in another vertical cross-section (e.g., the vertical cross-section of Figure
207D) that is orthogonal to the first vertical cross-section (e.g., the vertical cross-section of Figure 207B), second sacrificial material 30 has been removed from between the conductor material 32 that extends vertically to node location 18 and the dielectric material 64 over each of the opposing sides 65 relative to the vertically extending conductor material 32, thereby laterally forming second voids 75 between conductor material 32 and the dielectric materials 64 over each of the opposing sides 65 in that other vertical cross-section. In one embodiment, voids 35 may be considered as first voids and voids 75 as second voids, with such first and second voids joining together to form a single void that surrounds a portion of the conductor material 32 extending vertically to node location 18, perhaps as best seen in the enlarged view of Figure 207.1. Due to the additional deposition of second sacrificial material 30 (and material 28) over sides 65 of dielectric material 64 (Figures 204.5A-E), the example conductor material 32 as seen in Figures 207A/D/E and 207.1 is thinner (e.g., in the depicted "y" direction) than the same conductor material in the above embodiments. The conductor material may be made wider in the lateral/"y" direction by the optional isotropic wet etching of material 64 mentioned above before forming materials 28 and 30 in Figures 204.5A-E.

Processing may then proceed as described above or otherwise (not shown for substrate 10b), for example comprising laterally forming dielectric materials 38, 40 over opposing sidewalls of the vertically extending conductor material, followed by removal of conductor material 32 so that it no longer spans vertically across the pair of wires 23, 24 while leaving at least some of the conductor material 32 that extends vertically to the node location.

Any other attributes or aspects as shown and/or described above may be used in the embodiments of Figures 204.1A through 207.1.

Referring next to Figures 306.1A-D (300-series numbers being used to identify the figures), another example method of laterally forming a vertically extending conductor between a pair of wires is described with respect to an alternate embodiment substrate 10c. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "c" or with different numerals. Such figures show processing immediately following that shown by Figures 106A-D and 206A-E, and in lieu of the processing shown by Figures 106.1A/C/D and 206.1A-E. Figures 306.1A through D show processing with respect to substrate 10a of Figures 106A-D, although the same processing may be conducted with respect to substrate 10b of Figures 206A/C/D/E. In substrate 10c, sacrificial material 30 (not shown) and dielectric material 64 (not shown) as shown in Figures 106A/C/D/E are of the same composition, and all such materials have thereby been removed in a single/same selective etch (e.g., wet isotropic) of such materials relative to other exposed materials, again forming voids 35. If sacrificial material 30 is deposited and anisotropically etched immediately before depositing conductor material 32, voids 75 (not shown) will also be formed as described above in connection with Figures 207A-E.
Subsequent processing (not shown for substrate 10c) may occur as described above.

Any other attributes or aspects as shown and/or described above may be used in the embodiment of Figures 306.1A through D.

Conclusion

In some embodiments, a method of laterally forming a vertically extending conductor between a pair of wires comprises forming a pair of wires that are spaced from one another in at least one vertical cross-section. A conductor material is formed to extend vertically laterally between the pair of wires in the at least one vertical cross-section and to span vertically across the pair of wires. A sacrificial material is laterally between the vertically extending conductor material and each of the pair of wires in the at least one vertical cross-section. The sacrificial material is removed from between the vertically extending conductor material and each of the pair of wires while the conductor material spans vertically across the pair of wires, to form a void laterally between the vertically extending conductor material and each of the pair of wires in the at least one vertical cross-section.

In some embodiments, a method of laterally forming a vertically extending conductor between a pair of wires comprises forming a pair of wires that are spaced from one another in at least one vertical cross-section. A sacrificial material is formed over the sidewalls of the pair of wires in the at least one vertical cross-section. A conductor material is formed to extend vertically laterally between the pair of wires, laterally over the sacrificial material, in the at least one vertical cross-section, and to span vertically across the pair of wires. The vertically extending conductor material extends in the at least one vertical cross-section to be electrically coupled to a node location that is laterally between the pair of wires. The conductor material is subtractively patterned to form conductor material lines having conductor material that extends vertically to the node location laterally between the pair of wires, the conductor material lines spanning vertically across the pair of wires. The sacrificial material is removed from between the conductor material that extends vertically to the node location and each of the pair of wires while the conductor material lines span vertically across the pair of wires, to form a void laterally between the conductor material that extends vertically to the node location and each of the pair of wires in the at least one vertical cross-section. After the void is formed, the conductor material is removed so that it does not span vertically across the pair of wires while leaving at least some of the conductor material that extends vertically to the node location.

In some embodiments, a method of forming a vertically extending conductor between a pair of wires comprises forming a pair of wires that are spaced from one another in at least one vertical cross-section. A first sacrificial material line spanning vertically across the pair of wires is formed in the at least one vertical cross-section. The first sacrificial material line comprises a first sacrificial material that extends vertically laterally between the pair of wires. A dielectric material is formed over opposing sides of the first sacrificial material line. A second sacrificial material is formed over the sidewalls of the pair of wires in the at least one vertical cross-section. The first sacrificial material line is replaced with a conductor material to form a conductor material line that spans vertically across the pair of wires.
The conductor material line has conductor material that extends vertically in the at least one vertical cross-section to a node location that is laterally between the pair of wires. The second sacrificial material is removed from between the conductor material that extends vertically to the node location and each of the pair of wires while the conductor material line spans vertically across the pair of wires, to form a void laterally between the conductor material that extends vertically to the node location and each of the pair of wires in the at least one vertical cross-section. After the void is formed, the conductor material is removed so that it does not span vertically across the pair of wires while leaving at least some of the conductor material that extends vertically to the node location.

In some embodiments, a method comprises forming first and second wires that extend generally parallel to each other with a space therebetween. The first wire includes a first side surface that faces the second wire. The second wire includes a second side surface that faces the first wire. A first sacrificial material is formed such that the first sacrificial material includes a first portion covering a first portion of the first side surface of the first wire, and a second sacrificial material is formed such that the second sacrificial material includes a second portion covering a second portion of the second side surface of the second wire. A conductor material is formed to continuously traverse the first wire and the second wire such that the conductor material comprises a conductive portion that fills a portion of the space between the first portion of the first sacrificial material and the second portion of the second sacrificial material. The first portion of the first sacrificial material and the second portion of the second sacrificial material are removed while the conductor material remains continuously across the first wire and the second wire, to form a first air gap between the conductive portion of the conductor material and the first portion of the first side surface of the first wire, and a second air gap between the conductive portion of the conductor material and the second portion of the second side surface of the second wire.

In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means disclosed herein comprise example embodiments. The claims are therefore to be afforded full scope as literally worded, and to be appropriately interpreted in accordance with the doctrine of equivalents.
A Micro Electro-Mechanical System (MEMS) varactor (100, 200) having a bottom electrode (116) formed over a substrate (112) and a dielectric material (130) disposed over the bottom electrode (116). A pull-down electrode (122) is formed over spacer (120) and the dielectric material (130). The MEMS varactor (100, 200) is adapted to operate in a stiction mode, with at least a portion of pull-down electrode (122) in contact with dielectric material (130). The MEMS varactor (100, 200) has a high Q, large tuning range, and high sensitivity. |
What is claimed is: 1. A Micro Electro-Mechanical System (MEMS) varactor, comprising:a bottom electrode formed over a substrate; a dielectric material disposed over said bottom electrode; a spacer Proximate said bottom electrode; and a pull-down electrode over said spacer and said dielectric material, wherein said MEMS varactor is adapted to operate in a stiction mode, wherein said pull-down electrode maintains contact with said dielectric material over a range of voltage signals, and wherein said range of voltage signals is from approximately 3 V to 10 V, wherein a capacitance in the range of 13 to 25 pF is produceable by said MEMS varactor in response to said range of voltage signals. 2. The MEMS varactor according to claim 1 wherein a first voltage signal applied across said bottom electrode and said pull-down electrode produces a first capacitance.3. The MEMS varactor according to claim 2 wherein said a second voltage signal applied across said bottom electrode and said pull-down electrode produces a second capacitance.4. The MEMS varactor according to claim 1 further comprising an insulating layer disposed over said substrate beneath said bottom electrode, wherein a distance D1 is defined between said insulating layer and said pull-down electrode.5. The MEMS varactor according to claim 4 wherein said distance D1 is approximately 0.5-2.0 micrometers.6. The MEMS varactor according to claim 4 wherein said distance D1 is greater than 2.0 micrometers.7. The MEMS varactor according to claim 4 wherein said distance D1 is approximately 0.5-2.0 micrometers.8. The MEMS varactor according to claim 4 wherein said distance D1 is greater than 2 micrometers.9. A Micro Electro-Mechanical System (MEMS) varactor, comprising:a bottom electrode formed over a substrate; a dielectric material disposed over said bottom electrode; a spacer proximate said bottom electrode; and a pull-down electrode over said spacer and said dielectric material, wherein said MEMS varactor is adapted to operate in a stiction mode, wherein a voltage signal applied across said bottom electrode and said pull-down electrode produces a capacitance, wherein said pull-down electrode maintains contact with said dielectric material over a range of voltage signals in said stiction mode, and wherein said range of voltage signals is from approximately 3 V to 10 V, wherein a capacitance in the range of 13 to 25 pF is produceable by said MEMS varactor in response to said range of voltage signals. 10. The MEMS varactor according to claim 9 further comprising an insulating layer disposed over said substrate beneath said bottom electrode, wherein a distance D1 is defined between said insulating layer and said pull-down electrode.11. The MEMS varactor according to claim 10 wherein said distance D1 is approximately 0.5-2.0 micrometers.12. The MEMS varactor according to claim 10 wherein said distance D1 is greater than 2.0 micrometers.13. 
A method of operating a MEMS varactor having a bottom electrode formed on a substrate, a dielectric material disposed over the bottom electrode, a spacer formed on the substrate supporting a pull-down electrode, wherein a voltage applied across the bottom electrode and the pull-down electrode responsively changes the capacitance of the varactor, comprising:applying a voltage signal across the bottom electrode and the pull-down electrode to produce a Predetermined capacitance across said bottom and pull-down electrode, wherein at least a portion of said pull-down electrode is adapted to contact said dielectric material during a stiction mode, wherein said applying a voltage signal comprises applying a voltage of approximately 3 V to 10 V to produce a varactor capacitance in the range of 13 to 25 pF. 14. The method according to claim 13 wherein the area of said pull-down electrode portion in contact with said dielectric material varies responsively to changes in said voltage signal. |
TECHNICAL FIELD

This invention relates generally to integrated circuits, and more particularly to Micro Electro-Mechanical System (MEMS) devices.

BACKGROUND OF THE INVENTION

In the telecommunications industry, the demand for lightweight portable devices such as personal computing devices, Personal Digital Assistants (PDA's) and cellular phones has driven designers to reduce the size of existing components. A Q value is a ratio of the power stored in a device to the power dissipated in a device. Due to the need for Q values beyond the capabilities of conventional IC technologies, board-level passive components continue to occupy a substantial portion of the overall area in transceivers of handheld telecommunications equipment, presenting a bottleneck against further miniaturization. For example, discrete components currently occupy approximately 50% of the space in cellular phones.

Recently, MEMS devices including resonators, filters, and switches have been developed that offer an alternative set of strategies for transceiver miniaturization and improvement. MEMS devices are high-Q, chip-level, lower power replacements for board-level components that greatly decrease space and area requirements.

One such MEMS device is an RF switch for switching RF signals, shown in a cross-sectional view in FIG. 1. RF drumhead capacitive MEMS switch 10, disclosed by Goldsmith et al. in U.S. Pat. No. 5,619,061, comprises an insulator 14 such as SiO2 deposited over a substrate 12, which may comprise silicon, for example. A bottom electrode 16 is formed on insulator 14 and a dielectric 18 is formed over bottom electrode 16. Capacitor dielectric 18 typically comprises Si3N4, Ta2O5 or other suitable dielectric materials, for example. An active element comprising a thin metallic membrane 22 is suspended away from electrode 16 by an insulating spacer 20. Membrane 22, which serves as a top electrode, is movable through the application of a DC electrostatic field between membrane 22 and bottom electrode 16. Membrane 22, dielectric 18 and bottom electrode 16 comprise a metal-dielectric-metal capacitor when the MEMS switch 10 is in the "on" position, shown in FIG. 2. In the "off" position shown in FIG. 1, with no voltage applied to membrane 22 and bottom electrode 16, the capacitance value is at a minimum. MEMS switches 10 have low insertion loss, good isolation, high power handling, and very low switching and static power requirements.

A MEMS switch 10 may be designed for use as a varactor. A varactor is a discrete electronic component, usually comprising a P-N junction semiconductor, designed for microwave frequencies, in which the capacitance varies with the applied voltage. Varactors are sometimes referred to as tunable capacitors. Varactors are used in frequency up and down conversion in cellular phone communication, for example. Existing varactors are usually p-n diodes specifically designed for operation in the reverse bias regime, where the capacitance (CJ) of the depletion region is varied to set the frequency (ω0) of operation, as reflected in Equation 1:

ω0 ≈ 1/(CJ*RS*RP)^(1/2) (Equation 1)

where resistances RP and RS are the parallel and series resistances of the diode, respectively. Some primary requirements of a varactor are that it have a high quality factor (Q), for increased stability to thermal variations and noise spikes, and a large linear tuning range (TR). High-performing varactors are usually made of GaAs.
Unfortunately, these devices use a different processing technology that is not amenable to integration into standard Si-CMOS process.MEMS devices offer a means by which high Q large tuning range varactors can be integrated in higher level devices such as voltage controlled oscillators and synthesizers using the current Si-CMOS process. The drumhead capacitive switch 10 shown in FIG. 1 may be designed to produce a MEMS varactor. The voltage across the electrodes is varied to pull down and up membrane 22, which varies the distance Dair between membrane 22 and dielectric 18, which changes the capacitance of the device 10 accordingly.A problem in MEMS devices is stiction, which is the unintentional adhesion of MEMS device 10 surfaces. Stiction may arise from the strong interfacial adhesion present between contacting crystalline microstructure surfaces. The term stiction also has evolved to often include sticking problems such as contamination, friction driven adhesion, humidity driven capillary forces on oxide surface, and processing errors. Stiction is particularly a problem in current designs of MEMS varactors, due to the membrane 22 possibly adhering to dielectric 18, resulting in device 10 failure, either temporarily or permanently. To prevent stiction, material and physical parameters, and voltage signal levels of the varactor are designed to avoid contact of membrane 22 with dielectric 18. Coatings such as Teflon-like materials that resist stiction are frequently applied over dielectric 18.SUMMARY OF THE INVENTIONThe present invention achieves technical advantages as a MEMS varactor designed to operate in a stiction mode. The pull-down electrode or top membrane maintains contact with the underlying dielectric covering the bottom electrode during operation of the varactor. As the voltage across the pull-down electrode and the bottom electrode is varied, the area of the pull-down electrode contacting the dielectric is varied, which varies the capacitance.Disclosed is a MEMS varactor, comprising a bottom electrode formed over a substrate, a dielectric material disposed over the bottom electrode, and a spacer proximate the bottom electrode. A pull-down electrode is disposed over the spacer and the dielectric material, wherein the varactor is adapted to operate in a stiction mode.Also disclosed is a method of manufacturing a MEMS varactor, comprising depositing an insulator on a substrate, forming a bottom electrode on the insulator, and depositing a dielectric material over the bottom electrode. A spacer is formed over the insulator, and a pull-down electrode is formed over the spacer and the dielectric material, wherein the varactor is adapted to operate in a stiction mode.Further disclosed is a method of operating a MEMS varactor, comprising applying a voltage across the bottom electrode and the pull-down electrode to produce a predetermined capacitance across the bottom and pull-down electrode, wherein at least a portion of the pull-down electrode is adapted to contact the dielectric material during operation in a stiction mode.Advantages of the invention include solving the stiction problems of the prior art by providing a varactor adapted to operate in a stiction mode. The present MEMS varactor is a high Q varactor having a large tuning range. The distance between the dielectric and the membrane may be increased in accordance with the present invention, allowing for a larger tuning range and providing more sensitivity to a change in voltage. 
A wider range of voltages and capacitances is available with the present MEMS varactor design. Furthermore, the use of Teflon-like coatings on dielectric to prevent stiction of membrane is not required, as in some prior art designs. A wider variety of dielectric materials may be used for dielectric than in the prior art because there is no need for concern about stiction of the membrane to the dielectric. The invention provides an extended tuning range that is not possible with only an air gap for the capacitive medium.BRIEF DESCRIPTION OF THE DRAWINGSThe above features of the present invention will be more clearly understood from consideration of the following descriptions in connection with accompanying drawings in which:FIG. 1 illustrates a cross-sectional view of a prior art MEMS capacitive RF switch;FIG. 2 illustrates a cross-sectional view of the MEMS varactor of the present invention adapted to operate in a stiction mode, with the majority of the membrane above the bottom electrode in contact with the dielectric;FIG. 3 illustrates a top view of the MEMS varactor shown in FIG. 2;FIG. 4 shows a model schematic representation of the MEMS varactor having a capacitance across the membrane and the bottom electrode;FIG. 5 illustrates a capacitance to voltage relationship of the MEMS varactor output capacitance over a range of voltages;FIG. 6 illustrates a cross-sectional view of the present MEMS varactor with a portion of the membrane in contact with the dielectric;FIG. 7 illustrates a top view of the MEMS varactor shown in FIG. 6; andFIG. 8 illustrates a cross-sectional view of the present MEMS varactor with a minimal portion of the membrane in contact with the dielectric and having an increased spacer height, increasing the tuning range of the varactor.Corresponding numerals and symbols in the different figures refer to corresponding parts unless otherwise indicated.DETAILED DESCRIPTION OF PREFERRED EMBODIMENTSA cross-sectional view of the MEMS varactor 100 of the present invention is shown in FIG. 2. MEMS varactor 100 comprises an insulator 114 deposited over a substrate 112, and a bottom electrode 116 formed on insulator 114. A dielectric 130 is formed over bottom electrode 116 to eliminate the possibility of electrode/electrode fusion and for creating a capacitance that is greater than possible with air. Spacer 120 are formed over the insulator 114 for supporting membrane 122 a distance D1 above insulator 114. Distance D1 may be, for example, 0.5-2.0 micrometers. Membrane 122 is also referred to herein as a pull-down electrode or top electrode. Membrane 122 may comprise holes 124 which are used to remove a temporary filler material (not shown) from cavity 126. Membrane 122 is movable through the application of a DC electrostatic field across membrane 122 and bottom electrode 116, similar to the operation of the MEMS RF switch 10 previously discussed.The MEMS varactor 100 of the present invention is adapted to operate in a stiction mode. A stiction mode is defined herein as an active operating mode during which a voltage is applied across membrane 122 and bottom electrode 116, and the membrane 122 maintains contact with at least a portion of dielectric 130 covering bottom electrode 116.The amount of area or portion 132 of membrane 122 that contacts dielectric material 130 is varied to change the capacitance C. The contact portion 132 is varied by changing voltage V across electrodes 122 and 116. 
In the stiction mode, the maximum capacitance Cmax is achieved when membrane 122 is biased with a voltage V such that membrane 122 makes complete contact at portion 132 with dielectric 130, as shown in FIG. 2. Capacitance Cmax may be expressed by Equation 2:

Cmax ≈ εdie*A/Ddie (Equation 2)

where A is the cross-sectional area 132 of the electrode 122 in contact with dielectric 130, εdie is the dielectric constant of the dielectric 130 covering bottom electrode 116, and Ddie is the thickness of the dielectric 130. The capacitance is reduced by decreasing the membrane 122/dielectric 130 contact area, shown in FIGS. 6-8, which is accomplished by changing the voltage V. The relationship of capacitance C to area A, where A is varied by changing the voltage V, is a linear relationship. The minimum capacitance Cmin, expressed in Equation 3, occurs when the membrane 122 is not contacting the dielectric 130:

1/Cmin ≈ 1/(εair*A/Dair) + 1/(εdie*A/Ddie) (Equation 3)

where εair is the dielectric constant of air and Dair is the thickness of the air space between membrane 122 and the top of dielectric 130. The tuning range TR is reflected by Equation 4:

TR = (Cmax - Cmin)/Cmin * 100% (Equation 4)

The tuning range of the MEMS varactor may be extended or reduced by changing the material parameters of Equations 2 and 3, e.g., the material of dielectric 130 and the distances Dair and Ddie.

FIG. 3 illustrates a top view of the MEMS varactor shown in FIG. 2, with a circular region 132 of membrane 122 in contact with dielectric 130 in a maximum amount, giving a maximum capacitance value Cmax for the varactor 100. FIG. 4 shows a model schematic representation of the MEMS varactor 100 having a capacitance C between the membrane 122 and the bottom electrode 116 for a voltage signal V input to either electrode 122, 116 of the varactor 100. FIG. 5 illustrates the capacitance to voltage relationship of the MEMS varactor 100 over a range of voltages; for example, a range of voltage signals from 3 to 10 volts produces a capacitance ranging from 13 to 25 pF in the stiction mode. These voltages and capacitances are exemplary and may vary with air gap distance D1 and dielectric material properties. FIG. 6 illustrates a cross-sectional view of the present MEMS varactor with a portion 136 of membrane 122 in contact with dielectric 130, membrane portion 136 being smaller than membrane portion 132 shown in FIG. 2. FIG. 7 illustrates a top view of the MEMS varactor 100 shown in FIG. 6, with circular portion 136 of membrane 122 in contact with dielectric 130. FIG. 8 illustrates a cross-sectional view of an alternate embodiment of the present MEMS varactor 200 with a minimal portion 138 of membrane 122 in contact with dielectric 130 and having an increased spacer 120 height D2, increasing the tuning range of the varactor 200. Increasing the distance D2 to greater than 2 micrometers also provides more sensitivity to a change in voltage signal V.
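By way of a rough numerical sketch only, Equations 2 through 4 may be evaluated in a few lines of Python. The contact area, dielectric thickness, air gap, and relative permittivity used below are assumptions chosen for illustration; they are not values specified by the embodiments described herein.

# Numerical sketch of Equations 2-4 (all geometry/material values are assumed).
EPS0 = 8.854e-12          # vacuum permittivity, F/m

area   = (200e-6) ** 2    # contact area A, m^2 (assumed 200 um x 200 um membrane contact)
d_die  = 100e-9           # dielectric thickness Ddie, m (assumed)
d_air  = 1.5e-6           # air gap Dair, m (assumed; within the 0.5-2.0 um range mentioned)
er_die = 7.5              # assumed relative permittivity for a Si3N4-like dielectric
er_air = 1.0              # relative permittivity of air

c_max = EPS0 * er_die * area / d_die              # Equation 2: full-contact capacitance
c_air = EPS0 * er_air * area / d_air              # air-gap capacitance
c_min = 1.0 / (1.0 / c_air + 1.0 / c_max)         # Equation 3: series air gap + dielectric
tuning_range = (c_max - c_min) / c_min * 100.0    # Equation 4, in percent

print(f"Cmax = {c_max * 1e12:.1f} pF")            # about 26.6 pF with these assumptions
print(f"Cmin = {c_min * 1e12:.3f} pF")            # about 0.23 pF
print(f"TR   = {tuning_range:.0f} %")

With these assumed values, the full-contact capacitance is in the tens of picofarads while the non-contact capacitance is dominated by the much thinner-permittivity air gap, which illustrates why varying the stiction-mode contact area yields such a wide capacitance and tuning range.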
A first voltage signal applied across the bottom electrode and the pull-down electrode produces a first capacitance, and a second voltage signal applied across the bottom electrode and the pull-down electrode produces a second capacitance, where the first and second voltages are different.

Although preferably the pull-down electrode 122 maintains contact with the dielectric material 130 over a range of voltage signals, the varactor 100, 200 may also be operated in a non-stiction mode in an alternate embodiment. In this embodiment, the tuning range of the varactor may be increased if the membrane starts at the undeformed (no voltage signal applied) position and then is deflected so that it makes contact with the bottom electrode. The height of the membrane over the air gap is varied until it makes contact partially, and then fully, with the bottom electrode. In this embodiment, a larger tuning range is achievable. However, the varactor may not be reliably operated across the entire tuning range if the membrane permanently sticks, in which case the varactor would then operate only in the stiction mode.

The invention also includes a method of manufacturing a MEMS varactor 100, 200 comprising depositing an insulator 114 on substrate 112, forming bottom electrode 116 on insulator 114 and depositing dielectric material 130 over bottom electrode 116. Spacers 120 are formed over insulator 114, and pull-down electrode 122 is formed over spacers 120 and dielectric material 130, wherein the varactor 100, 200 is adapted to operate in a stiction mode. At least a portion 132, 136, 138 of pull-down electrode 122 contacts dielectric material 130 in the stiction mode.

The invention also includes a method of operating a MEMS varactor 100, 200. The method comprises applying a voltage signal V across bottom electrode 116 and pull-down electrode 122 to produce a predetermined capacitance C across the bottom 116 and pull-down 122 electrodes, wherein at least a portion 132, 136, 138 of pull-down electrode 122 is adapted to contact dielectric material 130 during a stiction mode.

The novel MEMS varactor 100, 200 of the present invention achieves technical advantages by providing a high Q varactor having a large tuning range and increased sensitivity. MEMS varactor 100, 200 solves the stiction problems of prior art MEMS varactors by being adapted to operate in a stiction mode. The distance D1, D2 between the dielectric and the membrane may be increased in accordance with the present invention, allowing for a larger tuning range. A wider range of voltages and capacitances is available with the present MEMS varactor design compared with the prior art. Furthermore, the use of Teflon-like coatings on dielectric 130 to prevent stiction of membrane 122 is not required as in some prior art designs. A wider variety of dielectric materials may be used for dielectric 130 than in the prior art because there is no need for concern about stiction of the membrane 122 to the dielectric 130. The invention provides an extended tuning range that is not possible with only an air gap for the capacitive medium. Furthermore, the MEMS varactor 100, 200 preferably comprises silicon rather than GaAs, and may comprise metals that maintain low insertion loss and good isolation of the MEMS varactor 100, 200.

While the invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense.
Various modifications in combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. For example, although membrane portions 132 and 136 in contact with dielectric 130 are shown in a top view as being circular, other shapes for the contact membrane portions 132, 136 are anticipated, for example, square, oval, rectangular, or any other geometrical shape. The MEMS varactor 100 may be designed to also operate in a non-stiction mode, wherein membrane 122 is not in contact with dielectric 130, as well as the stiction mode described herein. It is therefore intended that the appended claims encompass any such modifications or embodiments. |
Methods and apparatus relating to location-based haptic direction finding are described. In an embodiment, logic (e.g., included in a mobile computing device) redirects one or more navigational hints to one or more trembler devices instead of a display device and/or speakers of the mobile computing device in response to a request to provide haptic directional cues. Other embodiments are also disclosed and claimed. |
CLAIMS

1. An apparatus comprising: logic, the logic at least partially comprising hardware logic, to redirect one or more navigational hints to one or more trembler devices instead of a display device of a mobile computing device in response to a request to provide haptic directional cues, wherein the mobile computing device is to comprise the logic.
2. The apparatus of claim 1, wherein the one or more trembler devices are to communicate with the mobile device wirelessly.
3. The apparatus of claim 2, wherein the wireless communication is to be provided via one or more of Bluetooth™ communication and Near Field Communication (NFC).
4. The apparatus of claim 1, wherein the one or more navigational hints are to be received from a navigation or mapping provider.
5. The apparatus of claim 1, wherein the logic is to cause the display device to enter a low power consumption state in response to selection of the haptic directional cues.
6. The apparatus of claim 1, wherein the mobile computing device is to comprise one of: a smartphone, a tablet, a UMPC (Ultra-Mobile Personal Computer), a laptop computer, an Ultrabook™ computing device, and a wearable device.
7. The apparatus of claim 6, wherein the wearable device is to include one of a smart watch, a helmet, a jacket, a shirt, a pair of pants, a pair of shorts, a shoe, or glasses.
8. The apparatus of claim 1, wherein the logic is to redirect the one or more navigational hints to one or more trembler devices instead of one or more speakers in response to selection of the haptic directional cues.
9. The apparatus of claim 8, wherein the logic is to cause audio logic coupled to the one or more speakers to enter a low power consumption state.
10. The apparatus of claim 1, wherein a processor, having one or more processor cores, is to comprise the logic.
11. The apparatus of claim 1, wherein one or more of the logic, a processor having one or more processor cores, and memory are on a single integrated circuit die.
12. A method comprising: redirecting, at logic in a mobile computing device, one or more navigational hints to one or more trembler devices instead of a display device of the mobile computing device in response to a request to provide haptic directional cues.
13. The method of claim 12, further comprising the one or more trembler devices communicating with the mobile device wirelessly.
14. The method of claim 13, wherein the wireless communication is provided via one or more of Bluetooth™ communication and Near Field Communication (NFC).
15. The method of claim 12, further comprising receiving the one or more navigational hints from a navigation or mapping provider.
16. The method of claim 12, further comprising causing the display device to enter a low power consumption state in response to selection of the haptic directional cues.
17. The method of claim 12, further comprising redirecting the one or more navigational hints to one or more trembler devices instead of one or more speakers in response to selection of the haptic directional cues.
18. The method of claim 17, further comprising causing audio logic coupled to the one or more speakers to enter a low power consumption state.
19. A system comprising: a mobile computing device having memory to store data; logic to redirect one or more navigational hints to one or more trembler devices instead of a display device of the mobile computing device in response to a request to provide haptic directional cues, wherein the mobile computing device is to comprise the logic.
20. The system of claim 19, wherein the one or more trembler devices are to communicate with the mobile device wirelessly.
21. The system of claim 20, wherein the wireless communication is to be provided via one or more of Bluetooth™ communication and Near Field Communication (NFC).
22. The system of claim 19, wherein the one or more navigational hints are to be received from a navigation or mapping provider.
23. The system of claim 19, wherein the logic is to cause the display device to enter a low power consumption state in response to selection of the haptic directional cues.
24. A computer-readable medium comprising one or more instructions that when executed on a processor configure the processor to perform one or more operations of any one of claims 12 to 17.
25. An apparatus comprising means to perform a method as set forth in any one of claims 12 to 17. |
LOCATION BASED HAPTIC DIRECTION FINDING

FIELD

The present disclosure generally relates to the field of electronics. More particularly, an embodiment relates to techniques for location based haptic direction finding.

BACKGROUND

Portable computing devices are gaining popularity, in part, because of their decreasing prices and increasing performance. Another reason for their increasing popularity may be due to the fact that some portable computing devices may be operated at many locations, e.g., by relying on battery power. However, as more functionality and features are integrated into portable computing devices, the need to reduce power consumption becomes increasingly important, for example, to maintain battery power for an extended period of time.

BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is provided with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.

Figs. 1 and 4-5 illustrate block diagrams of embodiments of computing systems, which may be utilized to implement various embodiments discussed herein.
Fig. 2 illustrates a new usage model, according to an embodiment.
Fig. 3 illustrates a flow diagram of a method for location based haptic direction finding, according to an embodiment.
Fig. 6 illustrates a block diagram of an SOC (System On Chip) package in accordance with an embodiment.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments. Further, various aspects of embodiments may be performed using various means, such as integrated semiconductor circuits ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure, reference to "logic" shall mean either hardware, software, firmware, or some combination thereof.

Some mobile computing devices (such as smartphones, tablets, etc.) rely on sensor data to enhance the user experience for a range of applications. One such application is navigation (e.g., based on information provided by a GPS (Global Positioning System) sensor). However, the embodiments are not limited to GPS based implementations and might also respond to location based services, such as cues at thresholds to stores (e.g., "tap to pay" at entries), "Boarding now" notifications, etc.

Navigation applications generally rely on a display device to indicate where the device is located on a map and any other information such as directional arrows, etc. However, the display in a mobile device can be a significant power consumer. Also, requiring that users observe information on a display for navigational guidance may be distracting, e.g., causing a user to walk into a hazardous situation (and not even applicable in the case of a user with a visual disability). Further, navigational guidance via audio cues may not always work depending on a user's hearing ability, surrounding noise, etc.

To this end, some embodiments provide techniques for location based haptic direction finding.
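The following is a minimal sketch, in Python, of the redirection idea summarized above: when haptic directional cues are requested, navigational hints are routed to trembler logic instead of the display and speakers, which may then enter a low power state. The class and method names are hypothetical and are not part of any real navigation API.

```python
# Minimal sketch of redirecting navigational hints away from the display and
# speakers when haptic directional cues are requested. All names are
# hypothetical, used here only to illustrate the flow of control.

class NavigationOutputRouter:
    def __init__(self, display, audio, trembler_logic):
        self.display = display
        self.audio = audio
        self.trembler_logic = trembler_logic
        self.haptic_cues_enabled = False

    def request_haptic_cues(self):
        """User selects haptic directional cues instead of visual/audio cues."""
        self.haptic_cues_enabled = True
        # The display and audio logic are no longer needed for navigation
        # output, so they may enter a low power consumption state.
        self.display.enter_low_power_state()
        self.audio.enter_low_power_state()

    def deliver_hint(self, hint):
        """Route a navigational hint from the navigation/mapping provider."""
        if self.haptic_cues_enabled:
            self.trembler_logic.vibrate_for(hint)    # haptic cue
        else:
            self.display.show(hint)                  # conventional visual cue
            self.audio.speak(hint)                   # conventional audio cue
```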
An embodiment addresses the problems associated with pedestrian turn-by-turn navigation without access to audio and/or visual cues, e.g., delivered by conventional GPS modalities. Such solutions are not limited to pedestrian navigation and may also be used by any user having access to a mobile device, such as a driver, bicycle rider, motorcycle rider, etc. Moreover, such techniques are envisioned to provide a new usage model (e.g., providing navigation cues without using audio and/or visual hints), energy efficiency (since audio and/or video cues are not required and the audio and/or video logic may be powered down or entered into a low power consumption state, or alternatively used for other purposes), practicality (e.g., providing navigation cues/hints even in the presence of: environmental constraints (such as audible noise and/or visual interference (such as bright sunlight)), hearing ability constraints (e.g., hearing disabilities), and/or visual ability constraints (e.g., visual disabilities)), etc.

For example, a user might navigate streets, campuses, or open space without referring to a hand-held or head-mounted display map by being spurred via two trembler devices on the user's left and right sides: either in pockets, on wrists, ears, etc. Moreover, a "turn right" might be signified by the trembler on the right side of the user vibrating, and vice versa for left. Proximity or hazard alerts might be indicated by the frequency or intensity of vibration, either left, right, or both simultaneously. Possible users might include tourists in unfamiliar cities, users in low visibility environments, the deaf and/or blind, or even animals with trained response.

Also, a single trembler device may be used in some embodiments, e.g., with the number of trembles indicating a given direction (such as one tremble to turn right, a double tremble to turn left, a triple tremble to go straight, a quadruple tremble to go back, intense vibration to stop, etc.). Hence, differing numbers of trembles may be used to convey different directions and/or actions to a user carrying a trembler device.

Some embodiments may be applied in computing systems that include one or more processors (e.g., with one or more processor cores), such as those discussed with reference to Figs. 1-6, including for example mobile computing devices such as a smartphone, tablet, UMPC (Ultra-Mobile Personal Computer), laptop computer, Ultrabook™ computing device, wearable devices (such as a smart watch, smart glasses, and the like), etc. More particularly, Fig. 1 illustrates a block diagram of a computing system 100, according to an embodiment. The system 100 may include one or more processors 102-1 through 102-N (generally referred to herein as "processors 102" or "processor 102"). The processors 102 may be general-purpose CPUs (Central Processing Units) and/or GPUs (Graphics Processing Units) in various embodiments. The processors 102 may communicate via an interconnection or bus 104. Each processor may include various components, some of which are only discussed with reference to processor 102-1 for clarity. Accordingly, each of the remaining processors 102-2 through 102-N may include the same or similar components discussed with reference to the processor 102-1.

In an embodiment, the processor 102-1 may include one or more processor cores 106-1 through 106-M (referred to herein as "cores 106," or "core 106"), a cache 108, and/or a router 110. The processor cores 106 may be implemented on a single integrated circuit (IC) chip.
Moreover, the chip may include one or more shared and/or private caches (such as cache 108), buses or interconnections (such as a bus or interconnection 112), graphics and/or memory controllers (such as those discussed with reference to Figs. 4-6), or other components.

In one embodiment, the router 110 may be used to communicate between various components of the processor 102-1 and/or system 100. Moreover, the processor 102-1 may include more than one router 110. Furthermore, the multitude of routers 110 may be in communication to enable data routing between various components inside or outside of the processor 102-1.

The cache 108 may store data (e.g., including instructions) that are utilized by one or more components of the processor 102-1, such as the cores 106. For example, the cache 108 may locally cache data stored in a memory 114 for faster access by the components of the processor 102 (e.g., faster access by cores 106). As shown in Fig. 1, the memory 114 may communicate with the processors 102 via the interconnection 104. In an embodiment, the cache 108 (that may be shared) may be a mid-level cache (MLC), a last level cache (LLC), etc. Also, each of the cores 106 may include a level 1 (L1) cache (116-1) (generally referred to herein as "L1 cache 116") or other levels of cache such as a level 2 (L2) cache. Moreover, various components of the processor 102-1 may communicate with the cache 108 directly, through a bus (e.g., the bus 112), and/or a memory controller or hub.

As shown, system 100 may also include one or more positioning sensors 130 to facilitate navigation. Sensor(s) 130 may include any sensor capable of detecting, determining, and/or extrapolating locational data, including a GPS sensor, an accelerometer, a gyro sensor, a magnetometer, a pedometer, etc. System 100 also includes trembler logic 140 to cause one or more trembler devices 150 to tremble to provide directional hints/cues to a user such as discussed herein.

Fig. 2 illustrates a new usage model, according to an embodiment. As shown, two trembler devices (A, which may be the same or similar to the trembler devices 150 of Fig. 1) linked to a cellphone (B) or other mobile computing device discussed herein via wires or wirelessly (e.g., via Bluetooth™ communication, Near Field Communication (NFC), etc.) draw turn-by-turn data from a navigation/mapping provider (like Google Maps™ mapping service, Bing™ Maps and MapPoint™ web service, etc.) to deliver navigation haptic outputs (e.g., based on positioning data from sensors 130), allowing a user to navigate to a desired destination.

For example, a user can use their cell phone (or other mobile device, such as a smartphone, tablet, UMPC (Ultra-Mobile Personal Computer), laptop computer, Ultrabook™ computing device, wearable devices (such as a smart watch, smart glasses, and the like), etc.) to request directions to a specific point, so that, for example, as they walk down a street and meet an intersection, one of the tremblers trembles depending on whether they should turn left or right. The effect is "right tremble, turn right" and vice versa. Distance might be indicated by both left and right vibrating simultaneously, at rates dependent on proximity to the next turn. For the deaf, navigation requests may use voice recognition, e.g., supported by the mobile device and/or from the data provider.
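The sketch below illustrates one possible mapping from navigational hints to trembler output, covering both the two-trembler (left/right) usage model of Fig. 2 and the single-trembler tremble-count encoding discussed herein. The specific function names and hint strings are illustrative assumptions only.

```python
# Sketch of one possible mapping from navigational hints to trembler output.
# The encodings mirror the examples in the text; names are hypothetical.

def haptic_pattern_two_tremblers(hint):
    """Return (left_trembler_on, right_trembler_on) for a navigational hint."""
    if hint == "turn_right":
        return (False, True)        # right tremble -> turn right
    if hint == "turn_left":
        return (True, False)        # left tremble -> turn left
    if hint == "approaching_turn":
        return (True, True)         # both vibrate as the next turn gets close
    return (False, False)

def haptic_pattern_single_trembler(hint):
    """Return a number of trembles (or 'intense') for a navigational hint."""
    pattern = {
        "turn_right": 1,       # one tremble to turn right
        "turn_left": 2,        # double tremble to turn left
        "go_straight": 3,      # triple tremble to go straight
        "go_back": 4,          # quadruple tremble to go back
        "stop": "intense",     # intense vibration to stop
    }
    return pattern.get(hint, 0)
```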
The trembler devices might be worn in pockets, or in a more compact form, head mounted, or otherwise integrated into wearable/clothing items such as helmets, jackets, shirts, pants, shoes, glasses, etc.

Also, a single trembler device (integrated in the mobile device in an embodiment) may be used in some embodiments, e.g., with the number of trembles indicating a given direction (such as one tremble to turn right, a double tremble to turn left, a triple tremble to go straight, a quadruple tremble to go back, intense vibration to stop, etc.). Hence, differing numbers of trembles may be used to convey different directions and/or actions to a user carrying a trembler device.

Fig. 3 illustrates a flow diagram of a method 300 for location based haptic direction finding, according to an embodiment. One or more components discussed herein (e.g., with reference to Figs. 1-2 and 3-6) may be used to perform one or more operations discussed with reference to Fig. 3. For example, one or more operations of method 300 may be performed by logic 140 and/or trembler device(s) 150, as discussed herein.

Referring to Figs. 1-3, at an operation 302, it is determined whether haptic direction is enabled. Operation 302 may be based on user settings. For example, a user may choose haptic direction (in lieu of visual and/or audio navigational cues) by changing a navigation application setting on the user's mobile device, or a user may be provided the option for the type of directional cues each time the user requests directions to a new destination. At operation 304, the navigational output is redirected from visual and/or audio output devices (i.e., a display device and/or speakers) to logic 140 to cause trembler device(s) 150 to vibrate for directional guidance.

At operation 306, each time a new navigational cue is received (e.g., from a navigation/mapping provider such as the Google Maps™ mapping service or the Bing™ Maps and MapPoint™ web service) to deliver navigation haptic outputs (e.g., based on positioning data from sensors 130), operation 308 determines whether the received new cue indicates the destination is reached. If not, logic 140 causes the trembler device(s) 150 to vibrate at operation 310. Method 300 terminates once the destination is reached at operation 312.

Fig. 4 illustrates a block diagram of a computing system 400 in accordance with an embodiment. The computing system 400 may include one or more Central Processing Units (CPUs) 402 or processors that communicate via an interconnection network (or bus) 404. The processors 402 may include a general purpose processor, a network processor (that processes data communicated over a computer network 403), or other types of a processor (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC)). Moreover, the processors 402 may have a single or multiple core design. The processors 402 with a multiple core design may integrate different types of processor cores on the same integrated circuit (IC) die. Also, the processors 402 with a multiple core design may be implemented as symmetrical or asymmetrical multiprocessors. In an embodiment, one or more of the processors 402 may be the same or similar to the processors 102 of Fig. 1. Further, one or more components of system 400 may include logic 140 coupled to trembler device(s) 150, as well as the sensor(s) 130, discussed with reference to Figs. 1-3 (including but not limited to those locations illustrated in Fig. 4). Also, the operations discussed with reference to Figs.
1 -3 may be performed by one or more components of the system 400.A chipset 406 may also communicate with the interconnection network 404. The chipset 406 may include a graphics memory control hub (GMCH) 408, which may be located in various components of system 400 (such as those shown in Fig. 4). The GMCH 408 may include a memory controller 410 that communicates with a memory 412 (which may be the same or similar to the memory 1 14 of Fig. 1). The memory 412 may store data, including sequences of instructions, that may be executed by the CPU 402, or any other device included in the computing system 400. In one embodiment, the memory 412 may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Nonvolatile memory may also be utilized such as a hard disk. Additional devices may communicate via the interconnection network 404, such as multiple CPUs and/or multiple system memories.The GMCH 408 may also include a graphics interface 414 that communicates with the display device. In one embodiment, the graphics interface 414 may communicate with a display device via an accelerated graphics port (AGP) or Peripheral Component Interconnect (PCI) (or PCI express (PCIe) interface). In an embodiment, the display (such as a flat panel display) may communicate with the graphics interface 414 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display device. The display signals produced by the display device may pass through various control devices before being interpreted by and subsequently displayed on the display device.A hub interface 418 may allow the GMCH 408 and an input/output control hub (ICH) 420 to communicate. The ICH 420 may provide an interface to I/O device(s) that communicate with the computing system 400. The ICH 420 may communicate with a bus 422 through a peripheral bridge (or controller) 424, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or other types of peripheral bridges or controllers. The bridge 424 may provide a data path between the CPU 402 and peripheral devices. Other types of topologies may be utilized. Also, multiple buses may communicate with the ICH 420, e.g., through multiple bridges or controllers. Moreover, other peripherals in communication with the ICH 420 may include, in various embodiments, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), USB port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), or other devices.The bus 422 may communicate with an audio device 426, one or more disk drive(s)428, and a network interface device 430 (which is in communication with the computer network 403). Other devices may communicate via the bus 422. Also, various components (such as the network interface device 430) may communicate with the GMCH 408 in some embodiments. In addition, the processor 402 and the GMCH 408 may be combined to form a single chip. Furthermore, a graphics accelerator may be included within the GMCH 408 in other embodiments.Furthermore, the computing system 400 may include volatile and/or nonvolatile memory (or storage). 
For example, nonvolatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 428), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto- optical disk, or other types of nonvolatile machine-readable media that are capable of storing electronic data (e.g., including instructions).Fig. 5 illustrates a computing system 500 that is arranged in a point-to-point (PtP) configuration, according to an embodiment. In particular, Fig. 5 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to- point interfaces. The operations discussed with reference to Figs. 1 -4 may be performed by one or more components of the system 500.As illustrated in Fig. 5, the system 500 may include several processors, of which only two, processors 502 and 504 are shown for clarity. The processors 502 and 504 may each include a local memory controller hub (MCH) 506 and 508 to enable communication with memories 510 and 512. The memories 510 and/or 512 may store various data such as those discussed with reference to the memory 412 of Fig. 4.In an embodiment, the processors 502 and 504 may be one of the processors 402 discussed with reference to Fig. 4. The processors 502 and 504 may exchange data via a point-to-point (PtP) interface 514 using PtP interface circuits 516 and 518, respectively. Also, the processors 502 and 504 may each exchange data with a chipset 520 via individual PtP interfaces 522 and 524 using point-to-point interface circuits 526, 528, 530, and 532. The chipset 520 may further exchange data with a graphics circuit 534 via a graphics interface 536, e.g., using a PtP interface circuit 537.At least one embodiment may be provided within the processors 502 and 504. Further, one or more components of system 500 may include logic 140 coupled to trembler device(s) 150, as well as the sensor(s) 130, discussed with reference to Figs. 1 -4 (including but not limited to those locations illustrated in Fig. 5). Other embodiments, however, may exist in other circuits, logic units, or devices within the system 500 of Fig. 5. Furthermore, other embodiments may be distributed throughout several circuits, logic units, or devices illustrated in Fig. 5.The chipset 520 may communicate with a bus 540 using a PtP interface circuit 541. The bus 540 may communicate with one or more devices, such as a bus bridge 542 and I/O devices 543. Via a bus 544, the bus bridge 542 may communicate with other devices such as a keyboard/mouse 545, communication devices 546 (such as modems, network interface devices, or other communication devices that may communicate with the computer network 403), audio I/O device 547, and/or a data storage device 548. The data storage device 548 may store code 549 that may be executed by the processors 502 and/or 504.In some embodiments, one or more of the components discussed herein can be embodied as a System On Chip (SOC) device. Fig. 6 illustrates a block diagram of an SOC package in accordance with an embodiment. As illustrated in Fig. 6, SOC 602 includes one or more Central Processing Unit (CPU) cores 620, one or more Graphics Processing Unit (GPU) cores 630, an Input/Output (I/O) interface 640, and a memory controller 642. Various components of the SOC package 602 may be coupled to an interconnect or bus such as discussed herein with reference to the other figures. 
Also, the SOC package 602 may include more or less components, such as those discussed herein with reference to the other figures. Further, each component of the SOC package 620 may include one or more other components, e.g., as discussed with reference to the other figures herein. In one embodiment, SOC package 602 (and its components) is provided on one or more Integrated Circuit (IC) die, e.g., which are packaged into a single semiconductor device.As illustrated in Fig. 6, SOC package 602 is coupled to a memory 660 (which may be similar to or the same as memory discussed herein with reference to the other figures) via the memory controller 642. In an embodiment, the memory 660 (or a portion of it) can be integrated on the SOC package 602. The I/O interface 640 may be coupled to one or more I/O devices 670, e.g., via an interconnect and/or bus such as discussed herein with reference to other figures. I/O device(s) 670 may include one or more of a keyboard, a mouse, a touchpad, a display device, an image/video capture device (such as a camera or camcorder/video recorder), a touch screen, a speaker, or the like. Furthermore, SOC package 602 may include/integrate logic 140 and/or sensor(s) 130 in some embodiments. Alternatively, logic 140 and/or sensor(s) 130 may be provided outside of the SOC package 602 (i.e., as a discrete logic).Moreover, the scenes, images, or frames discussed herein (e.g., which may be processed by the graphics logic in various embodiments) may be captured by an image capture device (such as a digital camera (that may be embedded in another device such as a smart phone, a tablet, a laptop, a stand-alone camera, etc.) or an analog device whose captured images are subsequently converted to digital form). Moreover, the image capture device may be capable of capturing multiple frames in an embodiment. Further, one or more of the frames in the scene are designed/generated on a computer in some embodiments. Also, one or more of the frames of the scene may be presented via a display (such as the display discussed with reference to Figs. 4 and/or 5, including for example a flat panel display device, etc.).The following examples pertain to further embodiments. Example 1 includes 1 an apparatus comprising: logic, the logic at least partially comprising hardware logic, to redirect one or more navigational hints to one or more trembler devices instead of a display device of a mobile computing device in response to a request to provide haptic directional cues, wherein the mobile computing device is to comprise the logic. Example 2 includes the apparatus of example 1 , wherein the one or more trembler devices are to communicate with the mobile device wirelessly. Example 3 includes the apparatus of example 2, wherein the wireless communication is to be provided via one or more of Bluetooth™ communication and Near Field Communication (NFC). Example 4 includes the apparatus of example 1 , wherein the one or more navigational hints are to be received from a navigation or mapping provider. Example 5 includes the apparatus of example 1 , wherein the logic is to cause the display device to enter a low power consumption state in response to selection of the haptic directional cues. Example 6 includes the apparatus of example 1 , wherein the mobile computing device is to comprise one of: a smartphone, a tablet, a UMPC (Ultra-Mobile Personal Computer), a laptop computer, an Ultrabook™ computing device, and a wearable device. 
Example 7 includes the apparatus of example 6, wherein the wearable device is to include one of a smart watch, a helmet, a jacket, a shirt, a pair of pants, a pair of shorts, a shoe, or glasses. Example 8 includes the apparatus of example 1 , wherein the logic is to redirect the one or more navigational hints to one or more trembler devices instead of one or more speakers in response to selection of the haptic directional cues. Example 9 includes the apparatus of example 8, wherein the logic is to cause audio logic coupled to the one or more speakers to enter a low power consumption state. Example 10 includes the apparatus of example 1 , wherein a processor, having one or more processor cores, is to comprise the logic. Example 1 1 includes the apparatus of example 1 , wherein one or more of the logic, a processor having one or more processor cores, and memory are on a single integrated circuit die.Example 12 includes a method comprising: redirecting, at logic in a mobile computing device, one or more navigational hints to one or more trembler devices instead of a display device of the mobile computing device in response to a request to provide haptic directional cues. Example 13 includes the method of example 12, further comprising the one or more trembler devices communicating with the mobile device wirelessly. Example 14 includes the method of example 13, wherein the wireless communication is provided via one or more of Bluetooth™ communication and Near Field Communication (NFC). Example 15 includes the method of example 12, further comprising receiving the one or more navigational hints from a navigation or mapping provider. Example 16 includes the method of example 12, further comprising causing the display device to enter a low power consumption state in response to selection of the haptic directional cues. Example 17 includes the method of example 12, further comprising redirect the one or more navigational hints to one or more trembler devices instead of one or more speakers in response to selection of the haptic directional cues. Example 18 includes the method of example 17, further comprising causing audio logic coupled to the one or more speakers to enter a low power consumption state.Example 19 includes a system comprising: a mobile computing device having memory to store data; logic to redirect one or more navigational hints to one or more trembler devices instead of a display device of the mobile computing device in response to a request to provide haptic directional cues, wherein the mobile computing device is to comprise the logic. Example 20 includes the system of example 19, wherein the one or more trembler devices are to communicate with the mobile device wirelessly. Example 21 includes the system of example 20, wherein the wireless communication is to be provided via one or more of Bluetooth™ communication and Near Field Communication (NFC). Example 22 includes the system of example 19, wherein the one or more navigational hints are to be received from a navigation or mapping provider. Example 23 includes the system of example 19, wherein the logic is to cause the display device to enter a low power consumption state in response to selection of the haptic directional cues. Example 24 includes the system of example 19, wherein the mobile computing device is to comprise one of: a smartphone, a tablet, a UMPC (Ultra-Mobile Personal Computer), a laptop computer, an Ultrabook™ computing device, and a wearable device. 
Example 25 includes the system of example 19, wherein the logic is to redirect the one or more navigational hints to one or more trembler devices instead of one or more speakers in response to selection of the haptic directional cues, wherein the logic is to cause audio logic coupled to the one or more speakers to enter a low power consumption state. Example 26 includes the system of example 25, wherein the logic is to cause audio logic coupled to the one or more speakers to enter a low power consumption state. Example 27 includes the system of example 19, wherein a processor, having one or more processor cores, is to comprise the logic. Example 28 includes the system of example 19, wherein one or more of the logic, a processor having one or more processor cores, and the memory are on a single integrated circuit die.Example 29 includes an apparatus comprising means to perform a method as set forth in any preceding example. Example 30 includes machine-readable storage including machine-readable instructions, when executed, to implement a method or realize an apparatus as set forth in any preceding example.In various embodiments, the operations discussed herein, e.g., with reference to Figs. 1 -6, may be implemented as hardware (e.g., logic circuitry), software, firmware, or combinations thereof, which may be provided as a computer program product, e.g., including a tangible (e.g., non-transitory) machine-readable or computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein. The machine-readable medium may include a storage device such as those discussed with respect to Figs. 1 -6.Additionally, such computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals provided in a carrier wave or other propagation medium via a communication link (e.g., a bus, a modem, or a network connection).Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, and/or characteristic described in connection with the embodiment may be included in at least an implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not be all referring to the same embodiment. Also, in the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. In some embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other.Thus, although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter. |
A safe, secure, virtualized, domain specific hardware accelerator. This disclosure relates to various implementations of an embedded computing system (200). The embedded computing system (200) comprises a hardware accelerator (HWA) thread user (202A, 202B, 204) and a second HWA thread user (202A, 202B, 204) that create and send out message requests. The HWA thread user (202A, 202B, 204) and the second HWA thread user (202A, 202B, 204) are in communication with a microcontroller (MCU) subsystem (214). The embedded computing system also comprises a first inter-processor communication (IPC) interface between the HWA thread user and the MCU subsystem and a second IPC interface between the second HWA thread user and the MCU subsystem, wherein the first IPC interface is isolated from the second IPC interface. The MCU subsystem is also in communication with a first domain specific HWA (208, 210, 212) and a second domain specific HWA (208, 210, 212). |
1. A method comprising:
receiving a first message request and privilege credential information from a first hardware accelerator (HWA) thread user device through a first communication interface of a multi-HWA function controller, wherein the multi-HWA function controller is configured to provide message requests from the first HWA thread user device to a destination domain-specific HWA, and the first communication interface is between the first HWA thread user device and the multi-HWA function controller;
writing the first message request and the privilege credential information into a queue of the first communication interface;
obtaining, by the multi-HWA function controller, the first message request and the privilege credential information from the queue of the first communication interface;
receiving, by the multi-HWA function controller, a second message request from a second HWA thread user device, the second message request being designated for the destination domain-specific HWA; and
filtering out the second message request with a firewall.
2. The method of claim 1, wherein the multi-HWA function controller includes one or more microcontroller units (MCUs).
3. The method of claim 1, wherein filtering out the second message request comprises determining that a hardware resource identifier of the second HWA thread user device does not match a first identifier of the first communication interface, the first identifier being associated with the first HWA thread user device.
4. The method of claim 1, wherein the first communication interface is implemented as an inter-processor communication (IPC) interface.
5. The method of claim 1, wherein the destination domain-specific HWA executes tasks in at least one of a vision domain, a video domain, or a display domain.
6. The method of claim 1, wherein writing the first message request comprises including destination HWA thread information, one or more commands to be executed on the destination domain-specific HWA, and a destination memory address in the first message request.
7. The method of claim 1, wherein a second communication interface is between the second HWA thread user device and the multi-HWA function controller.
8. The method of claim 7, wherein the first communication interface is used for a first virtual machine running on the first HWA thread user device, and the second communication interface is used for a second virtual machine running on the second HWA thread user device.
9. A system comprising:
a hardware accelerator (HWA) thread user device;
a microcontroller unit (MCU) subsystem in communication with the HWA thread user device; and
a domain-specific HWA in communication with the MCU subsystem, wherein the domain-specific HWA includes an HWA thread;
wherein the MCU subsystem is configured to:
receive a message request and privilege credential information from the HWA thread user device;
allocate the HWA thread of the domain-specific HWA to execute the message request;
classify the message request into one of a plurality of classes based on whether the domain-specific HWA can verify the privilege credential information; and
based on a determination that the message request belongs to a first class indicating that the HWA thread can process the privilege credential information, forward the privilege credential information to the HWA thread.
10. The system of claim 9, wherein the privilege credential information includes a virtual machine identifier, a protected or non-protected mode identifier, a user or supervisor mode identifier, and an HWA thread user device identifier.
11. The system of claim 9, wherein the message request includes destination HWA thread information, one or more commands to be executed on the domain-specific HWA, and a destination memory address.
12. The system of claim 11, wherein the HWA thread user device is a general purpose processor.
13. The system of claim 11, wherein the HWA thread user device is a programmable accelerator.
14. The system of claim 9, wherein the message request and the privilege credential information are obtained from an inter-processor communication (IPC) interface.
15. The system of claim 9, wherein the MCU subsystem is configured to classify the message request into one of the plurality of classes by:
based on a determination that the HWA thread can process the privilege credential information, classifying the message request into the first class for domain-specific HWAs with privilege credential information checking capability;
based on a determination that the HWA thread cannot verify the privilege credential information and another hardware component is configured to assist in verifying the privilege credential information, classifying the message request into a second, hardware-assisted class; and
classifying the message request into a third class in which the privilege credential information is not verified.
16. The system of claim 9, wherein the MCU subsystem is configured to perform address space translation when the HWA thread performs address expander operations.
17. A system comprising:
a first hardware accelerator (HWA) thread user device;
a second HWA thread user device;
a microcontroller unit (MCU) subsystem in communication with the first HWA thread user device and the second HWA thread user device;
a first inter-processor communication (IPC) interface coupled between the first HWA thread user device and the MCU subsystem;
a second IPC interface coupled between the second HWA thread user device and the MCU subsystem, wherein the first IPC interface is isolated from the second IPC interface;
a first domain-specific HWA in communication with the MCU subsystem; and
a second domain-specific HWA in communication with the MCU subsystem.
18. The system of claim 17, wherein the first IPC interface comprises: a first hardware agent that writes a message request received from the first HWA thread user device into a queue; and a second hardware agent that reads the message request from the first HWA thread user device.
19. The system of claim 17, wherein the first IPC interface includes a firewall that prevents the second HWA thread user device from sending a message request to the first IPC interface.
20. A system comprising:
a microcontroller unit (MCU) subsystem configured to communicate with a plurality of hardware accelerator (HWA) thread user devices and a plurality of domain-specific HWAs, wherein each of the plurality of domain-specific HWAs includes an HWA thread;
wherein the MCU subsystem is configured to:
receive a message request and privilege credential information from an HWA thread user device;
allocate an HWA thread of a domain-specific HWA to execute the message request;
classify the message request into one of a plurality of classes based on whether the domain-specific HWA can verify the privilege credential information; and
based on a determination that the message request belongs to a first class indicating that the HWA thread can process the privilege credential information, forward the privilege credential information to the HWA thread.
21. The system of claim 20, wherein the message request includes destination HWA thread information, one or more commands to be executed on the domain-specific HWA, and a destination memory address.
22. The system of claim 20, wherein the message request and the privilege credential information are obtained from an inter-processor communication interface.
23. The system of claim 20, wherein the MCU subsystem is configured to classify the message request into one of the plurality of classes by:
based on a determination that the HWA thread can process the privilege credential information, classifying the message request into the first class for domain-specific HWAs with privilege credential information checking capabilities;
based on a determination that the HWA thread cannot verify the privilege credential information and another hardware component is configured to assist in verifying the privilege credential information, classifying the message request into a second, hardware-assisted class; and
classifying the message request into a third class in which the privilege credential information is not verified.
24. The system of claim 20, wherein the MCU subsystem is configured to perform address space conversion when the HWA thread performs an address expander operation. |
SAFE, SECURE, VIRTUALIZED, DOMAIN SPECIFIC HARDWARE ACCELERATOR

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the priority of U.S. Provisional Application No. 62/786,616, filed on December 31, 2018, the entire content of which is incorporated herein by reference.

BACKGROUND

Today's embedded computing systems are commonly found in various applications such as consumer, medical and automotive products. Design engineers usually create embedded computing systems to perform specific tasks, rather than acting as general-purpose computing systems. For example, due to security and/or availability requirements, some embedded computing systems need to meet certain real-time performance constraints. In order to achieve real-time performance, an embedded computing system usually includes a microprocessor that loads and executes software to perform various functions, and dedicated hardware that improves computing operations for certain tasks. An example of dedicated hardware found in embedded systems is the hardware accelerator (HWA), which improves the confidentiality and performance of embedded computing systems.

As today's products increasingly continue to utilize embedded computing devices, design engineers continue to work to improve the security, confidentiality, and performance of these devices. For example, like any other computing system, embedded computing systems are also vulnerable to malware or other malicious security threats. For embedded computing systems used in applications that directly affect security or confidentiality, or in applications that are critical to security and confidentiality, a confidentiality intrusion may be a problem. For example, the embedded computing systems found in advanced driver assistance systems are designed to reduce human error and road deaths caused by motor vehicles. A malicious computer program that deliberately accesses and damages an advanced driver assistance system can cause system failures, which can lead to life-threatening or dangerous situations.

SUMMARY

The following presents a simplified overview of the disclosed subject matter in order to provide a basic understanding of some aspects of the subject matter disclosed herein. This overview is not an exhaustive overview of the technology disclosed herein. This summary is not intended to identify key or important elements of the present invention or to delineate the scope of the present invention. Its sole purpose is to introduce some concepts in a simplified form as a prelude to the more detailed description discussed later.

In one implementation, a non-transitory program storage device includes instructions stored thereon that cause one or more processors to create a trusted sandboxed communication interface to facilitate communication between a designated HWA thread user and a multi-HWA function controller, where the multi-HWA function controller is configured to provide a message request from the HWA thread user to a destination domain-specific HWA. The one or more processors can filter out a first message request for the destination domain-specific HWA received from a second HWA thread user, and write a second message request and privilege credential information received from the designated HWA thread user to a buffer of the trusted sandboxed communication interface.
The one or more processors provide the second message request and the privilege credential information from the buffer of the trusted sandboxed communication interface to the multi-HWA function controller.

In another implementation, a system includes an HWA thread user, a microcontroller unit (MCU) subsystem communicating with the HWA thread user, and a domain-specific HWA communicating with the MCU subsystem, where the domain-specific HWA includes an HWA thread. The MCU subsystem is configured to receive a message request and privilege credential information from the HWA thread user, allocate the HWA thread of the domain-specific HWA to execute the message request, classify the message request into one of multiple classes based on whether the domain-specific HWA can verify the privilege credential information, and forward the privilege credential information to the HWA thread based on a determination that the message request belongs to a first class indicating that the HWA thread can process the privilege credential information.

In yet another implementation, a system includes an HWA thread user and a second HWA thread user that create and issue message requests. The HWA thread user and the second HWA thread user communicate with an MCU subsystem. The embedded computing system also includes a first inter-processor communication (IPC) interface between the HWA thread user and the MCU subsystem and a second IPC interface between the second HWA thread user and the MCU subsystem, where the first IPC interface is isolated from the second IPC interface. The MCU subsystem also communicates with a first domain-specific HWA and a second domain-specific HWA.

BRIEF DESCRIPTION OF THE DRAWINGS

For a detailed description of various examples, reference will now be made to the accompanying drawings, in which:

Figure 1 is a block diagram of an embedded computing system according to various implementations.
Figure 2 is a high-level block diagram of an example embedded computing system including a multi-HWA function controller.
Figure 3 is a block diagram of an example embedded computing system including an MCU subsystem as an example of a multi-HWA function controller and an IPC interface as an example of a trusted sandboxed communication interface.
Figure 4 is a block diagram of another example embedded computing system that includes HWA threads without a privilege generator.
Figure 5 is a block diagram of an example implementation of the IPC interface shown in Figures 3 and 4.
Figure 6 is a flowchart of an implementation of a method for exchanging communications between an HWA thread user and a multi-HWA function controller.
Figure 7 is a flowchart of an implementation of a method for classifying message requests according to the capabilities of the destination domain-specific HWA.

Although certain implementations will be described in conjunction with the illustrative implementations shown herein, the present invention is not limited to those implementations. On the contrary, all alternative forms, modifications and equivalents are included in the spirit and scope of the present invention as defined by the claims.
In the drawings, which are not drawn to scale, the same reference numerals are used throughout the specification and the drawings for components and elements having the same structure, and primed reference numerals are used for components and elements having functions and structures similar to those of the components and elements having the corresponding unprimed reference numerals.

Detailed description
This disclosure describes various example implementations to improve the security, privacy, and virtualization of domain-specific hardware accelerators (HWAs) in embedded computing systems. In one or more implementations, the embedded computing system includes a multi-HWA function controller that facilitates communication between one or more HWA thread users and one or more domain-specific HWAs (e.g., a vision HWA). The embedded computing system creates a trusted sandboxed communication interface that independently transmits message requests from an HWA thread user to the multi-HWA function controller. A "trusted" communication interface is an interface in which the source device of a communication message is confirmed to be allowed to send messages through that specific communication interface (only a predefined source device is allowed to send messages through a given communication interface). Sandboxing means that the embedded computing system isolates each communication interface from the others. In this way, a confidentiality intrusion and/or system failure affecting one HWA thread user (for example, a host CPU) does not affect another HWA thread user (for example, a digital signal processor (DSP)). The trusted sandboxed communication interface also transmits the privilege credential information for each message request to the multi-HWA function controller to prevent confidentiality intrusions such as spoofing.

After obtaining a message request, the multi-HWA function controller schedules and allocates a hardware thread for the message request to execute on the destination domain-specific HWA. As part of the scheduling operation, the multi-HWA function controller performs an intelligent scheduling operation that classifies the message request into one of multiple classes (referred to as hardware-assisted classes) according to the capabilities of the destination domain-specific HWA. For example, if the destination domain-specific HWA includes a privilege generator, the multi-HWA function controller classifies the message request for the destination domain-specific HWA into a class representing domain-specific HWAs with privilege credential information checking capabilities. For destination domain-specific HWAs without a privilege generator, the multi-HWA function controller can classify the associated message request into a different class indicating that another hardware component (for example, an input/output (IO) memory management unit (MMU)) assists in checking the privilege credential information. In some cases, when the embedded computing system cannot check the associated privilege credential information, the multi-HWA function controller may classify the message request into yet another class. 
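A minimal sketch of this capability-based classification is shown below; the class names, attribute names, and controller interface are illustrative assumptions and are not taken from the disclosure.

```python
from enum import Enum, auto

class HwaClass(Enum):
    """Hardware-assisted classes for a destination domain-specific HWA (illustrative)."""
    HAS_PRIVILEGE_GENERATOR = auto()   # the HWA thread can check privilege credentials itself
    IO_MMU_ASSISTED = auto()           # an IO MMU checks credentials on the HWA's output path
    NO_CREDENTIAL_CHECK = auto()       # credentials cannot be checked for this HWA

def classify_message_request(dest_hwa) -> HwaClass:
    # dest_hwa is assumed to expose two capability flags; real hardware would
    # publish this information differently (e.g., via firmware policy settings).
    if dest_hwa.has_privilege_generator:
        return HwaClass.HAS_PRIVILEGE_GENERATOR
    if dest_hwa.io_mmu_available:
        return HwaClass.IO_MMU_ASSISTED
    return HwaClass.NO_CREDENTIAL_CHECK
```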
In one or more implementations, the multi-HWA function controller can also convert between different address space sizes (for example, from a 64-bit address space to a 32-bit address space) to additionally accommodate domain-specific HWAs with differing capabilities (for example, legacy domain-specific HWAs).

As used herein, the term "programmable accelerator" refers to a custom hardware device that is programmable to perform specific operations (e.g., a processing operation, calculation, function, or task). Programmable accelerators are different from general-purpose processors (e.g., central processing units (CPUs)) built to perform conventional computing operations. Generally, programmable accelerators perform their specified operations faster than software running on standard or general-purpose processors. Examples of programmable accelerators dedicated to performing specific operations include graphics processing units (GPUs), digital signal processors (DSPs), vector processors, floating point processing units (FPUs), application-specific integrated circuits (ASICs), embedded processors (for example, a Universal Serial Bus (USB) controller), and domain-specific HWAs.

For the purposes of this disclosure, the term "domain-specific HWA" refers to a specific type of programmable accelerator with customized hardware units and pipelines designed to perform tasks that fall within a specific domain. Compared with other types of programmable accelerators (such as GPUs, DSPs, and vector processors), a domain-specific HWA provides relatively less computational flexibility, but when performing tasks belonging to its specific domain, the domain-specific HWA has higher efficiency in terms of power and performance. A domain-specific HWA contains one or more HWA threads, where each HWA thread represents a hardware thread that receives and executes one or more tasks associated with a given domain. As hardware threads, HWA threads are different from the software threads generated when software applications run on an operating system (OS). A domain-specific HWA can execute HWA threads in a serial and/or parallel manner. Examples of domains include the imaging domain, video domain, vision domain, radar domain, deep learning domain, and display domain. Examples of domain-specific HWAs include a vision preprocessing accelerator (VPAC), a digital media preprocessing accelerator (DMPAC), a video processing engine (VPE), and an image and video accelerator (IVA) (e.g., a video encoder and decoder).

Illustrative hardware and use cases
Figure 1 is a simplified block diagram of an embedded computing system 100 according to various implementations. Using FIG. 1 as an example, the embedded computing system 100 is a multi-processor system-on-chip (SOC) designed to support computer vision processing in a camera-based advanced driver assistance system. The embedded computing system 100 includes a general purpose processor (GPP) 102, a digital signal processor (DSP) 104, a vision processor 106, and a domain-specific HWA 112 coupled via a high-speed interconnect 122. The GPP 102 hosts a high-level operating system (HLOS), which provides control operations for one or more software applications running on the embedded computing system 100. For example, the HLOS controls the scheduling of various tasks, which are generated when a software application runs on the embedded computing system 100. The DSP 104 provides support for real-time computer vision processing such as object detection and classification. Although FIG. 
1 shows the embedded computing system 100 as including a single GPP 102 and a single DSP 104, other embodiments of the embedded computing system 100 may have multiple GPPs 102 and/or multiple DSPs 104 coupled to one or more domain-specific HWAs 112 and one or more vision processors 106.

In one or more implementations, the domain-specific HWA 112 is a VPAC that communicates with the vision processor 106. The VPAC includes one or more HWA threads configured to perform various vision preprocessing operations on incoming camera images and/or image sensor information. For example, the VPAC includes four HWA threads, an embedded hardware thread scheduler, and an embedded shared memory, which all communicate with each other when performing vision domain tasks. Each HWA thread is configured to perform specific vision domain tasks, such as lens distortion correction operations, image scaling operations, noise filter operations, and/or other vision-specific image processing operations. Storage blocks in the shared memory act as buffers to store the data blocks processed by the HWA threads. In FIG. 1, the vision processor 106 is a vector processor customized for computer vision processing (such as gradient calculation, orientation binning, and histogram normalization using the output of the VPAC).

The embedded computing system 100 also includes a direct memory access (DMA) component 108, a camera capture component 110 coupled to a camera 124, a display management component 114, on-chip random access memory (RAM) 116 (for example, a non-transitory computer readable medium), and various input/output (I/O) peripherals 120, all of which are coupled to the processors and the domain-specific HWA 112 via the interconnect 122. The RAM 116 may store part or all of the instructions (software, firmware) described herein for execution by the processors. In addition, the embedded computing system 100 includes a safety component 118 that includes safety-related functions to enable compliance with automotive safety requirements. This functionality can include support for CRC (cyclic redundancy check) of data, a clock comparator for drift detection, error signaling, a windowed watchdog timer, and self-test of the embedded computing system 100 for damage and failures.

Although FIG. 1 shows a specific implementation of the embedded computing system 100, the present disclosure is not limited to the specific implementation shown in FIG. 1. For example, FIG. 1 may not show all the components found in the embedded computing system 100, which may include other components known to those of ordinary skill in the art depending on the use case of the embedded computing system 100. For example, the embedded computing system 100 may also include other programmable accelerator components not shown in FIG. 1 that are beneficial for certain use cases. Additionally or alternatively, even though FIG. 1 shows one or more components in the embedded computing system 100 as separate components, other implementations may combine those components into a single component. The use and discussion of Figure 1 are merely an example for ease of description and explanation.

Multi-HWA function controller and trusted sandboxed communication interface
FIG. 2 is a high-level block diagram of an example embedded computing system 200 including a multi-HWA function controller 214. 
FIG. 2 shows the multi-HWA function controller 214 interfacing with one or more HWA thread users (also referred to as HWA thread user devices; for example, host CPU 202A, host CPU 202B, and DSP 204) and one or more domain-specific HWAs (a visual domain HWA 208, a display domain HWA 210, and a video domain HWA 212). In one or more implementations, the multi-HWA function controller 214 is a microcontroller unit (MCU) subsystem that supports communication between the HWA thread users 202A, 202B, and 204 and the domain-specific HWAs 208, 210, and 212. The MCU subsystem includes one or more MCU processors and embedded memory to control and manage HWA threads across the one or more domain-specific HWAs. Because of scalability, design and development costs, and chip area overhead, it can be preferable for the MCU subsystem to manage communications with multiple domain-specific HWAs. For example, the MCU subsystem provides flexibility by being able to assign any HWA thread within a domain-specific HWA to any HWA thread user. The MCU subsystem can also be extended by updating the MCU firmware with revised or new policy settings (for example, when the number of virtual machines (VMs) that the MCU subsystem needs to manage changes).

An HWA thread user represents underlying hardware resources that offload one or more tasks to one or more domain-specific HWAs. In FIG. 2, the host CPUs 202A and 202B and the DSP 204 represent HWA thread users that send message requests to the visual domain HWA 208, the display domain HWA 210, and/or the video domain HWA 212. In the example, the visual domain HWA 208 is restricted to performing visual domain tasks; the display domain HWA 210 is restricted to performing display domain tasks; and the video domain HWA 212 is restricted to performing video domain tasks. In other words, compared with general-purpose processors (such as the host CPUs 202A and 202B) and/or other types of programmable accelerators (such as the DSP 204), the visual domain HWA 208, the display domain HWA 210, and the video domain HWA 212 are restricted in processing flexibility. However, compared with the host CPUs 202A and 202B and the DSP 204, the visual domain HWA 208, the display domain HWA 210, and the video domain HWA 212 are more efficient in performing their respective domain tasks.

In order to improve operating efficiency (for example, power consumption efficiency and/or performance efficiency), HWA thread users request to offload domain tasks to the respective domain-specific HWAs by sending message requests. Each message request usually contains commands that indicate domain tasks that can be performed by a domain-specific HWA. For example, a virtual machine (VM) runs a software application on the host CPU 202A and generates a set of visual domain tasks. Although the host CPU 202A has the ability to execute and process visual domain tasks, the host CPU 202A offloads the visual domain task group to the visual domain HWA 208 to gain operational efficiency. By offloading the domain tasks, the amount of time and/or power consumption required for the visual domain HWA 208 to complete execution of the visual domain task group is relatively less than the amount of time and/or power the host CPU 202A would have consumed processing the visual domain task group.

The multi-HWA function controller 214 manages and controls the message requests sent between the HWA thread users and the domain-specific HWAs. 
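As a rough illustration of such an offload request, the sketch below models a message request that carries domain-task commands and hands it to a controller object rather than executing the tasks locally; all names, fields, and the controller API are illustrative assumptions and are not taken from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class MessageRequest:
    """Illustrative offload request: commands describing domain tasks plus routing info."""
    domain: str                                          # e.g., "vision", "display", "video"
    commands: list[str] = field(default_factory=list)    # domain tasks to perform
    dest_buffer: int = 0                                 # destination address for output data

def offload_vision_tasks(controller, tasks: list[str]) -> None:
    # Instead of running the vision tasks on the host CPU, package them into a
    # message request and submit it to the multi-HWA function controller.
    request = MessageRequest(domain="vision", commands=tasks, dest_buffer=0x8000_0000)
    controller.submit(request)
```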
In one or more implementations, in order to enhance security and confidentiality, the embedded computing system 200 creates a trusted sandboxed communication interface that safely transmits message requests from the HWA thread users to the multi-HWA function controller 214. The trusted sandboxed communication interface serves as a confidential interface for separating and filtering out data from non-designated HWA thread users. In other words, the trusted sandboxed communication interface controls whether an underlying hardware resource (for example, the host CPU 202A, 202B, or the DSP 204) is a trusted source with permission to transmit message requests to the multi-HWA function controller 214. For example, if the trusted sandboxed communication interface is set to identify only the DSP 204 as a trusted source, the trusted sandboxed communication interface will not transmit message requests received from the host CPU 202A and/or 202B to the multi-HWA function controller 214. Having separate trusted sandboxed communication interfaces limits the impact of system failures and/or privacy intrusions. The trusted sandboxed communication interface also provides the multi-HWA function controller 214 with the privilege credential information for each message request, providing an additional layer of security against malicious attacks such as spoofing.

After receiving a message request, the multi-HWA function controller 214 schedules and allocates an HWA thread to execute the message request. The multi-HWA function controller 214 can schedule message requests to different domain-specific HWAs. Using FIG. 2 as an example, the host CPU 202A can generate a message request including a group of visual domain tasks, a second message request including a group of display domain tasks, and a third message request including a group of video domain tasks. The multi-HWA function controller 214 receives the three different message requests through one or more trusted sandboxed communication interfaces, and then assigns each message request to an HWA thread based on the type of domain task. In other words, the multi-HWA function controller 214 does not allocate HWA threads that are incompatible with, or cannot process, domain tasks associated with other domains. For example, the multi-HWA function controller 214 allocates at least one of the visual HWA threads 216A-216D to execute the visual domain task group, allocates at least one of the display HWA threads 218A and 218B to execute the display domain task group, and allocates at least one of the video HWA threads 220A and 220B to execute the video domain task group. When HWA threads become available, the multi-HWA function controller 214 allocates compatible HWA threads to execute the message requests. When the compatible HWA threads are busy, the multi-HWA function controller 214 may temporarily push a message request to one or more queues to wait for a compatible HWA thread to become available.

In one or more implementations, as part of the scheduling operation, the multi-HWA function controller 214 performs a smart scheduling operation that takes into account the capabilities of the destination domain-specific HWA. In one or more implementations, the multi-HWA function controller 214 categorizes each domain-specific HWA according to the capabilities of the HWA threads within the domain-specific HWA. Using FIG. 
2 as an example, after the multi-HWA function controller 214 schedules one of the visual HWA threads 216A-216D to process a message request, the multi-HWA function controller 214 determines whether the visual HWA threads 216A-216D belong to a class of HWA threads that include a privilege generator for dynamically processing privilege credential information. If the visual HWA threads 216A-216D include a privilege generator, the multi-HWA function controller 214 may replay the privilege credential information obtained from the trusted sandboxed communication interface to the assigned visual HWA thread 216A-216D. The multi-HWA function controller 214 also provides privilege configuration information to an IO MMU (not shown in FIG. 2) to check the privilege credential information. If the assigned vision thread belongs to a class that cannot process the privilege credential information but can be assisted by the IO MMU, the data output from the visual domain HWA 208 is rerouted to the IO MMU to confirm the privilege credential information.

When determining the HWA thread class, the intelligent scheduling operation of the multi-HWA function controller 214 also supports hardware virtualization and/or address space size conversion. In one or more implementations, an HWA thread user (e.g., the host CPU 202A) can host one or more virtualized computing systems (e.g., VMs). Because of hardware virtualization, a message request sent from the HWA thread user may include a command to write to a specific virtualized destination address. To support hardware virtualization, the multi-HWA function controller 214 converts the virtualized destination address into a physical address. When a domain-specific HWA utilizes a different address space size, the multi-HWA function controller 214 may also perform address space size conversion. For example, the address information received by the multi-HWA function controller 214 may utilize a 64-bit address space, whereas the domain-specific HWA may utilize a 32-bit address space. As part of the smart scheduling operation, the multi-HWA function controller 214 converts the address information from the 64-bit address space to the lower address space (e.g., the 32-bit address space).

MCU subsystem and IPC interface
FIG. 3 is a block diagram of an example embedded computing system 300 that includes an MCU subsystem 328 as an example of a multi-HWA function controller and an IPC interface 320 as an example of a trusted sandboxed communication interface. The IPC interface 320 is an example of a communication interface. This example includes one IPC interface 320 for each device, such as one IPC interface 320 for the host CPU 202A, one IPC interface 320 for the host CPU 202B, and one IPC interface 320 for the DSP 204. Each IPC interface 320 communicatively couples its respective device 202A, 202B, or 204 to the MCU subsystem 328. Each IPC interface 320 provides a processor-independent application program interface (API) for communicating with processing components. For example, the IPC interface 320 can be used for communication between processors in a multi-processor environment (for example, between cores), communication with other hardware threads on the same processor (for example, between processes), and communication with peripheral devices (for example, between devices). 
Generally, as a software API, the IPC interface 320 utilizes one or more processing resources, such as a multi-processor heap, a multi-processor linked list, and a message queue, to facilitate communication between processing components.

In FIG. 3, the embedded computing system 300 creates an IPC interface 320 between the MCU subsystem 328 and each virtual computing system (e.g., VM or virtual container) running on an HWA thread user. For example, the embedded computing system 300 allocates one IPC interface 320 to transmit message requests between the VM 302A and the MCU subsystem 328, and allocates another IPC interface 320 to transmit message requests between the VM 302B and the MCU subsystem 328. The embedded computing system 300 also creates an IPC interface 320 between the DSP 204 and the MCU subsystem 328. The VMs 302A and 302B each run a separate high-level OS (HLOS) in the embedded computing system 300. For the purposes of this disclosure, an HLOS means an embedded OS that is the same as or similar to an OS used in non-embedded environments such as desktop computers and smart phones. Referring to FIG. 3 as an example, the VMs 302A and 302B can run the same type of HLOS (for example, both run the Android™ OS) or different types of HLOS (for example, the VM 302A runs a Linux™ OS, and the VM 302B runs an Android™ OS).

Creating a separate and isolated IPC interface for the DSP 204 and for each virtual computing system (e.g., VM or virtual container) running on the host CPUs 202A and 202B enhances security and confidentiality by isolating faults and/or privacy intrusions. For example, in FIG. 3, the DSP 204 runs a real-time operating system (RTOS) 304 that provides features such as threads, semaphores, and interrupts. Compared with an HLOS, an RTOS can provide relatively fast interrupt response with lower memory cost. In an advanced driver assistance system application, by using the RTOS, the DSP 204 can manage automotive safety functions (for example, emergency braking) by processing real-time data from one or more sensors (for example, cameras). If another HWA thread user (for example, the host CPU 202A) suffers a system failure or privacy intrusion, the IPC interface 320 assigned to the DSP 204 is isolated and separated from the other IPC interfaces 320, so the automotive safety features managed by the DSP 204 are not affected. The IPC interface 320 will be discussed in more detail with reference to FIG. 5 later in this disclosure.

FIG. 3 shows that the MCU subsystem 328 includes an engine 308 that configures the MCU subsystem 328 to pair with HWA threads in the visual domain HWA 208, the display domain HWA 210, and the video domain HWA 212. By pairing with different types of HWA threads, the engine 308 can control and manage different types of HWA threads, and is not limited to communicating with a specific type of HWA thread. Using FIG. 3 as an example, after the MCU subsystem 328 receives a message request via an IPC interface 320, the engine 308 schedules the message request received from the DSP 204 and/or from the host CPUs 202A and 202B, and forwards it to one or more of the HWA threads in the visual domain HWA 208, the display domain HWA 210, and/or the video domain HWA 212. 
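A minimal sketch of how an engine such as the engine 308 might queue and dispatch message requests is given below; the priority-queue and pending-queue behavior it models is described in the following paragraph, and the class names and thread interface are illustrative assumptions rather than the disclosure's actual firmware.

```python
from collections import deque

class SchedulerSketch:
    """Illustrative dispatcher: priority queues plus a pending queue for busy threads."""

    def __init__(self, hwa_threads):
        # hwa_threads: dict mapping domain name -> list of thread objects assumed
        # to expose a .busy flag and an .execute(request, credentials) method.
        self.hwa_threads = hwa_threads
        self.queues = {"high": deque(), "low": deque()}   # e.g., DSP vs. host CPU sources
        self.pending = deque()

    def submit(self, request, credentials, priority="low"):
        self.queues[priority].append((request, credentials))

    def dispatch_one(self):
        for priority in ("high", "low"):                  # higher priority is served first
            if self.queues[priority]:
                request, credentials = self.queues[priority].popleft()
                thread = next((t for t in self.hwa_threads.get(request.domain, [])
                               if not t.busy), None)
                if thread is None:                        # compatible thread is busy:
                    self.pending.append((request, credentials))   # wait until available
                else:
                    thread.execute(request, credentials)
                return
```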
In one or more implementations, the engine 308 is firmware that supports policy settings (such as the priority and access control of each thread) for scheduling message requests and forwarding them to one or more HWA threads.

The engine 308 can support a priority-based queue service for each domain-specific HWA (e.g., the visual domain HWA 208). As shown in FIG. 3, the MCU subsystem 328 includes priority queues 306 that receive message requests from the IPC interfaces 320. Each priority queue 306 is set to receive message requests from one or more of the IPC interfaces 320. The priority queues 306 may be assigned different priorities according to the type of HWA thread user sending the message requests. For example, due to real-time constraints, the MCU subsystem 328 may assign a higher priority to the priority queue 306 that receives message requests from the DSP 204 than to the priority queues 306 assigned to the host CPUs 202A and 202B. The engine 308 can also order the received message requests within each priority queue according to a priority operation. As an example, the priority operation may place message requests in one of the priority queues 306 based on a first-in first-out (FIFO) operation. Other examples may use other priority assignment operations to sort message requests within a single priority queue 306. When the engine 308 extracts a message request from a priority queue 306 according to priority, the engine 308 assigns an HWA thread identifier to the message request. The HWA thread identifier indicates which HWA thread will execute the message request. When the assigned HWA thread is busy, the engine 308 pushes the pending message request to a pending queue and waits until the assigned HWA thread is available to process the message request. If the allocated HWA thread is already available or idle, the engine 308 schedules the message request for execution.

The engine 308 can also perform smart scheduling operations to support multiple types of HWA threads. As previously discussed, the embedded computing system 300 may include domain-specific HWAs with different processing capabilities. Because domain-specific HWAs may have different capabilities, the engine 308 is configured to schedule message requests for different types of HWA threads. In order to support multiple types of HWA threads, the MCU subsystem 328 includes a privilege configuration engine 310 that sends privilege configuration information to the domain-specific HWAs through the privilege generator 322 and/or supporting devices (such as the IO MMU 314). The privilege configuration information includes policy information indicating the type of privilege level used to access certain parts of the memory 318. The privilege generator 322 in an HWA thread and/or the IO MMU 314 uses the privilege configuration information to check the privilege credential information associated with each message request.

The different types of HWA threads include HWA thread classes that can check privilege credential information; for example, the IO MMU 314 can be used to check privilege credential information. The first type of HWA thread identifies HWA threads with a privilege generator 322 for dynamically processing privilege credential information (for example, the visual HWA thread 216A). If the allocated HWA thread includes the privilege generator 322, the engine 308 replays the privilege credential information obtained from the IPC interface 320 to the allocated HWA thread. 
The second type of HWA thread includes HWA threads that do not have the privilege generator 322 but can be assisted by other hardware components to check privilege credential information. For example, the IO MMU 314 shown in FIG. 3 can assist by checking the privilege credential information obtained from the IPC interface 320. The third type of HWA thread represents HWA threads that do not have a privilege generator and cannot use other hardware components to check privilege credential information. For the third type of HWA thread, the engine 308 may not be able to use the privilege credential information to perform additional security checks. In some implementations, the third type of HWA thread represents HWA threads that support hardware virtualization without checking privilege credential information.

FIG. 3 depicts that the visual HWA thread 216A within the visual domain HWA 208 also includes the privilege generator 322 and a visual HWA thread 326. The privilege generator 322 supports determining whether the privilege credential information associated with a message request satisfies the privilege level for accessing data in, and writing data to, the destination storage space. The privilege generator 322 evaluates the privilege credential information, such as a VM identifier, a secure or non-secure mode identifier, a user or supervisor mode identifier, and/or an HWA thread user identifier (for example, a host processor identifier), to determine whether the visual HWA thread 326 should access the destination storage space in the memory 318. In one or more implementations, the privilege generator 322 includes an initiator confidentiality controller and a quality of service engine. The initiator confidentiality controller supports tracking and evaluation of privilege credential information via memory mapped register (MMR) settings, such as VM identifiers and channelized firewalls. When the visual HWA thread 326 executes a message request, the quality of service engine supports priority-based policies via MMR settings. The visual HWA thread 326 represents a hardware thread that executes a message request after the privilege credential information for that message request has been verified. After executing the message request, the visual HWA thread 326 outputs its data to the memory 318.

The engine 308 can also classify HWA threads according to their address space utilization. In one or more implementations, when a domain-specific HWA utilizes an address space size that differs from the address space size adopted by the hardware thread user (e.g., a 64-bit HLOS), the engine 308 performs address space conversion. As part of the smart scheduling operation, when a message request is sent to certain HWA threads (e.g., the visual HWA thread 216A), the engine 308 converts address information from a larger address space to a smaller address space. For example, the visual domain HWA 208 includes the visual HWA thread 216A with an address expander 324 to support a larger address space (e.g., a 64-bit HLOS). In FIG. 3, the address expander 324 allows the visual HWA thread 216A, which utilizes a smaller address space (e.g., a 32-bit address space), to be compatible with larger address spaces (e.g., 36-bit, 40-bit, and 48-bit address spaces). In one or more implementations, the address expander 324 performs regional address translation (RAT), supporting address translation from a 32-bit address space to 36-bit, 40-bit, and/or 48-bit address spaces. The RAT supports multiple high address spaces, which can be mapped into the lower 32-bit address space via memory mapped register (MMR) settings. 
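A minimal sketch of region-based address translation in the spirit of the RAT just described is shown below; it assumes a simple region table rather than the actual MMR layout, and the region sizes, field names, and API are illustrative.

```python
from dataclasses import dataclass

@dataclass
class RatRegion:
    """One translation region (illustrative): a 32-bit window mapped to a high base."""
    low_base: int    # start of the window in the 32-bit address space
    size: int        # window size in bytes
    high_base: int   # corresponding base in the 36/40/48-bit address space

def expand_address(addr_32: int, regions: list[RatRegion]) -> int:
    """Translate a 32-bit address produced by an HWA thread to a wider address."""
    for region in regions:
        if region.low_base <= addr_32 < region.low_base + region.size:
            return region.high_base + (addr_32 - region.low_base)
    raise ValueError("address falls outside all configured RAT regions")

# Example: map a 32-bit window at 0x6000_0000 to a hypothetical 40-bit DDR region.
regions = [RatRegion(low_base=0x6000_0000, size=0x1000_0000, high_base=0x08_8000_0000)]
print(hex(expand_address(0x6000_1234, regions)))   # 0x880001234
```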
After an HWA thread (for example, the visual HWA thread 216A) completes execution of a message request, the HWA thread sends an interrupt completion notification back to the MCU subsystem 328. The MCU subsystem 328 includes an interrupt controller (INTC) 312 to receive and process interrupt completion notifications from one or more HWA threads. For each interrupt completion notification received by the INTC 312, the INTC 312 sends a confirmation message back to the HWA thread user to indicate completion of execution of the message request. The INTC 312 also informs the engine 308 that the HWA thread that sent the interrupt completion notification is now available to process a message request. Because one or more HWA threads are asynchronous hardware threads, the INTC 312 may be beneficial.

Figure 4 is a block diagram of another example embedded computing system 400 that includes HWA threads without a privilege generator. The embedded computing system 400 is similar to the embedded computing system 300 shown in FIG. 3, except that the visual HWA thread 216A does not include a privilege generator. As shown in FIG. 4, because the visual HWA thread 216A cannot check the privilege credential information for a message request, the MCU subsystem 328 provides instructions to the visual HWA thread 216A to reroute its output data to the IO MMU 314 for processing. When the IO MMU 314 receives output data from the visual HWA thread 216A, the IO MMU 314 checks the privilege credential information against the privilege configuration information received from the privilege configuration engine 310. If the IO MMU 314 determines that the message request is from a trusted source and has the necessary privilege credentials, the IO MMU 314 stores the output data to the destination memory address in the memory 318.

FIG. 5 is a block diagram of an example implementation of the IPC interface 320 shown in FIGS. 3 and 4. As previously mentioned, the IPC interface 320 facilitates communication between the host CPU 202A and the MCU subsystem 328. As shown in FIG. 5, the host CPU 202A creates and runs a VM 302A with an HLOS. When the host CPU 202A sends a message request 510 from the VM 302A toward a domain-specific HWA, a firewall 502 processes the message request 510. The firewall 502 has settings that allow hardware access to the IPC interface 320 based on a hardware resource identifier (for example, a CPU identifier). In other words, in order to isolate the IPC interface 320 from the other IPC interfaces 320 that transmit message requests from other HWA thread users, the firewall 502 blocks and filters out data from the other HWA thread users (for example, the host CPU 202B).

After the message request 510 passes through the firewall 502, the message request 510 encounters a first hardware agent 504, which writes the message request 510 and the privilege credential information 512 for the message request 510 into an IPC queue 506. The message request 510 may include destination HWA thread information, one or more commands to be executed, and a destination memory address (for example, an input/output (IO) buffer address) to store output data from the destination domain-specific HWA. 
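A rough sketch of this firewall-plus-FIFO hand-off is shown below, with the privilege credential fields enumerated next modeled as a simple record; the identifiers, allowed-source set, and queue API are illustrative assumptions, not taken from the disclosure.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Credentials:
    """Illustrative privilege credential record attached to each message request."""
    vm_id: int
    secure: bool        # secure vs. non-secure mode
    supervisor: bool    # supervisor vs. user mode
    source_id: int      # HWA thread user identifier (e.g., a host CPU identifier)

ALLOWED_SOURCES = {0x2A}          # e.g., only one designated host CPU may use this interface
ipc_queue: deque = deque()        # stands in for the FIFO IPC queue

def agent_write(request, creds: Credentials) -> bool:
    """Firewall check followed by the first agent writing into the FIFO."""
    if creds.source_id not in ALLOWED_SOURCES:
        return False                       # blocked: non-designated thread user
    ipc_queue.append((request, creds))     # request and credentials travel together
    return True

def agent_read():
    """Second agent pops entries in FIFO order and forwards them to the controller."""
    return ipc_queue.popleft() if ipc_queue else None
```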
The privilege credential information 512 includes sub-attributes, such as an identifier of the virtual computing system (e.g., VM or virtual container), an indication of whether the message request is associated with a secure mode or a non-secure mode and/or with a user mode or a supervisor mode, and an HWA thread user identifier (for example, the identifier of the host CPU 202A or 202B). Subsequently, a second hardware agent 508 reads the message request 510 and the privilege credential information 512 from the IPC queue 506, and transmits both the message request 510 and the privilege credential information 512 to the MCU subsystem 328. In one or more implementations, the IPC queue 506 represents a FIFO buffer, where the second hardware agent 508 reads the message requests 510 based on the order in which the first hardware agent 504 wrote the message requests 510 into the IPC queue 506. Other implementations may use other types of buffers to implement the IPC queue 506.

FIG. 6 is a flowchart of an implementation of a method 600 for exchanging communications between an HWA thread user and a multi-HWA function controller. The method 600 can be implemented by the MCU subsystem 328 and the IPC interface 320 as referenced in FIGS. 3 to 5. Specifically, the method 600 creates an IPC interface 320 for each virtual computing system hosted by an HWA thread user to facilitate communication between the HWA thread user and the MCU subsystem. Although FIG. 6 is described using the MCU subsystem 328 and the IPC interface 320, other implementations can use other types of multi-HWA function controllers and trusted sandboxed communication interfaces. In addition, even though FIG. 6 shows the blocks of the method 600 implemented as sequential operations, the method 600 is not limited to this order of operations, and other implementations of the method 600 may have one or more blocks implemented as parallel operations.

The method 600 begins at block 602 by creating a trusted sandboxed IPC interface to facilitate communication between the HWA thread user and the MCU subsystem communicating with the requested domain-specific HWA. In one or more implementations, the method 600 creates a separate IPC interface for each virtual computing system running on the HWA thread user. The separate and isolated IPC interfaces are created to prevent a system failure or privacy intrusion from affecting other HWA thread users. Then, the method 600 moves to block 604. At block 604, the method 600 allows the HWA thread user to access the created trusted sandboxed IPC interface and provide the message request to it. As an example, the method 600 may use a firewall to filter out message requests from other, non-designated HWA thread users.

The method 600 may move to block 606 to store the message request and the privilege credential information in the buffer of the trusted sandboxed IPC interface. Then, the method 600 continues to block 608 and receives the message request and the privilege credential information from the trusted sandboxed IPC interface. The method 600 moves to block 610 to determine whether an HWA thread of the domain-specific HWA is available for execution. If the HWA thread is not available, then at block 612 the method 600 provides the message request and the privilege credential information to a queue in the MCU subsystem, where the message request waits for the allocated HWA thread to become available. 
The method 600 moves to block 614 and dispatches the message request from the MCU subsystem to the domain-specific HWA when the HWA thread is available.

FIG. 7 is a flowchart of an implementation of a method 700. The method 700 classifies a message request according to the capabilities of the destination domain-specific HWA. The method 700 can be implemented by the multi-HWA function controller 214 or the MCU subsystem 328 as referenced in FIGS. 2 to 5. Recall that, as part of the scheduling operation of the multi-HWA function controller, the multi-HWA function controller organizes message requests into classes according to the capabilities of the domain-specific HWA that will execute the message request. By categorizing message requests in this way, the method 700 can schedule message requests for various domain-specific HWAs, where each domain-specific HWA includes one or more HWA threads. Similar to FIG. 6, although FIG. 7 shows the blocks of the method 700 implemented as sequential operations, the method 700 is not limited to this order of operations, and other implementations of the method 700 may have one or more blocks implemented as parallel operations.

The method 700 begins at block 702 by determining whether the HWA thread assigned to execute the message request supports privilege credential verification. In one or more implementations, an HWA thread that includes a privilege generator supports privilege credential verification, as previously discussed with reference to FIG. 3. If the method 700 determines that the allocated HWA thread supports privilege credential verification, the method 700 moves to block 704 to replay the privilege credential information captured by the trusted sandboxed IPC interface to the allocated HWA thread. After that, the method 700 moves to block 716 and sends the message request to the assigned HWA thread for execution.

Returning to block 702, if the method 700 determines that the assigned HWA thread does not support privilege credential verification, the method 700 moves to block 706 and determines whether hardware assistance via the IO MMU is available. In one or more implementations, the multi-HWA function controller provides privilege configuration information to hardware components other than the domain-specific HWA (e.g., the IO MMU). Providing the privilege configuration information allows the IO MMU or another hardware component to check the privilege credential information associated with the message request. If the method 700 determines that hardware assistance is available, the method 700 moves to block 708 and provides instructions that cause the domain-specific HWA to reroute its output to the hardware assistance component (e.g., the IO MMU). Alternatively, if the method 700 determines that no hardware assistance is available, the method 700 may move to block 710 to translate the destination virtual address to a physical address. At block 710, the method 700 does not verify or check the privilege credential information for the message request.

After block 708 or block 710, the method 700 then moves to block 712 and determines whether the physical destination address needs to be converted to another address space size. As mentioned earlier, some HWA threads can use address expanders to support the address capabilities of one or more OSs (such as 64-bit OSs) that utilize a larger address space. 
Because the address space used by the HWA thread may differ from the address space used by the HWA thread user, the method 700 determines whether to convert to another address space size. If it is necessary to convert the physical address to the target address space size, the method 700 moves to block 714 to convert the physical address and to replay the privilege credential information captured by the trusted sandboxed IPC interface to the assigned HWA thread. After that, the method 700 moves to block 716 and sends the message request to the assigned HWA thread for execution. Alternatively, if address space translation is not required, the method 700 moves directly to block 716.

Although several implementations have been provided in the present disclosure, it should be understood that the disclosed system and method may be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The examples herein should be considered illustrative rather than restrictive, and are not intended to be limited to the details given herein. For example, various elements or components may be combined or integrated in another system, or certain features may be omitted or not implemented.

In addition, without departing from the scope of the present disclosure, the technologies, systems, subsystems, and methods described and shown in the various implementations as discrete or separate may be combined or integrated with other systems, modules, technologies, or methods. Other items shown or discussed as being coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component, whether electrically, mechanically, or otherwise. |
Methods and systems are provided for protecting a circuit design of an integrated circuit. A logic circuit to be replaced is identified in at least a portion of the circuit design. The logic circuit in the circuit design is replaced with a bitstream and a configurable circuit including a memory circuit. A converted circuit design including the configurable circuit is generated for the integrated circuit. When the bitstream is stored in the memory circuit in the configurable circuit, the configurable circuit in the converted circuit design performs a logic function of the logic circuit. |
1. A method for protecting a circuit design of an integrated circuit, the method comprising: identifying logic circuits to be replaced in at least a portion of the circuit design; replacing said logic circuits in said circuit design with bitstreams and configurable circuits including memory circuits; and generating a transformed circuit design of the integrated circuit including the configurable circuits, wherein when the bitstream is stored in the memory circuit in the configurable circuit, the configurable circuit in the transformed circuit design performs the logic function of the logic circuit.
2. The method of claim 1, wherein the configurable circuitry comprises look-up table circuitry.
3. The method of claim 1, wherein the configurable circuit comprises a programmable finite state machine circuit.
4. The method of any one of claims 1-3, wherein identifying the logic circuit to replace in at least a portion of the circuit design further comprises: adjusting, based on the complexity of the circuit design, the number of said logic circuits identified for replacement in said circuit design.
5. The method of any one of claims 1-3, further comprising: evaluating a metric against the transformed circuit design to determine whether the transformed circuit design satisfies the metric; and if it is determined that the transformed circuit design does not satisfy the metric, then, using the constraints defined by the metric, replacing additional logic circuits in the transformed circuit design with additional bitstreams and additional configurable circuitry including additional memory circuitry to generate a revised converted circuit design of the integrated circuit.
6. The method of claim 5, wherein evaluating the metric against the transformed circuit design to determine whether the transformed circuit design satisfies the metric further comprises: evaluating whether the transformed circuit design exceeds the maximum amount of additional circuitry allowed for the converted design.
7. The method of claim 5, wherein evaluating the metric against the transformed circuit design to determine whether the transformed circuit design satisfies the metric further comprises: determining whether the bitstream can be reverse engineered by known reverse engineering techniques or known attacks.
8. The method of any one of claims 1-3, further comprising: performing synthesis, place, and route on the converted circuit design on the integrated circuit.
9. The method of any one of claims 1-3, further comprising: storing the bitstream in the memory circuit in the integrated circuit to configure the configurable circuit to perform the logic function of the logic circuit.
10. A non-transitory computer-readable storage medium comprising instructions stored thereon for causing a computer to perform a method for protecting a circuit design of an integrated circuit, the method comprising: identifying logic circuits performing logic functions in at least a portion of the circuit design to be replaced; and replacing the logic circuits in the circuit design with a bitstream and a configurable circuit to generate a transformed circuit design of the integrated circuit, wherein when the bitstream is stored in a memory circuit of the configurable circuit, the configurable circuit performs the logic function of the logic circuit, wherein the converted circuit design includes the configurable circuit.
11. The non-transitory computer readable storage medium of claim 10, wherein the configurable circuitry comprises look-up table circuitry.
12. The non-transitory computer readable storage medium of claim 10, wherein the configurable circuitry comprises programmable finite state machine circuitry.
13. The non-transitory computer-readable storage medium of any one of claims 10-12, wherein identifying the logic circuit performing the logic function to be replaced in at least a portion of the circuit design further comprises: adjusting the number of the logic circuits identified for replacement in the circuit design based on the complexity of the circuit design.
14. The non-transitory computer readable storage medium of any one of claims 10-12, further comprising: evaluating a metric against the transformed circuit design to determine whether the transformed circuit design satisfies the metric; and if it is determined that the transformed circuit design does not satisfy the metric, then, using the constraints defined by the metric, replacing additional logic circuits in the transformed circuit design with additional bitstreams and additional configurable circuitry including additional memory circuitry to generate a revised converted circuit design of the integrated circuit.
15. The non-transitory computer readable storage medium of claim 14, wherein evaluating the metric against the transformed circuit design to determine whether the transformed circuit design satisfies the metric further comprises: evaluating whether the converted circuit design exceeds the maximum number of additional circuits allowed for the converted circuit design.
16. A computer system configured to protect a circuit design of an integrated circuit, the computer system comprising: a logic circuit replacement tool for identifying logic circuits performing logic functions in at least a portion of the circuit design to be replaced, wherein the logic circuit replacement tool generates a converted circuit design of the integrated circuit by replacing the logic circuits in the circuit design with a bitstream and a configurable circuit, wherein when the bitstream is stored in the memory circuit in the configurable circuit, the configurable circuit performs the logic function of the logic circuit, wherein the converted circuit design includes the configurable circuit.
17. The computer system of claim 16, wherein the configurable circuitry comprises programmable finite state machine circuitry.
18. The computer system of claim 16, wherein the configurable circuitry comprises look-up table circuitry.
19. The computer system of any one of claims 16-18, wherein the logic circuit replacement tool adjusts the number of the logic circuits identified for replacement in the circuit design based on the complexity of the circuit design.
20. The computer system of any one of claims 16-18, wherein the logic circuit replacement tool evaluates metrics for the converted circuit design to determine whether the converted circuit design satisfies the metrics. |
System and method for logic circuit replacement with configurable circuits

Technical field
The present disclosure relates to electronic circuit systems and methods, and more particularly to systems and methods for replacing logic circuits with configurable circuits in circuit designs.

Background technique
Theft, reverse engineering, and piracy of the intellectual property in hardware electronic circuits are a major global problem. Therefore, there is a need to protect the designs of electronic circuits before and after manufacture and distribution. However, cryptographic solutions are generally not used to protect the design flow of electronic circuits, because some untrusted parties (e.g., in the supply chain) may need to access details of the design of an electronic circuit. Hardware obfuscation is a method that modifies the design of an electronic circuit to produce an obfuscated design that is substantially difficult to reverse engineer or reproduce. Traditional protection uses an obfuscator and a key that convert the original design into an obfuscated design. The functionality of the original design can be determined by applying the correct key to the obfuscated design.

Description of drawings
FIG. 1 shows an example of an obfuscator system that replaces logic circuits in the design of an integrated circuit with a bitstream of digital bits and configurable circuits in the integrated circuit programmed by the bitstream, according to an embodiment.
FIG. 2 illustrates an example of operations that may be performed by the logic circuit replacement tool of FIG. 1, according to an embodiment.
FIG. 3 shows an example of a look-up table (LUT) circuit according to an embodiment.
FIG. 4 illustrates a circuit system including an integrated circuit having a circuit design that has been protected by the logic circuit replacement tool of FIG. 1, according to an embodiment.

Detailed description
As discussed above, hardware obfuscation attempts to protect the design of an electronic integrated circuit (also referred to herein as a circuit design) by modifying the circuit design with a key to generate an obfuscated design that is difficult to reverse engineer without access to the key. However, an untrusted party may gain unauthorized access to the key, which enables the original design to be determined from the obfuscated design. Additionally, since an untrusted party may have access to the obfuscated design, a determined attacker may be able to implement an attack that discovers the functionality of the original design from the obfuscated design without access to the key.

According to some embodiments disclosed herein, systems and methods are provided for converting an original circuit design of an electronic integrated circuit into a converted circuit design of the electronic integrated circuit by replacing logic circuits in the original circuit design with a bitstream and a configurable circuit in the electronic integrated circuit programmed by the bitstream. The bitstream is not stored on the integrated circuit. Instead, the bitstream is stored in a separate device and provided only to trusted parties. The bitstream can be password protected. During operation of the integrated circuit, the bitstream is transferred to the integrated circuit and stored in a memory circuit in the configurable circuit. 
When the configurable circuit is programmed by the bitstream, the converted circuit design can perform the same functions as the original circuit design.

Because the bitstream is not stored in the integrated circuit, an attacker cannot learn the functionality of the original circuit design simply by having access to the integrated circuit. Anyone who has the integrated circuit but not the bitstream cannot reconstruct the original circuit design or the functionality of the original circuit design. As an example, a facility that manufactures the integrated circuit may possess a physical design of the integrated circuit, a netlist of the physical design, and test vectors for the physical design. However, with the logic replacement embodiments disclosed herein, the fabrication facility does not need to have access to the bitstream, since the bitstream is not required for the manufacture or testing of the integrated circuit. Individuals at the fabrication facility cannot reverse engineer the functionality of the original circuit design without access to the bitstream.

Figure 1 shows an example of an obfuscator system 100 that replaces logic circuits in a circuit design of an integrated circuit with a bitstream of digital bits and configurable circuits in the integrated circuit programmed by the bitstream, according to an embodiment. The obfuscator system 100 includes a logic circuit replacement tool 101. Obfuscator system 100 may, for example, include one or more computer systems. The computer systems in system 100 may include, for example, one or more processor circuits, storage/memory circuits, graphics processing circuits, programmable logic integrated circuits, input/output devices, and buses connecting these components together. Logic circuit replacement tool 101 may include computer hardware components and software tools implemented in one or more computer systems in obfuscator system 100. As shown in FIG. 1, an original circuit design (also referred to herein as an original design) is provided to obfuscator system 100. An original design is a circuit design of at least a part (or all) of an electronic integrated circuit. The original design is provided to the logic circuit replacement tool 101.

The obfuscator system 100 attempts to hide the intent of the original design by transforming the original design using the logic circuit replacement tool 101 to generate a transformed design of the integrated circuit. Logic circuit replacement tool 101 converts the original design by replacing one or more logic circuits (e.g., critical portions of the original design) in the original design with configurable circuits and a bitstream of digital bits. Configurable circuits may include memory circuits and logic circuits. The tool 101 generates a bitstream, which can be stored in a memory circuit and used to configure a configurable circuit such that the configurable circuit performs the logic functions of the logic circuit that was replaced in the original design. When the bitstream is stored in the memory circuit of the configurable circuit and used to configure the configurable circuit, the configurable circuit performs the same logic function as the logic circuit replaced in the original design. 
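As a toy illustration of this idea (not the tool's actual flow), the sketch below derives a LUT bitstream from a small combinational function by enumerating its truth table; a LUT loaded with that bitstream reproduces the function, while the same LUT without the bitstream reveals nothing about it. The function and helper names are illustrative.

```python
def derive_lut_bitstream(logic_fn, num_inputs: int) -> list[int]:
    """Enumerate the truth table of a combinational function to form a LUT bitstream."""
    return [logic_fn(*((index >> bit) & 1 for bit in range(num_inputs)))
            for index in range(2 ** num_inputs)]

def lut_eval(bitstream: list[int], inputs: tuple[int, ...]) -> int:
    """A LUT simply returns the stored bit selected by its inputs."""
    index = sum(bit << position for position, bit in enumerate(inputs))
    return bitstream[index]

# Replaced logic: a 3-input majority function (purely illustrative).
majority = lambda a, b, c: int(a + b + c >= 2)
bits = derive_lut_bitstream(majority, 3)          # [0, 0, 0, 1, 0, 1, 1, 1]
assert all(lut_eval(bits, (a, b, c)) == majority(a, b, c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```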
In the case where the bitstream is not stored in the memory circuit of the configurable circuit, the configurable circuit does not perform the logic function of the logic circuit that was replaced in the original design.

Thus, the logic circuit replacement tool 101 removes one or more logic circuits from the original design and replaces the removed logic circuits with a configurable circuit that performs the same logic function as the removed logic circuits when the bitstream is stored in the memory circuit in the configurable circuit and used to configure the configurable circuit. As examples, a configurable circuit may be a look-up table (LUT) that performs combinational logic functions or a programmable finite state machine circuit that may be configured to perform logic functions. Tool 101 can vary the number of logic circuits in the original design that are replaced with configurable circuits and bitstreams based on the complexity of the original design. In some exemplary embodiments, tool 101 may replace only a small portion (e.g., 10-30%) of the original design with configurable circuits and bitstreams.

The bitstream can be password protected. The bitstream is provided only to trusted parties, preventing unauthorized access to the original design. The bitstream is not initially stored in the integrated circuit containing the configurable circuits. Instead, the bitstream (e.g., an encrypted version of the bitstream) is transferred to and stored in the external storage device 110, as shown in FIG. 1. Only authorized parties having access to the bitstream can provide the bitstream from the storage device 110 to the integrated circuit for storage in the memory circuit in the configurable circuit.

A party that has access to the integrated circuit but not to the bitstream cannot reconstruct the original design. For example, an integrated circuit fabrication facility may have the physical circuit design, a netlist, and test vectors for the circuit design of the integrated circuit. With the embodiment of FIG. 1, the fabrication facility does not have access to the bitstream, since the bitstream is not required for the manufacture or testing of the integrated circuit. In the absence of the bitstream, a potential attacker at the manufacturing facility cannot obtain the original design. According to some embodiments, obfuscator system 100 also includes additional tools that can perform hardware obfuscation on portions of the original design that were not replaced by tool 101 with configurable circuits and bitstreams.

In some embodiments, an additional verification process may be performed after logic circuit replacement tool 101 generates the converted design to ensure that the functionality of the original design can be reproduced by applying the bitstream to the converted design. Obfuscator system 100 also evaluates the techniques performed by tool 101 against various attacks that attempt to reveal the original design from the converted design. Obfuscator system 100 may evaluate the transformed design with metrics that quantify the effectiveness of the transformation performed by tool 101 to generate the transformed design with respect to various malicious attacks.

During operation, executable software (e.g., software of the logic circuit replacement tool 101) runs on the processor(s) of the obfuscator system 100. A database may be used to store data for the operation of system 100. In general, software and data can be stored in non-transitory computer readable storage media (e.g., tangible computer readable storage media). 
Software code may sometimes be referred to as software, data, program instructions, instructions or code. The non-transitory computer readable storage medium may include computer memory chips, nonvolatile memory such as nonvolatile random access memory (NVRAM), one or more hard drives (e.g., magnetic drives or solid state drives) , one or more removable flash drives or other removable media, compact discs (CDs), digital versatile discs (DVDs), Blu-ray discs (BDs), other optical media, and floppy disks, tapes, or any other suitable (multiple a) memory or storage device. Software stored on a non-transitory computer readable storage medium can be executed in the obfuscator system 100 . When the software of the obfuscator system 100 is installed, the storage device of the obfuscator system 100 has instructions and data that cause computing devices in the obfuscator system 100 to perform various methods (processes). When performing these processes, the computing device is configured to implement the functions of obfuscator system 100 .FIG. 2 illustrates an example of operations that may be performed by the logic circuit replacement tool 101 of FIG. 1 in accordance with an embodiment. In operation 201 , tool 101 receives a register transfer level (RTL) file for an original design of an integrated circuit. RTL files can represent the original design in a human-readable form. An integrated circuit may be, for example, an application specific integrated circuit (ASIC), a programmable logic integrated circuit such as a field programmable gate array (FPGA), a microprocessor integrated circuit, or a graphics processing unit. In operation 202, the tool 101 uses the RTL file to identify one or more logic circuits in the original design that can be replaced with configurable circuits and bitstreams including memory circuits. In some embodiments, tool 101 may identify thousands or millions of logic circuits in a portion of the original design that could be replaced with configurable circuits and bitstreams including memory circuits. In some embodiments, tool 101 may identify one or more logic circuits in the original design that have high complexity for replacement. High-complexity logic circuits in the original design are more difficult to reverse engineer than low-complexity logic circuits. Therefore, replacing high-complexity logic circuits in the original design with configurable circuits and bitstreams containing memory circuits can provide increased security against reverse engineering or attack. In some embodiments, the tool 101 may identify one or more combinational logic circuits in the original design in operation 202 that can be replaced with a lookup table and a bitstream configuring the lookup table.In operation 203, the logic circuit replacement tool 101 replaces the logic circuits in the original design identified in operation 202 with the bitstream and configurable circuits including memory circuits in the integrated circuit to generate a converted design. Tool 101 generates a modified RTL file for the converted design. 
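The following sketch is one way the selection step of operations 201-203 could be approximated in software. Only the complexity-driven selection and the roughly 10-30% replacement fraction come from the text; the data structure and helper names are assumptions made for illustration.

```python
# Sketch of the selection step in operations 201-203 (illustrative only; the
# complexity scoring and the replacement fraction follow the text, but these
# helper names are assumptions, not the tool's actual interfaces).

from dataclasses import dataclass

@dataclass
class CandidateBlock:
    name: str
    gate_count: int          # rough proxy for complexity
    is_combinational: bool

def score_complexity(block: CandidateBlock) -> int:
    # Higher-complexity logic is harder to reverse engineer, so it is
    # preferred for replacement (operation 202).
    return block.gate_count

def select_for_replacement(blocks, fraction=0.2):
    """Pick roughly 'fraction' of the design, highest complexity first."""
    ranked = sorted(blocks, key=score_complexity, reverse=True)
    budget = max(1, int(len(ranked) * fraction))
    return [b for b in ranked if b.is_combinational][:budget]

design = [
    CandidateBlock("alu_decode", 420, True),
    CandidateBlock("parity", 12, True),
    CandidateBlock("fifo_ctrl", 90, False),
    CandidateBlock("crypto_round", 800, True),
]
to_replace = select_for_replacement(design)   # with 4 blocks and fraction=0.2, only 'crypto_round' is chosen
```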
In operation 203, the tool 101 removes the logic circuits identified in operation 202 from the original design, replaces the removed logic circuits with configurable circuits in the integrated circuit that include memory circuits, and generates a bitstream that, when stored in the memory circuits and used to configure the configurable circuits, causes the configurable circuits to perform the same logic functions as the logic circuits removed from the original design. As an example, in operation 203 the tool 101 may replace one or more combinational logic circuits (e.g., circuits that perform Boolean logic functions) in the original design with one or more look-up table (LUT) circuits in the integrated circuit. Each LUT circuit has a memory circuit. When a LUT circuit is configured with a bitstream (i.e., the bitstream is stored in the memory circuit in the LUT), the LUT circuit performs the same combinational logic function as the replaced combinational logic circuit in the original design. FIG. 3 shows an example of a look-up table (LUT) circuit 300 according to an embodiment. LUT circuit 300 is an example of a configurable circuit in an integrated circuit with a memory circuit that can be configured with a portion of a bitstream to perform a portion of the original design. LUT circuit 300 of FIG. 3 is a 3-input LUT with 8 memory circuits 301-308 (e.g., random access memory) and multiplexer circuit 309. In the example of FIG. 3, 8 bits of the bitstream generated by tool 101 are stored in memory circuits 301-308 prior to integrated circuit operation, with one bit of the bitstream stored in each of the memory circuits 301-308. The three input signals INP are provided to the 3 selection inputs of the multiplexer circuit 309. The input signals INP may be generated by other circuits in the integrated circuit during operation or may be generated externally. The input signals INP cause the multiplexer circuit 309 to select one or more of the bits stored in memory circuits 301-308 as one or more output signals OUT of the multiplexer 309. LUT 300 may be configured by the 8 bits stored in memory circuits 301-308 to perform any of a number of 3-input combinational (e.g., Boolean) logic functions in response to the input signals INP. As another example, in operation 203 the tool 101 may replace one or more logic circuits in the original design with one or more programmable finite state machine (PFSM) circuits. PFSM circuits are configurable circuits with memory circuits. When a PFSM circuit is configured with a bitstream, the PFSM circuit can be configured to perform the same logic function as the replaced logic circuit in the original design. In some embodiments, a PFSM circuit may be functionally equivalent to a look-up table.
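A minimal software model of the LUT of FIG. 3 is sketched below: 8 stored bits standing in for memory circuits 301-308 and an index operation standing in for multiplexer 309. It mirrors only the behavior described above and is not circuit-accurate.

```python
# Minimal software model of the 3-input LUT of FIG. 3 (8 memory circuits plus
# a multiplexer). Illustrative only.

class Lut3:
    def __init__(self):
        self.memory = [0] * 8          # memory circuits 301-308, one bit each

    def load_bitstream(self, bits):
        """Store the 8 bitstream bits prior to integrated circuit operation."""
        assert len(bits) == 8 and all(b in (0, 1) for b in bits)
        self.memory = list(bits)

    def evaluate(self, inp2, inp1, inp0):
        """INP drives the multiplexer select lines; OUT is the selected bit."""
        select = (inp2 << 2) | (inp1 << 1) | inp0
        return self.memory[select]

lut = Lut3()
lut.load_bitstream([0, 1, 0, 1, 0, 1, 1, 0])   # e.g. implements (a AND b) XOR c
print(lut.evaluate(1, 1, 0))                    # -> 1
```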
Referring again to FIG. 2, in operation 204, the logic circuit replacement tool 101 performs constrained synthesis on the modified RTL file generated for the converted design in operation 203 to generate a gate-level netlist for the converted design. In operation 204, the tool 101 may use any synthesis tool to create the gate-level netlist for the converted design from the modified RTL file. The constrained synthesis performed in operation 204 need not be a complete synthesis of the converted design; it may be, for example, a pre-synthesis process that prepares the converted design to create a netlist that can be evaluated against the various metrics applied in operation 205. In operation 205, the logic circuit replacement tool 101 evaluates metrics for the gate-level netlist generated in operation 204 for the converted design. The metrics can be selected to ensure that the converted design meets any desired standard. If in operation 205 the tool 101 determines that the gate-level netlist generated for the converted design in operation 204 does not satisfy one or more of the metrics, then the tool 101 performs additional iterations of operations 202-204, using constraints defined by the unmet metrics, to generate a modified gate-level netlist. After the additional iterations of operations 202-204 have been performed, the logic circuit replacement tool 101 again performs operation 205 to determine whether the modified gate-level netlist generated in the next iteration of operation 204 satisfies the metrics. As an example, tool 101 may determine in operation 205 whether the gate-level netlist of the converted design exceeds a metric indicative of a maximum additional circuit overhead allowed for the converted design. As a specific example, an integrated circuit may only have room for a limited amount of additional circuitry (e.g., 5-10%) in the converted design compared to the original design. If the gate-level netlist of the converted design exceeds the maximum amount of additional circuitry allowed as defined by the metric evaluated in operation 205, the tool 101 reduces the amount of additional circuitry in the converted design during additional iterations of operations 202-204 to generate a modified converted design with no more than the maximum amount of additional circuitry allowed by the metric. As another example, the logic circuit replacement tool 101 may perform a security analysis in operation 205 to determine whether the bitstream generated in operation 203 can be reverse engineered using known reverse engineering techniques or known attacks. If the tool 101 determines in operation 205 that the bitstream generated in operation 203 can be reverse engineered, or that any other security metric is violated, then the tool 101 replaces additional logic circuits in the original design with additional configurable circuits and additional bits in the bitstream during additional iterations of operations 202-204 to generate a modified converted design that satisfies the security metric. After the metrics evaluated in operation 205 have been satisfied, the logic circuit replacement tool 101 outputs the converted design in operation 206. The converted design output in operation 206 may be further obfuscated, for example, using other obfuscation tools. The converted design generated by tool 101 using the operations of FIG. 2 may be used to create an integrated circuit with configurable circuits including memory circuits. As an example, synthesis, place, and route may be performed for an integrated circuit using the converted design generated by tool 101. The bitstream generated in operation 203 is provided only to trusted parties, so that the original design of the integrated circuit cannot be accessed by any untrusted parties.
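The iteration described for operations 204-206 can be summarized by the control loop sketched below. The metric names, thresholds, and helper callables are placeholders assumed for illustration; only the re-run-under-constraints flow is taken from the text.

```python
# Sketch of the operation 204-205 iteration loop (illustrative control flow only).

MAX_OVERHEAD = 0.10        # e.g. at most 10% additional circuitry allowed

def refine_converted_design(original_design, replace_logic, synthesize,
                            overhead_of, passes_security_analysis,
                            max_iterations=10):
    constraints = {}
    for _ in range(max_iterations):
        converted = replace_logic(original_design, constraints)   # operations 202-203
        netlist = synthesize(converted)                           # operation 204 (constrained)
        # Operation 205: evaluate metrics and derive constraints if unmet.
        if overhead_of(netlist) > MAX_OVERHEAD:
            constraints["reduce_overhead"] = True
        elif not passes_security_analysis(netlist):
            constraints["replace_more_logic"] = True
        else:
            return converted                                       # operation 206: output design
    raise RuntimeError("metrics not satisfied within the iteration budget")
```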
FIG. 4 illustrates a circuit system including an integrated circuit having a circuit design that has been protected by the logic circuit replacement tool 101 of FIG. 1, according to an embodiment. The circuitry of FIG. 4 includes an integrated circuit (IC) 400, an integrated circuit (IC) 410, and a memory device 110. The integrated circuit 400 includes a decryption engine 401, a bitstream buffer circuit 402, a bitstream controller circuit 403, and a circuit design 404 that has been converted by the logic circuit replacement tool 101 of the system 100. Circuit design 404 is implemented in a portion of IC 400 that includes configurable circuits having memory circuits. A configurable circuit may be configured by a bitstream generated by tool 101 to perform the logic function of at least a portion of the original design of IC 400. The configurable circuit may be, for example, a LUT or a PFSM, as discussed above. In the embodiment of FIG. 4, the bitstream generated by the logic circuit replacement tool 101 is encrypted using an encryption key and stored in the storage device 110. The storage device 110 containing the encrypted bitstream is provided only to trusted parties intended to have access to the original design of the circuit design 404 of the IC 400. In order for IC 400 to operate according to the original design, the encrypted bitstream is initially provided from storage device 110 to firmware repository 411 in IC 410, as shown in FIG. 4. The firmware repository 411 performs error correction on the encrypted bitstream received from the storage device 110 using an error correction code to generate an error-corrected encrypted bitstream 420. The error-corrected encrypted bitstream 420 is then provided from the firmware repository 411 to the decryption engine 401. The decryption engine 401 includes a key storage circuit unit 421 (for example, a non-volatile memory) and a decryption tool 422. The encryption key is supplied to the key storage circuit unit 421. The key storage circuit unit 421 supplies the received encryption key to the decryption tool 422. Decryption tool 422 decrypts the encrypted bitstream 420 using the encryption key to generate a decrypted bitstream 431. The decrypted bitstream 431 is provided to the bitstream buffer circuit 402. Bitstream buffer circuit 402 buffers the digital bits in decrypted bitstream 431 to generate a decrypted and buffered bitstream 432. The decrypted and buffered bitstream 432 is provided to the bitstream controller circuit 403. The bitstream controller circuit 403 loads the decrypted and buffered bitstream 432 into the memory circuits in the configurable circuits in circuit design 404 as bitstream 433. After bitstream 433 is loaded and stored in the memory circuits in the configurable circuits, the configurable circuits are configured by bitstream 433 to perform the logic functions of the logic circuits that were replaced by tool 101 in the original design of circuit design 404 of IC 400.
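One possible software rendering of the FIG. 4 receive path is sketched below. The disclosure does not name a cipher or error-correction scheme, so AES-GCM from the Python 'cryptography' package and a pass-through correct_errors stub are used purely as stand-ins; the LUT objects are assumed to expose a load_bitstream method like the earlier LUT sketch.

```python
# Sketch of the FIG. 4 provisioning path on the receiving side: error-correct,
# decrypt, buffer, then load into the configurable circuits' memory circuits.
# Cipher, ECC, and LUT interfaces are illustrative assumptions.

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def correct_errors(raw: bytes) -> bytes:
    return raw                      # placeholder for firmware repository 411's ECC step

def provision(encrypted_blob: bytes, nonce: bytes, key: bytes, luts):
    corrected = correct_errors(encrypted_blob)                 # bitstream 420
    decrypted = AESGCM(key).decrypt(nonce, corrected, None)    # decryption tool 422 -> 431
    buffered = bytearray(decrypted)                            # buffer circuit 402 -> 432
    bit_iter = (bit for byte in buffered for bit in
                ((byte >> i) & 1 for i in range(8)))
    for lut in luts:                                           # controller 403 loads 433
        lut.load_bitstream([next(bit_iter) for _ in range(8)])
```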
The following examples relate to further embodiments. Example 1 is a method for protecting a circuit design of an integrated circuit, the method comprising: identifying logic circuits to be replaced in at least a portion of the circuit design; replacing the logic circuits in the circuit design with a bitstream and configurable circuits including memory circuits; and generating a transformed circuit design of the integrated circuit that includes the configurable circuits, wherein the configurable circuits in the transformed circuit design perform the logic functions of the logic circuits when the bitstream is stored in the memory circuits in the configurable circuits. In Example 2, the method of Example 1 may optionally include, wherein the configurable circuit includes a look-up table circuit. In Example 3, the method of any of Examples 1-2 may optionally include, wherein the configurable circuit includes a programmable finite state machine circuit. In Example 4, the method of any of Examples 1-3 may optionally include, wherein identifying the logic circuits to be replaced in at least a portion of the circuit design further includes adjusting, based on the complexity of the circuit design, the number of logic circuits in the circuit design that are identified to be replaced. In Example 5, the method of any of Examples 1-4 may optionally further include: evaluating a metric against the transformed circuit design to determine whether the transformed circuit design satisfies the metric; and, if it is determined that the transformed circuit design does not satisfy the metric, using constraints defined by the metric to replace additional logic circuits in the transformed circuit design with an additional bitstream and additional configurable circuits including additional memory circuits to generate a revised transformed circuit design for the integrated circuit. In Example 6, the method of Example 5 may optionally further include, wherein evaluating the metric against the transformed circuit design to determine whether the transformed circuit design satisfies the metric further includes evaluating whether the transformed circuit design exceeds a maximum amount of additional circuitry allowed for the transformed circuit design. In Example 7, the method of Example 5 may optionally include, wherein evaluating the metric against the transformed circuit design to determine whether the transformed circuit design satisfies the metric further includes determining whether the bitstream can be reverse engineered using known reverse engineering techniques or known attacks. In Example 8, the method of any of Examples 1-7 may optionally further include performing synthesis, place, and route for the integrated circuit using the transformed circuit design. In Example 9, the method of any of Examples 1-8 may optionally further include storing the bitstream in the memory circuits in the integrated circuit to configure the configurable circuits to perform the logic functions of the logic circuits. Example 10 is a non-transitory computer-readable storage medium comprising instructions stored thereon for causing a computer to perform a method for securing a circuit design of an integrated circuit, the method comprising: identifying logic circuits performing logic functions to be replaced in at least a portion of the circuit design; and replacing the logic circuits in the circuit design with a bitstream and configurable circuits to generate a transformed circuit design of the integrated circuit, wherein, when the bitstream is stored in memory circuits in the configurable circuits, the configurable circuits perform the logic functions of the logic circuits, and wherein the transformed circuit design includes the configurable circuits. In Example 11, the non-transitory computer-readable storage medium of Example 10 may optionally include, wherein the configurable circuit includes a look-up table circuit. In Example 12, the non-transitory computer-readable storage medium of any of Examples 10-11 may optionally further include, wherein the configurable circuit includes a programmable finite state machine circuit. In Example 13, the non-transitory computer-readable storage medium of any of Examples 10-12 may optionally include, wherein identifying the logic circuits performing logic functions to be replaced in at least a portion of the circuit design further includes adjusting, based on the complexity of the circuit design, the number of logic circuits identified for replacement in the circuit design. In Example 14, the non-transitory computer-readable storage medium of any of Examples 10-13 may optionally further include: evaluating a metric against the transformed circuit design to determine whether the transformed circuit design satisfies the metric; and, if it is determined that the transformed circuit design does not satisfy the metric, using constraints defined by the metric to replace additional logic circuits in the transformed circuit design with an additional bitstream and additional configurable circuits including additional memory circuits to generate a revised transformed circuit design for the integrated circuit. In Example 15, the non-transitory computer-readable storage medium of Example 14 may optionally further include, wherein evaluating the metric against the transformed circuit design to determine whether the transformed circuit design satisfies the metric further includes evaluating whether the transformed circuit design exceeds a maximum amount of additional circuitry allowed for the transformed circuit design. Example 16 is a computer system configured to protect a circuit design of an integrated circuit, the computer system comprising a logic circuit replacement tool for identifying logic circuits performing logic functions to be replaced in at least a portion of the circuit design, wherein the logic circuit replacement tool generates a transformed circuit design of the integrated circuit by replacing the logic circuits in the circuit design with a bitstream and configurable circuits, wherein, when the bitstream is stored in memory circuits in the configurable circuits, the configurable circuits perform the logic functions of the logic circuits, and wherein the transformed circuit design includes the configurable circuits. In Example 17, the computer system of Example 16 may optionally include, wherein the configurable circuit includes a programmable finite state machine circuit. In Example 18, the computer system of any of Examples 16-17 may optionally include, wherein the configurable circuit includes a look-up table circuit. In Example 19, the computer system of any of Examples 16-18 may optionally further include, wherein the logic circuit replacement tool adjusts, based on the complexity of the circuit design, the number of logic circuits in the circuit design identified to be replaced. In Example 20, the computer system of any of Examples 16-19 may optionally further include, wherein the logic circuit replacement tool evaluates metrics for the transformed circuit design to determine whether the 
transformed circuit design satisfies the metrics.The foregoing description of the exemplary embodiments has been presented for purposes of illustration. The foregoing description is not intended to be exhaustive or to limit the examples disclosed herein. The foregoing is merely illustrative of the principles of the disclosure and various modifications can be made by those skilled in the art. The above-mentioned embodiments can be implemented individually or in any combination. |
An apparatus and method are described for an on-chip reliability controller. For example, one embodiment of a processor comprises: a set of one or more cores to execute instructions and process data;a reliability controller to perform one or more self-test/diagnostic operations, the reliability controller to aggregate reliability data resulting from the self-test/diagnostic operations; a reliability estimator integral to the reliability controller to use the aggregated reliability data to perform a probability analysis to determine reliability estimates for one or more components of the processor; and a control unit integral to the reliability controller to adjust one or more variables and/or circuitry related to operation of the processor responsive to the reliability estimates. |
1.A processor comprising:A collection of one or more cores for executing instructions and processing data;A reliability controller for performing one or more self-test/diagnostic operations, the reliability controller for aggregating the results from the self-test/diagnostic operations and from one or more distributed across the processor Reliability data collected by a sensor array, where the reliability data includes correlation with voltage, frequency, bias temperature instability (BTI), electromigration (EM) damage, gate oxide (GOX) readings, and/or thermal loading Data related to ionic injection (HCI) readings;a reliability estimator, as part of the reliability controller, for performing a probabilistic analysis using the aggregated reliability data to determine reliability estimates for one or more components of the processor; anda control unit, as part of the reliability controller, for adjusting one or more variables and/or circuitry related to the operation of the processor based on the performance/lifetime profile in response to the reliability estimate .2.The processor of claim 1, wherein the one or more variables include a frequency and/or voltage at which the one or more components of the processor operate.3.3. The processor of claim 2, wherein the control unit is to perform a self-aging operation in response to the reliability estimate.4.3. The processor of claim 2, wherein the control unit is to perform a self-healing operation in response to the reliability estimate.5.3. The processor of claim 2, wherein the control unit is to perform a self-healing operation in response to the reliability estimate.6.The processor of claim 1, wherein the probabilistic analysis comprises a Bayesian probability calculation performed on the aggregated data.7.The processor of claim 1, wherein the control unit is configured to adjust the one or more variables and/or circuitry based on a desired level of performance and/or reliability as indicated by customer requirements.8.A processing method comprising:perform one or more self-test/diagnostic operations on the processor;Aggregate reliability data generated by the self-test/diagnostic operations and received from one or more sensor arrays distributed throughout the processor, wherein the reliability data includes differences with voltage, frequency, bias temperature data on stability (BTI), electromigration (EM) damage, gate oxide (GOX) readings and/or hot carrier injection (HCI) readings;performing a probabilistic analysis using the aggregated reliability data to determine reliability estimates for one or more components of the processor; andIn response to the reliability estimate, one or more variables and/or circuitry related to the operation of the processor are adjusted based on a performance/lifetime profile.9.9. The processing method of claim 8, wherein the one or more variables include frequency and/or voltage at which the one or more components of the processor operate.10.10. The processing method of claim 9, wherein the processing method further comprises performing a self-aging operation in response to the reliability estimate.11.10. The processing method of claim 9, wherein the processing method further comprises performing a self-healing operation in response to the reliability estimate.12.10. The processing method of claim 9, wherein the processing method further comprises performing a self-healing operation in response to the reliability estimate.13.9. 
The processing method of claim 8, wherein the probabilistic analysis includes a Bayesian probability calculation performed on the aggregated data.14.8. The processing method of claim 8, wherein the processing method further comprises adjusting the one or more variables and/or circuitry based on a desired level of performance and/or reliability as indicated by customer requirements.15.A processing system comprising:memory, for storing instructions and data;a processor for executing the instructions and processing the data;A graphics processor for performing graphics operations in response to graphics instructions;a network interface for receiving and transmitting data over a network;an interface for receiving user input from a mouse or cursor control device, the processor executing the instructions and processing the data in response to the user input;The processor includes:A collection of one or more cores for executing instructions and processing data;A reliability controller for performing one or more self-test/diagnostic operations, the reliability controller for aggregating the results from the self-test/diagnostic operations and from one or more distributed across the processor Reliability data collected by a sensor array, where the reliability data includes correlation with voltage, frequency, bias temperature instability (BTI), electromigration (EM) damage, gate oxide (GOX) readings, and/or thermal loading Data related to ionic injection (HCI) readings;a reliability estimator, as part of the reliability controller, for performing a probabilistic analysis using the aggregated reliability data to determine reliability estimates for one or more components of the processor; anda control unit, as part of the reliability controller, for adjusting one or more variables and/or circuitry related to the operation of the processor based on the performance/lifetime profile in response to the reliability estimate .16.16. The processing system of claim 15, wherein the one or more variables include a frequency and/or voltage at which the one or more components of the processor operate.17.17. The processing system of claim 16, wherein the control unit is to perform a self-aging operation in response to the reliability estimate.18.17. The treatment system of claim 16, wherein the control unit is to perform a self-healing operation in response to the reliability estimate.19.17. The processing system of claim 16, wherein the control unit is to perform a self-healing operation in response to the reliability estimate.20.16. The processing system of claim 15, wherein the probabilistic analysis includes a Bayesian probability calculation performed on the aggregated data.21.16. 
The processing system of claim 15, wherein the control unit is configured to adjust the one or more variables and/or circuitry based on a desired level of performance and/or reliability as indicated by customer requirements.22.A processing device comprising:means for performing one or more self-test/diagnostic operations on the processor;means for aggregating reliability data generated by the self-test/diagnostic operations and received from one or more sensor arrays distributed throughout the processor, wherein the reliability data includes correlations with voltage, frequency, data on Bias Temperature Instability (BTI), Electromigration (EM) damage, Gate Oxide (GOX) readings and/or Hot Carrier Injection (HCI) readings;means for performing a probabilistic analysis using the aggregated reliability data to determine reliability estimates for one or more components of the processor; andfor adjusting one or more variables and/or components of circuitry related to operation of the processor based on the performance/lifetime profile in response to the reliability estimate.23.23. The processing device of claim 22, wherein the one or more variables include a frequency and/or voltage at which the one or more components of the processor operate.24.24. The processing device of claim 23, wherein the processing device further comprises means for performing a self-aging operation in response to the reliability estimate.25.24. The treatment device of claim 23, wherein the treatment device further comprises means for performing a self-healing operation in response to the reliability estimate.26.24. The processing device of claim 23, wherein the processing device further comprises means for performing a self-healing operation in response to the reliability estimate.27.23. The processing device of claim 22, wherein the probabilistic analysis includes a Bayesian probability calculation performed on the aggregated data.28.23. The processing device of claim 22, wherein the processing device further comprises a means for adjusting the one or more variables and/or circuitry based on a desired level of performance and/or reliability as indicated by customer requirements part.29.A computer-readable medium having stored thereon instructions that, when executed, cause a computing device to perform the processing method of any of claims 8-14. |
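Before turning to the detailed description, the sketch below illustrates the flow recited in the claims just given: aggregate BTI/EM/GOX/HCI sensor readings, run a probabilistic (here, a simple Bayesian) analysis, and adjust frequency and voltage according to a performance/lifetime trade-off. The likelihood values, thresholds, and adjustment policy are assumptions for illustration only; the claims require only a probabilistic analysis and an adjustment based on a performance/lifetime profile.

```python
# Illustrative sketch of the claimed flow: aggregate sensor readings, perform a
# simple Bayesian update to estimate degradation, then back off the operating
# point when the estimate is too high. All numeric values are assumptions.

def bayesian_degradation_estimate(readings, prior=0.01):
    """Posterior probability that a component is degraded, given sensor data."""
    p = prior
    for sensor, value, threshold in readings:           # e.g. ("BTI", 0.62, 0.5)
        exceeded = value > threshold
        likelihood_if_degraded = 0.9 if exceeded else 0.1
        likelihood_if_healthy = 0.2 if exceeded else 0.8
        evidence = likelihood_if_degraded * p + likelihood_if_healthy * (1 - p)
        p = likelihood_if_degraded * p / evidence        # Bayes' rule
    return p

def adjust_operating_point(p_degraded, freq_mhz, vdd_mv, max_p=0.3):
    """Trade performance for lifetime when the degradation estimate is high."""
    if p_degraded > max_p:
        return freq_mhz - 100, vdd_mv - 20
    return freq_mhz, vdd_mv

readings = [("BTI", 0.62, 0.5), ("EM", 0.30, 0.4), ("HCI", 0.55, 0.5), ("GOX", 0.10, 0.3)]
p = bayesian_degradation_estimate(readings)
print(adjust_operating_point(p, freq_mhz=3200, vdd_mv=900))   # holds the point for these readings
```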
Apparatus and method for on-chip reliability controllertechnical fieldThe present invention generally relates to the field of computer processors. More particularly, the present invention relates to methods and apparatus for on-chip reliability controllers.Background technique1. processor microarchitectureAn instruction set or instruction set architecture (ISA) is the part of a computer architecture related to programming, including native data types, instructions, register shelf structures, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O ). It should be noted that the term "instruction" herein generally refers to macroinstructions - which are instructions provided to the processor for execution - as opposed to microinstructions or microoperations - which are the result of the processor's decoder decoding the macroinstruction. Microinstructions or microoperations can be configured to instruct execution units on a processor to perform operations to implement logic associated with the macroinstructions.ISA differs from microarchitecture as a set of processor design techniques for implementing instruction sets. Processors with different microarchitectures can share a common instruction set. For example, Intel® Pentium 4 processors, Intel® CoreTM™ processors, and processors from Advanced Micro Devices, Inc. of Sunnyvale, Calif., implement nearly identical versions of x86 instructions set (of which some extensions have been taken from a newer version), but with a different internal design. For example, the same register architecture of an ISA can be implemented differently in different microarchitectures using well-known techniques, including dedicated physical registers, using register renaming mechanisms (eg, using a register alias table (RAT), reordering buffer (ROB) ) and the retirement register file) of one or more dynamically allocated physical registers. Unless otherwise specified, the phrases register architecture, register file, and register are used herein to refer to what is visible to the software/programmer and the way in which instructions specify registers. Where distinction is required, the adjectives "logical," "architectural," or "software-visible" will be used to designate registers/files within a register architecture, while different adjectives will be used to designate within a given microarchitecture registers (eg, physical registers, reorder buffers, retirement registers, register pools).2. Reliability identificationReliability qualification of processors has historically been applied to the entire process technology whereby field failures in the distribution of parts are limited to a certain tolerable level (eg, 500 DPM) over the life of the product. Next-generation scaling will present reliability challenges that require separation from traditional qualification methods. 
Techniques are needed to measure reliability health and compensate accordingly on a unit-level basis, referred to herein as on-chip reliability.Description of drawingsA better understanding of the present invention can be obtained from the following detailed description taken in conjunction with the accompanying drawings, wherein:1A and 1B are block diagrams illustrating a general vector friendly instruction format and instruction templates thereof according to embodiments of the present invention;2A-D are block diagrams illustrating exemplary specific vector friendly instruction formats in accordance with embodiments of the present invention;Figure 3 is a block diagram of a register architecture according to one embodiment of the invention; and4A is a block diagram illustrating both an exemplary in-order fetch, decode, retire pipeline and an exemplary register renaming, out-of-order issue/execution pipeline in accordance with an embodiment of the present invention;4B is a diagram illustrating both an exemplary embodiment of an in-order fetch, decode, retire core and an exemplary register renaming, out-of-order issue/execute architecture core to be included in a processor in accordance with an embodiment of the present invention. block diagram;5A is a block diagram of a single processor core and its connection to an on-die interconnect network;5B illustrates an expanded view of a portion of the processor core of FIG. 5A according to an embodiment of the present invention;6 is a block diagram of a single-core processor and a multi-core processor with integrated memory controller and graphics according to an embodiment of the present invention;Figure 7 illustrates a block diagram of a system according to one embodiment of the invention;8 illustrates a block diagram of a second system according to an embodiment of the present invention;Figure 9 illustrates a block diagram of a third system according to an embodiment of the present invention;10 illustrates a block diagram of a system on a chip (SoC) according to an embodiment of the present invention;11 illustrates a block diagram of converting binary instructions in a source instruction set to binary instructions in a target instruction set in contrast to the use of a software instruction converter, according to an embodiment of the present invention;12 illustrates an exemplary multi-core processor on which embodiments of the present invention may be implemented;Figure 13 illustrates a reliability controller according to one embodiment of the present invention; andFigure 14 illustrates a method according to one embodiment of the present invention.detailed descriptionIn the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below. However, it will be apparent to those skilled in the art that embodiments of the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form in order not to obscure the underlying principles of embodiments of the invention.Exemplary Processor Architecture and Data TypesAn instruction set includes one or more instruction formats. A given instruction format defines various fields (number of bits, bit positions) to specify, among other things, the operation to be performed (opcode) and the operands (on which the operation is to be performed). 
Some instruction formats are further broken down by the definition of instruction templates (or sub-formats). For example, an instruction template for a given instruction format may be defined to have different subsets of the fields of the instruction format (the fields included are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or or is defined to have a given field interpreted differently. Thus, each instruction of the ISA is represented using a given instruction format (and, if defined, in a given one of the instruction templates for that instruction format) and includes fields for specifying operations and operands. For example, an exemplary ADD instruction has a specific opcode and instruction format that includes an opcode field for specifying that opcode and an operand field for selecting an operand (source 1/destination and source 2); and the instruction Occurrences of this ADD instruction in the stream will have specific content in the operand field that selects a specific operand. A collection of SIMD extensions (relating to Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) encoding scheme) have been published and/or published (see, for example, the Intel® 64 and IA-32 Architectures Software Developer's Manual ( Architectures Software Developers Manual), October 2011; and see Intel® Advanced Vector Extensions Programming Reference, June 2011).Exemplary Instruction FormatEmbodiments of the instructions described herein may be implemented in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instructions may be executed on such systems, architectures and pipelines, but are not limited to those detailed.A. General vector friendly instruction formatA vector friendly instruction format is an instruction format suitable for use with vector instructions (eg the presence of certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through a vector friendly instruction format, alternative embodiments use only vector operations in a vector friendly format.1A-1B are block diagrams illustrating a general vector friendly instruction format and instruction templates thereof according to embodiments of the present invention. FIG. 1A is a block diagram illustrating a generic vector friendly instruction format and its Class A instruction template according to an embodiment of the present invention; and FIG. 1B is a block diagram illustrating a generic vector friendly instruction format and its class A instruction template according to an embodiment of the present invention. Category B instruction template. Specifically, for the generic vector friendly instruction format 100, class A and class B instruction templates are defined, both of which include no memory access 105 instruction templates and memory access 120 instruction templates. 
The term "general" in the context of a vector friendly instruction format means that the instruction format is not tied to any particular instruction set.Although embodiments of the present invention will be described in which the vector friendly instruction format supports the following: 64-byte vector operands with 32-bit (4-byte) or 64-bit (8-byte) data element width (or size) length (or size) (and thus, a 64-byte vector consists of 16 doubleword-sized elements or alternatively 8 quadword-sized elements); with 16 bits (2 bytes) or 8 bits (1 byte) 64-byte vector operand length (or size) of data element width (or size); with 32 bits (4 bytes), 64 bits (8 bytes), 16 bits (2 bytes), or 8 bits ( 1 byte) 32-byte vector operand length (or size) of data element width (or size); and 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte) , or a 16-byte vector operand length (or size) of 8-bit (1-byte) data element width (or size); but alternative embodiments may support more, less, or different data element widths ( For example, more, fewer, and/or different vector operand sizes (eg, 256-byte vector operands) of 128 bits (16-byte data element width).The category A instruction templates in Figure 1A include: 1) Within the no memory access 105 instruction template, the no memory access, full round control type operation 110 instruction template is shown, and the no memory access, data transformation type operation 115 instruction template; and 2) within the memory access 120 instruction template, the memory access, transient 125 instruction template, and the memory access, non-transient 130 instruction template are shown. The Class B instruction templates in Figure 1B include: 1) Within the no memory access 105 instruction template, the no memory access, write mask control, partial round control type operation 112 instruction template is shown, and the no memory access, write mask code control, vsize type operation 117 instruction template; and 2) within the memory access 120 instruction template, the memory access, writemask control 127 instruction template is shown.The generic vector friendly instruction format 100 includes the following fields, listed in order below, shown in FIGS. 1A-1B .Format field 140 - A specific value in this field (the instruction format identifier value) uniquely identifies the vector friendly instruction format, and thus the occurrence of an instruction in the vector friendly instruction format in the instruction stream. Thus, this field is optional in the sense that it is not required for instruction sets that only have the general vector friendly instruction format.Base Operation Field 142 - Its content identifies the various base operations.Register Index Field 144—its contents specify the location of the source and destination operands (either in registers or in memory), either directly or through address generation. These include a sufficient number of bits to select N registers from a PxQ (eg 32x512, 16x128, 32x1024, 64x1024) register file. 
Although in one embodiment N may be up to three source and one destination registers, alternative embodiments may support more or fewer source and destination registers (eg, may support up to two sources, where these One of the sources also acts as a destination; up to three sources can be supported, where one of the sources also acts as a destination; up to two sources and one destination can be supported).Modifier field 146—its content identifies the presence of instructions in the general vector instruction format that specify memory accesses and those that do not; that is, in the no memory access 105 instruction template and memory access 120 instructions Identify between templates. Memory access operations read and/or write to the memory hierarchy (in some cases where values in registers are used to specify source and/or destination addresses), while non-memory access operations do not (eg, source and destination are registers ). Although in one embodiment this field also selects between three different ways to perform memory address operations, alternative embodiments may support more, fewer, or different ways for performing memory address operations.Augmentation operation field 150 - its content identifies which of a number of different operations to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a category field 168 , an alpha field 152 , and a beta field 154 . Augmenting the operation field 150 allows a general set of operations to be performed in a single instruction instead of 2, 3, or 4 instructions.Scale field 160—its contents allow scaling of the contents of the index field for memory address generation (eg, for address generation using 2 scale*index+base).Displacement field 162A—its content is used as part of memory address generation (eg, for address generation using 2 scaling*index+base+displacement).Displacement Factor Field 162B (note that the concatenation of Displacement Field 162A directly on Displacement Factor field 162B indicates that one or the other is used)—its contents are used as part of address generation; it specifies the size to be accessed through memory ( N) to scale the displacement factor—where N is the number of bytes in the memory access (eg, for address generation using 2 scale * index + base + scaled displacement). Redundant low-order bits are ignored, and therefore, the contents of the displacement factor field are multiplied by the total memory operand size (N) to generate the final displacement to be used in operating the effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 174 (described later herein) and the data manipulation field 154C. The displacement field 162A and the displacement factor field 162B are optional in the sense that they are not used for the no memory access 105 instruction template and/or different embodiments may implement only one or neither of the two.Data Element Width field 164—its content identifies which of the multiple data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). 
This field is optional in the sense that it is not required if only one data element width is supported and/or if some aspect of the opcode is used to support the data element width.Writemask field 170 - its contents control, on a per data element position basis, whether that data element position in the destination vector operand reflects the results of the base operation and augmentation operation. Class A instruction templates support merge write masking, while class B instruction templates support both merge and zero write masking. When merging, the vector mask allows any set of elements in the destination to be protected from updating during the execution of any operation (specified by the base and augment operations); in another embodiment, the corresponding The mask bits have the old value of each element of the destination of 0. In contrast, when zeroing, a vector mask allows any set of elements in the destination to be zeroed during the execution of any operation (specified by the base and augment operations); in one embodiment, The element of the destination is set to 0 when the corresponding mask bit has a value of 0. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of the elements being modified, from first to last); however, the elements being modified are not necessarily is continuous. Thus, the write mask field 170 allows partial vector operations, including loads, stores, arithmetic, logic, and the like. Although an embodiment of the present invention is described in which the contents of the writemask field 170 select one of a plurality of writemask registers containing the writemask to be used (and thus the contents of the writemask field 170 indirectly identify to be executed masking), but alternative embodiments instead or additionally allow the contents of mask write field 170 to directly specify the masking to be performed.Literal field 172 - its content allows specification of the immediate. This field is optional in the sense that it does not exist in implementations of the general vector-friendly format that do not support immediates and in the sense that it does not exist in instructions that do not use immediates.Category field 168 - its content identifies between different categories of instructions. Referring to Figures 1A-B, the content of this field selects between Category A and Category B instructions. In FIGS. 1A-B, rounded squares are used to indicate specific values presented in fields (eg, category A 168A and category B 168B corresponding to category field 168 in FIGS. 1A-B).Instruction Templates for Category AIn the case of a category A non-memory access 105 instruction template, the alpha field 152 is interpreted as an RS field 152A, the content of which identifies which of the different augmentation operation types is to be performed (eg, round 152A.1 and data Transforms 152A.2 for no memory access, round type operations 110, and no memory access, data transform type operations 115 instruction templates are specified accordingly), while beta field 154 identifies which of the specified type of operations is to be performed. 
In the no memory access 105 instruction template, scale field 160, displacement field 162A, and displacement scale field 162B are absent.No memory access instruction template - full rounding control type operationIn the no memory access full rounding control type operation 110 instruction template, beta field 154 is interpreted as rounding control field 154A, the content of which provides static rounding. Although in the described embodiment of the present invention rounding control field 154A includes suppression of all floating point exception (SAE) fields 156 and rounding operation control field 158, alternative embodiments may support that both these concepts may be combined Encoded into the same field, or with only one or the other of these concepts/fields (eg, may have only round operation control field 158).SAE field 156—its content identifies whether exception reporting is disabled; when the content of SAE field 156 indicates that suppression is enabled, the given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler.Round Operation Control Field 158 - Its contents identify which of a set of rounding operations to perform (eg, round up, round down, round towards zero, and round to nearest). Thus, the round operation control field 158 allows for rounding mode changes on an instruction-by-instruction basis. In one embodiment of the invention where the processor includes a control register for specifying the rounding mode, the contents of the rounding operation control field 150 override that register value.No Memory Access Instruction Templates—Data Transformation Type OperationsIn the no memory access data transform type operation 115 instruction template, beta field 154 is interpreted as data transform field 154B, the content of which identifies which of a plurality of data transforms is to be performed (eg, no data transform, swizzle ),broadcast).In the case of a class A memory access 120 instruction template, the alpha field 152 is interpreted as an eviction hint field 152B, the content of which identifies which of the eviction hints is to be used (in FIG. 1A, transient 152B.1 and non-transient 152B.2 for memory access, transient 125 instruction templates, and memory access, non-transient 130 instruction templates are specified accordingly), while beta field 154 is interpreted as data manipulation field 154C, the contents of which identify multiple data manipulation operations (also Know which of the primitives is to be performed (eg, no manipulation; broadcast; up-conversion of source; and down-conversion of destination). The memory access 120 instruction template includes a scale field 160, and optionally a displacement field 162A or a displacement scale field 162B.Vector memory instructions perform vector loads from memory and vector stores to memory through translation support. As with conventional vector instructions, vector memory instructions transfer data from/to memory in a data-element-wise fashion, with the element actually being transferred being indicated by the contents of the vector mask selected as the write mask.Memory Access Instruction Templates - TemporaryTransient data is data that is likely to be reused fast enough to benefit from being cached. 
However, this is a hint, and different processors can implement it in different ways, including ignoring the hint entirely.Memory Access Instruction Templates - Non-TemporaryNon-transitory data is data that is unlikely to be reused fast enough to benefit from being cached in a tier 1 cache and should be given priority for eviction. However, this is a hint, and different processors can implement it in different ways, including ignoring the hint entirely.Instruction Templates for Category BIn the case of a class B instruction template, the alpha field 152 is interpreted as a writemask control (Z) field 152C, the content of which identifies whether the writemask controlled by the writemask field 170 should be merged or zeroed.In the case of a class B non-memory access 105 instruction template, the portion of beta field 154 is interpreted as RL field 157A, the content of which identifies which of the different augmentation operation types is to be performed (eg, rounding 157A.1 and vector length (VSIZE) 157A.2 for no memory access, write mask control, partial round control type operations 112 instruction templates, and no memory access, write mask control, VSIZE type operations 117 instruction templates are specified accordingly), And the remainder of the beta field 154 identifies which of the specified type of operations is to be performed. In the no memory access 105 instruction template, scale field 160, displacement field 162A, and displacement scale field 162B are absent.In no memory access, write mask control, partial round control type operation 110 instruction template, the remainder of beta field 154 is interpreted as round operation field 159A, and exception reporting is disabled (the given instruction does not report any kind of floating-point exception flag and does not invoke any floating-point exception handler).Round operation control field 159A—as with round operation control field 158, its contents identify which of a set of rounding operations to perform (eg, round up, round down, round towards zero, and round to nearest ). Thus, the round operation control field 159A allows for rounding mode changes on a per-instruction basis. In one embodiment of the invention where the processor includes a control register for specifying the rounding mode, the contents of the rounding operation control field 150 override that register value.In the no memory access, writemask control, VSIZE type operation 117 instruction template, the remainder of the beta field 154 is interpreted as a vector length field 159B, the content of which identifies which of multiple data vector lengths is to be executed (eg , 128, 256, or 512 bytes).In the case of a class B memory access 120 instruction template, the portion of beta field 154 is interpreted as broadcast field 157B, the content of which identifies whether a broadcast type data manipulation operation is to be performed, and the remainder of beta field 154 is interpreted is the vector length field 159B. The memory access 120 instruction template includes a scale field 160, and optionally a displacement field 162A or a displacement scale field 162B.With respect to the general vector friendly instruction format 100 , the complete opcode field 174 is shown, including the format field 140 , the base operation field 142 , and the data element width field 164 . 
Although one embodiment is shown in which the full opcode field 174 includes all of these fields, in embodiments that do not support all of these fields, the full opcode field 174 includes less than all of these fields. The full opcode field 174 provides an operation code (opcode).Augmentation operation field 150, data element width field 164, and writemask field 170 allow these features to be specified on a per-instruction basis in the general vector friendly instruction format.The combination of the writemask field and the data element width field creates typed instructions because they allow masks to be applied based on different data element widths.The various instruction templates established within Category A and Category B are beneficial in different contexts. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For example, high performance general purpose out-of-order cores intended for general purpose computing may support only category B, cores intended primarily for graphics and/or scientific (throughput) computing may support only category A, and cores intended for both Both classes may be supported (of course, some mix of cores with templates and instructions from both classes but not all templates and instructions from both classes is within the confines of this invention). Likewise, a single processor may include multiple cores, all of which support the same class or where different cores support different classes. For example, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only category A, while one or more of the general purpose cores may support Class B only high-performance general-purpose cores with out-of-order operation and register renaming intended for general-purpose computing. Another processor that does not have a separate graphics core may include one more general purpose in-order or out-of-order core that supports both class A and class B. Of course, features from one class may also be implemented in another class in different embodiments of the invention. Programs written in a high-level language will be translated (eg, just-in-time compiled or statically compiled) into a number of different runnable forms, including: 1) with instructions-only classes for execution that are supported by the target processor form; or 2) have alternative routines written using different combinations of all classes of instructions and have control flow code that selects the routine to run based on the instructions supported by the processor (which is currently running the code). flow code).B. Exemplary Specific Vector Friendly Instruction Format2 is a block diagram illustrating an exemplary specific vector friendly instruction format according to an embodiment of the present invention. FIG. 2 shows a specific vector friendly instruction format 200 that is specific in that it specifies the location, size, interpretation, and order of fields, and the meaning of the values of some of those fields. The specific vector friendly instruction format 200 may be used to extend the x86 instruction set, and thus some fields of the fields are similar or identical to those used in the existing x86 instruction set and its extensions (eg, AVX). 
This format is consistent with the prefix encoding field, true opcode byte field, MOD R/M field, SIB field, displacement field, and immediate field with extensions to the existing x86 instruction set. Fields from FIG. 1 to which the fields from FIG. 2 map are shown.It should be understood that although embodiments of the present invention are described for illustrative purposes with reference to a specific vector friendly instruction format 200 in the context of a general vector friendly instruction format 100, the present invention is not limited to specific vectors unless otherwise stated. Friendly Instruction Format 200. For example, the general vector friendly instruction format 100 contemplates multiple possible sizes for various fields, while the specific vector friendly instruction format 200 is shown as having fields of particular sizes. By way of particular example, although the data element width field 164 is shown as a bit field in the particular vector friendly instruction format 200, the invention is not so limited (that is, the generic vector friendly instruction format 100 contemplates the data element width other sizes of field 164).The generic vector friendly instruction format 100 includes the following fields shown in Figure 2A, listed below in order.EVEX prefix (bytes 0-3) 202-encoded in four bytes.Format Field 140 (EVEX Byte 0, Bits[7:0]) - The first byte (EVEX Byte 0) is the Format Field 140 and it contains 0x62 (used to identify the unique value in vector friendly directive format).The second-fourth bytes (EVEX bytes 1-3) include a number of bit fields that provide specific capabilities.REX field 205 (EVEX byte 1, bits [7-5])—consists of: EVEX.R bit field (EVEX byte 1, bits [7]—R), EVEX.X bit field (EVEX byte 1, bits [6]—X), and 157BEX bytes 1, bits [5]—B). EVEX.R, EVEX.X, and EVEX.B bitfields provide the same functionality as the corresponding VEX bitfields and are encoded using 1s complement form, ie ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. The other fields of the instruction encode the lower three bits of the register index (rrr, xxx, and bbb) as known in the art, so that Rrrr, Xxxx, and Bbbb can be accessed by adding EVEX.R, EVEX.X, and EVEX. B to form.REX' field 110 - This is the first part of the REX' field 110 and is the upper 16 or lower 16 EVEX.R' bit field used to encode the extended 32 register set (EVEX byte 1, bit [4] -R'). In one embodiment of the invention, this bit is stored in a bit-reversed format, along with other bits as indicated below, to identify (in the well-known x8632-bit pattern) the BOUND instruction whose true opcode byte is 62 , but the value of 11 in the MOD field is not accepted in the MOD R/M field (described below); alternative embodiments of the present invention do not store this bit and the other bit indicated below in an inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and another RRR from other fields.Opcode Map Field 215 (EVEX Byte 1, Bits[3:0]—mmmm)—The leading opcode byte (0F, 0F 38, or 0F 3) implied by its content encoding.Data Element Width Field 164 (EVEX Byte 2, Bits[7]-W) - Denoted by the symbol EVEX.W. 
EVEX.W is used to define the granularity (size) of the data type (32-bit data elements or 64-bit data elements).EVEX.vvvv 220 (EVEX byte 2, bits[6:3]-vvvv) - The role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first specified in inverted (1s complement) form source register operand, and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand specified in 1s complement form for some vector shifts; or 3) EVEX.vvvv does not encode any operands, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 220 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, additional different EVEX bit fields are used to extend the specifier size to 32 registers.EVEX.U 168 Category field (EVEX byte 2, bits[2]-U) - if EVEX.U=0, it indicates Category A or EVEX.U0; if EVEX.U=1, it indicates Category B or EVEX.U1.Prefix encoding field 225 (EVEX byte 2, bits[1:0]-pp)—Provides additional bits for the base operation field. In addition to providing support for legacy SSE instructions in the EVEX prefix format, this has the benefit of a compact SIMD prefix (instead of requiring bytes to represent the SIMD prefix, EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use SIMD prefixes (66H, F2H, F3H) both in the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and is expanded into legacy SIMD prefixes at runtime before the PLA provided to the decoder (so the PLA can run both legacy and EVEX formats of these legacy instructions without modification). While newer instructions can directly use the contents of the EVEX prefix encoding field as an opcode extension, some embodiments extend in a similar manner for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. Alternative embodiments may redesign the PLA to support 2-bit SIMD prefix encoding, and thus require no extensions.Alpha field 152 (EVEX byte 3, bit[7]—EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.writemask control, and EVEX.N; also shown by alpha )—As described earlier, this field is context-specific.Beta field 154 (EVEX byte 3, bits [6:4] - SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also known by βββ shown)—As previously described, this field is context-specific.REX' field 110 - This is the remainder of the REX' field and is the upper 16 or lower 16 EVEX.V' bit field that can be used to encode the extended 32 register set (EVEX byte 3, bit[3] -V'). The bits are stored in a bit-reversed format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V', EVEX.vvvv.Writemask field 170 (EVEX byte 3, bits[2:0]-kkk)—its content specifies the index of the register in the writemask register as previously described. In one embodiment of the invention, the specific value EVEX.kkk=000 has special behavior implying that no writemask is used for a particular instruction (this may include using a hardwired writemask to all registers or bypassing the masking hardware in a variety of ways).The true opcode field 230 (byte 4) is also known as the opcode byte. The part of the opcode is specified in this field.MOD R/M field 240 (byte 5) includes MOD field 242 , Reg field 244 , and R/M field 246 . 
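Before continuing with the MOD R/M, SIB, and displacement fields below, the following sketch (not part of the patent text) summarizes how the EVEX prefix bit fields described above could be pulled out of the four prefix bytes. Bit positions follow the description above; the inversion/complement conventions (e.g., for vvvv, R', and V') are noted but not applied, and all names are illustrative.

```python
# Illustrative sketch: extract the named bit fields from a 4-byte EVEX prefix
# as laid out above (byte 0 = format field 0x62, bytes 1-3 carry the rest).
def decode_evex_prefix(b0, b1, b2, b3):
    assert b0 == 0x62, "format field 140 must contain 0x62"
    return {
        "R":     (b1 >> 7) & 1,      # EVEX.R
        "X":     (b1 >> 6) & 1,      # EVEX.X
        "B":     (b1 >> 5) & 1,      # EVEX.B
        "R'":    (b1 >> 4) & 1,      # REX' field 110 (stored bit-inverted)
        "mmmm":  b1 & 0x0F,          # opcode map field 215
        "W":     (b2 >> 7) & 1,      # data element width field 164
        "vvvv":  (b2 >> 3) & 0x0F,   # EVEX.vvvv 220 (stored in 1s complement)
        "U":     (b2 >> 2) & 1,      # class field 168
        "pp":    b2 & 0x03,          # prefix encoding field 225
        "alpha": (b3 >> 7) & 1,      # alpha field 152 (EH/rs/Z ...)
        "beta":  (b3 >> 4) & 0x07,   # beta field 154 (SSS)
        "V'":    (b3 >> 3) & 1,      # remainder of REX' (stored bit-inverted)
        "kkk":   b3 & 0x07,          # write mask field 170
    }

# Example: decode a hypothetical prefix 62 F1 7C 48
print(decode_evex_prefix(0x62, 0xF1, 0x7C, 0x48))
```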
As previously described, the content of the MOD field 242 distinguishes between memory access and non-memory access operations. The role of the Reg field 244 can be summarized in two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not being used to encode any instruction operand. The role of the R/M field 246 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) Byte (Byte 6)—as previously described, the content of the scale field 150 is used for memory address generation. SIB.xxx 254 and SIB.bbb 256—the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement Field 162A (Bytes 7-10)—when the MOD field 242 contains 10, bytes 7-10 are the displacement field 162A, which works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

Displacement Factor Field 162B (Byte 7)—when the MOD field 242 contains 01, byte 7 is the displacement factor field 162B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 162B is a reinterpretation of disp8; when the displacement factor field 162B is used, the actual displacement is determined by the content of the displacement factor field multiplied by the size (N) of the memory operand access. This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement, but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 162B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 162B is encoded in the same way as the x86 instruction set 8-bit displacement (so there are no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths, only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).

The immediate field 172 operates as previously described.
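The compressed displacement (disp8*N) scheme described above lends itself to a short numeric sketch, which is not part of the patent text; the 64-byte operand size used in the example is only an illustration.

```python
# Illustrative sketch of the disp8*N reinterpretation: the stored 8-bit
# displacement factor is sign-extended and then scaled by the size N of
# the memory operand access to obtain the byte-wise address offset.
def signed8(byte):
    """Interpret an encoded byte (0..255) as a signed 8-bit value."""
    return byte - 256 if byte >= 128 else byte

def effective_displacement(disp8_byte, n):
    """disp8*N: hardware scales the stored factor by the operand size N."""
    return signed8(disp8_byte) * n

# Example: for a 64-byte memory access, factor 0x01 reaches offset +64 and
# factor 0xFF (-1) reaches offset -64, well beyond what a legacy disp8
# (limited to -128..127 bytes) could express in a single byte per step.
print(effective_displacement(0x01, 64))   # 64
print(effective_displacement(0xFF, 64))   # -64
```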
Full Opcode Field

FIG. 2B is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the full opcode field 174 according to one embodiment of the present invention. Specifically, the full opcode field 174 includes the format field 140, the base operation field 142, and the data element width (W) field 164. The base operation field 142 includes the prefix encoding field 225, the opcode map field 215, and the true opcode field 230.

Register Index Field

FIG. 2C is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the register index field 144 according to one embodiment of the present invention. Specifically, the register index field 144 includes the REX field 205, the REX' field 210, the MODR/M.reg field 244, the MODR/M.r/m field 246, the VVVV field 220, the xxx field 254, and the bbb field 256.

Augmentation Operation Field

FIG. 2D is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the augmentation operation field 150 according to one embodiment of the present invention. When the class (U) field 168 contains 0, it signifies EVEX.U0 (class A 168A); when it contains 1, it signifies EVEX.U1 (class B 168B). When U=0 and the MOD field 242 contains 11 (signifying a no memory access operation), the alpha field 152 (EVEX byte 3, bit [7]—EH) is interpreted as the rs field 152A. When the rs field 152A contains 1 (rounding 152A.1), the beta field 154 (EVEX byte 3, bits [6:4]-SSS) is interpreted as the rounding control field 154A. The rounding control field 154A includes a one-bit SAE field 156 and a two-bit round operation field 158. When the rs field 152A contains 0 (data transform 152A.2), the beta field 154 (EVEX byte 3, bits [6:4]-SSS) is interpreted as a three-bit data transform field 154B. When U=0 and the MOD field 242 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 152 (EVEX byte 3, bit [7]—EH) is interpreted as the eviction hint (EH) field 152B and the beta field 154 (EVEX byte 3, bits [6:4]-SSS) is interpreted as a three-bit data manipulation field 154C.

When U=1, the alpha field 152 (EVEX byte 3, bit [7]—EH) is interpreted as the write mask control (Z) field 152C. When U=1 and the MOD field 242 contains 11 (signifying a no memory access operation), part of the beta field 154 (EVEX byte 3, bit [4]-S0) is interpreted as the RL field 157A; when it contains 1 (rounding 157A.1), the remainder of the beta field 154 (EVEX byte 3, bits [6-5]-S2-1) is interpreted as the round operation field 159A, while when the RL field 157A contains 0 (VSIZE 157A.2), the remainder of the beta field 154 (EVEX byte 3, bits [6-5]-S2-1) is interpreted as the vector length field 159B (EVEX byte 3, bits [6-5]-L1-0). When U=1 and the MOD field 242 contains 00, 01, or 10 (signifying a memory access operation), the beta field 154 (EVEX byte 3, bits [6:4]-SSS) is interpreted as the vector length field 159B (EVEX byte 3, bits [6-5]-L1-0) and the broadcast field 157B (EVEX byte 3, bit [4]-B).

C. Exemplary Register Architecture

FIG. 3 is a block diagram of a register architecture 300 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 310 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 200 operates on these overlaid register files as shown in the following table. In other words, the vector length field 159B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 159B operate on the maximum vector length.
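The register overlay and length-selection rule just described can be illustrated with a small sketch, not part of the patent text. The particular bit encoding of the vector length field used below (0 selecting 128 bits, 1 selecting 256 bits, 2 selecting 512 bits) and the helper names are assumptions for illustration only.

```python
# Illustrative sketch: zmm/ymm/xmm aliasing and vector length selection.
def visible_bits(vector_length_field):
    """Map an assumed 2-bit vector length encoding onto an operating width:
    0 -> 128 bits (xmm), 1 -> 256 bits (ymm), 2 -> 512 bits (zmm)."""
    return 128 << vector_length_field

def read_register(zmm_value, vector_length_field):
    """Return only the low-order bits selected by the vector length; the
    ymm/xmm registers are simply the low 256/128 bits of a zmm register."""
    width = visible_bits(vector_length_field)
    return zmm_value & ((1 << width) - 1)

# Example: an instruction template without the vector length field always
# operates on the maximum (512-bit) length; one with the field may use less.
zmm0 = (1 << 500) | 0xABCD
print(hex(read_register(zmm0, 2)))  # full 512-bit (zmm0) view
print(hex(read_register(zmm0, 0)))  # 128-bit (xmm0) view -> 0xabcd
```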
Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 200 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions either remain the same as they were prior to the instruction or are zeroed, depending on the embodiment.

Write Mask Registers 315—in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternative embodiment, the write mask registers 315 are 16 bits in size. As previously described, in one embodiment of the present invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General Purpose Registers 325—in the embodiment illustrated, there are sixteen 64-bit general purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating-point stack register file (x87 stack) 345, on which is aliased the MMX packed integer flat register file 350—in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating-point data using the x87 instruction set extension, while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments of the present invention may use wider or narrower registers. Additionally, alternative embodiments of the present invention may use more, fewer, or different register files and registers.

D. Exemplary Core Architecture, Processor, and Computer Architecture

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general purpose computing; 2) a high performance general purpose out-of-order core intended for general purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general purpose computing and/or one or more general purpose out-of-order cores intended for general purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing.
Such different processors result in different computer system architectures, which may include: 1) a co-processor on a separate die from the CPU; 2) a co-processor on a separate die in the same package as the CPU; 3 ) co-processors on the same die as the CPU (in which case such co-processors are sometimes referred to as special-purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special-purpose logic core); and 4) a system-on-a-chip that may include the described CPU (sometimes referred to as an application core or application processor), the co-processor described above, and additional functionality on the same die. An exemplary core architecture is described next, followed by a description of an exemplary processor and computer architecture.4A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/run pipeline in accordance with an embodiment of the present invention. 4B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/run architecture core to be included in a processor in accordance with an embodiment of the present invention. The solid-line boxes in Figures 4A-B show in-order pipelines and in-order cores, while optional additions of dashed-line boxes show register renaming, out-of-order issue/run pipelines and cores. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.In Figure 4A, processor pipeline 400 includes fetch stage 402, length decode stage 404, decode stage 406, allocate stage 408, rename stage 410, schedule (also known as dispatch or issue) stage 412, register read/memory read stage 414 , run stage 416 , write back/memory write stage 418 , exception handling stage 422 , and commit stage 424 .FIG. 4B shows processor core 490 including front end unit 430 coupled to run engine unit 450 , and both coupled to memory unit 470 . The cores 490 may be reduced instruction set computing (RISC) cores, complex instruction set computing (CISC) cores, very long instruction word (VLIW) cores, or a hybrid or alternative core type. As yet another option, the cores 490 may be special-purpose cores such as, for example, network or communication cores, compression engines, co-processor cores, general purpose computing graphics processing unit (GPGPU) cores, graphics cores, and the like.Front end unit 430 includes a branch prediction unit 432 coupled to an instruction cache unit 434, which is coupled to an instruction translation lookaside buffer (TLB) 436, which is coupled to an instruction fetch unit At 438 , the instruction fetch unit 438 is coupled to the decode unit 440 . Decode unit 440 (or decoder) may decode instructions and generate to output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals that are decoded from, or otherwise reflected in , or is derived from the original instruction. Decoding unit 440 may be implemented using a variety of different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), and the like. In one embodiment, core 490 includes a microcode ROM or another medium (eg, in decode unit 440 or otherwise within front end unit 430) that stores microcode for certain macroinstructions. 
Decode unit 440 is coupled to rename/distributor unit 452 in runtime unit 450 .Run engine unit 450 includes a rename/distributor unit 452 coupled to a retirement unit 454 and a set of one or more scheduler units 456 . Scheduler unit 456 represents any number of different schedulers, including reservation stations, central command windows, and the like. Scheduler unit 456 is coupled to physical register file unit 458 . Each of the physical register file units 458 represents one or more physical register files, different physical register files of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector Integer, vector floating point, state (eg, instruction pointer which is the address of the next instruction to be executed), etc. In one embodiment, the physical register file unit 458 includes a vector register unit, a writemask register unit, and a scalar register unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. Physical register file unit 458 is overlaid by retirement unit 454 to illustrate the various ways in which register renaming and out-of-order operation can be implemented (eg, using reorder buffers and retirement register files; using future heaps, history buffers, and retire register files; use register maps and pools of registers; etc.). Retirement unit 454 and physical register file unit 458 are coupled to run cluster 460 . The run cluster 460 includes a set of one or more run units 462 and a set of one or more memory access units 464 . Execution unit 462 may perform various operations (eg, shift, add, subtract, multiply) and perform operations on various types of data (eg, scalar floating point, packed integer, packed floating point, vector integer, vector floating point) execute on. While some embodiments may include multiple operational units dedicated to a particular function or set of functions, other embodiments may include multiple operational units or only one operational unit that all perform all functions. Scheduler unit 456, physical register file unit 458, and run cluster 460 are shown as possibly complex, as some embodiments create separate pipelines for certain types of data/operations (eg, scalar integer pipelines, scalar floating Point/Packed Integer/Packed Float/Vector Integer/Vector Float Pipelines, and/or Memory Access Pipelines, each with their own scheduler unit, physical register file unit, and/or run cluster—and in separate In the case of a memory access pipeline, where only the run-only cluster of this pipeline has a memory access unit 464, some embodiments are implemented). It should also be understood that where individual pipelines are used, one or more of these pipelines may issue/run out-of-order and the rest in-order.The set of memory access units 464 is coupled to a memory unit 470 that includes a data TLB unit 472 coupled to a data cache unit 474 that is coupled to a level 2 (L2) cache unit 476 . In one exemplary embodiment, memory access unit 464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to data TLB unit 472 in memory unit 470 . Instruction cache unit 434 is further coupled to level 2 (L2) cache unit 476 in memory unit 470 . 
L2 cache unit 476 is coupled to one or more other levels of cache and ultimately to main memory.By way of example, an exemplary register renaming, out-of-order issue/run core architecture may implement the following pipeline 400: 1) instruction fetch 438 performs fetch and length decode stages 402 and 404; 2) decode unit 440 performs decode stage 406; 3 ) rename/allocator unit 452 performs allocation phase 408 and rename phase 410; 4) scheduler unit 456 performs scheduling phase 412; 5) physical register file unit 458 and memory unit 470 perform register read/memory read phase 414; run Cluster 460 performs run phase 416; 6) memory unit 470 and physical register file unit 458 perform write back/memory write phase 418; 7) various units may be involved in exception handling phase 422; and 8) retirement unit 454 and physical The register file unit 458 performs the commit phase 424 .The core 490 may support one or more instruction sets (eg, x86 instruction set (with some extensions that have been added with newer versions); MIPS instruction set from MIPS Technologies of Sunnyvale, CA; ARM instruction set from ARM Holdings of Sunnyvale, CA set (with optional additional extensions such as NEON), including the instructions described in this article. In one embodiment, core 490 includes logic to support packed data instruction set extensions (eg, AVX1, AVX2), thus allowing operations used by many multimedia applications to be performed using packed data.It should be understood that cores may support multithreading (running two or more parallel collections of operations or threads), and may do so in a variety of ways, including time segmented multithreading, simultaneous Threads (in the case where a single physical core provides a logical core for each of the threads, that physical core is doing simultaneous multithreading), or a combination thereof (eg, time-segmented fetch and decode such as in Intel® Hyper-Threading Technology and subsequent simultaneous multithreading).Although register renaming is described in the context of out-of-order operation, it should be understood that register renaming can be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 434/474 and a shared L2 cache unit 476, alternative embodiments may have a single internal for both instruction and data A cache, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal caches. In some embodiments, the system may include a combination of internal caches and external caches external to the core and/or processor. Alternatively, all caches may be external to the core and/or processor.5A-B show block diagrams of a more specific exemplary in-order core architecture where a core would be one of several logic blocks in a chip (including other cores of the same type and/or different types). The logic blocks communicate with some fixed function logic, memory I/O interfaces, and another necessary I/O logic, depending on the application, through a high bandwidth interconnect network (eg, a ring network).5A is a block diagram of a single processor core along with its connections to on-die interconnect network 502 and along with its local subset of level 2 (L2) cache 504, according to an embodiment of the present invention. In one embodiment, the instruction decoder 500 supports the x86 instruction set with packed data instruction set extensions. 
The L1 cache 506 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design) the scalar unit 508 and the vector unit 510 use separate register sets (respectively, scalar registers 512 and vector registers 514) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 506, alternative embodiments of the present invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 504 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 504. Data read by a processor core is stored in its L2 cache subset 504 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 504 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bidirectional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012 bits wide per direction.

FIG. 5B is an expanded view of part of the processor core in FIG. 5A according to an embodiment of the present invention. FIG. 5B includes an L1 data cache 506A, part of the L1 cache 506, as well as more detail regarding the vector unit 510 and the vector registers 514. Specifically, the vector unit 510 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 528) that executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports shuffling of the register inputs with shuffling unit 520, value conversion with value conversion units 522A-B, and replication with copy unit 524 on the memory input. Write mask registers 526 allow predicating the resulting vector writes.

FIG. 6 is a block diagram of a processor 600 that may have more than one core, may have an integrated memory controller, and may have integrated graphics, according to an embodiment of the invention. The solid-lined boxes in FIG. 6 illustrate a processor 600 with a single core 602A, a system agent 610, and a set of one or more bus controller units 616, while the optional addition of the dashed-lined boxes illustrates an alternative processor 600 with multiple cores 602A-N, a set of one or more integrated memory controller units 614 in the system agent unit 610, and special purpose logic 608.

Thus, different implementations of the processor 600 may include: 1) a CPU with the special purpose logic 608 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 602A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 602A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 602A-N being a large number of general purpose in-order cores.
Thus, processor 600 may be a general-purpose processor, a co-processor, or a special-purpose processor such as, for example, a network or communications processor, a compression engine, a graphics processor, a GPGPU (General Purpose Graphics Processing Unit), a high-throughput many integrated core ( MIC) coprocessors (including 30 or more cores), embedded processors, etc. A processor may be implemented on one or more chips. Processor 600 may be implemented on and/or part of one or more substrates using any of a number of processing technologies, such as, for example, BiCMOS, CMOS, or NMOS.The memory hierarchy includes one or more levels of in-core cache memory, a set or one or more of shared cache memory units 606 , and external memory (not shown) coupled to the set of integrated memory controller units 614 . The set of shared cache units 606 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, last level Cache memory (LLC), and/or combinations thereof. Although in one embodiment the ring-based interconnect unit 612 interconnects the integrated graphics logic 608, the set of shared cache units 606, and the system agent unit 610/integrated memory controller unit 614, alternative embodiments Any number of known techniques for interconnecting such cells may be used. In one embodiment, coherency between one or more cache units 606 and cores 602-A-N is maintained.In some embodiments, one or more of the cores 602A-N are multi-threaded capable. System agent 610 includes those components that coordinate and operate cores 602A-N. The system agent unit 610 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include the logic and components required to regulate the power states of the integrated graphics logic 608 and cores 602A-N. The display unit is used to drive one or more externally connected displays.The cores 602A-N may be homogeneous or heterogeneous with respect to architectural instruction sets; that is, two or more cores of the cores 602A-N may have the ability to run the same instruction set, while other cores may have the ability to run different instructions capability of a set or only a subset of that instruction set.7-10 are block diagrams of exemplary computer architectures. For laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video Other system designs and configurations known in the art of gaming devices, set-top boxes, microcontrollers, cellular telephones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a wide variety of systems or electronic devices capable of incorporating processors and/or other operating logic as disclosed herein are generally suitable.Referring now to FIG. 7, shown is a block diagram of a system 700 in accordance with one embodiment of the present invention. System 700 may include one or more processors 710 , 715 coupled to controller hub 720 . In one embodiment, controller hub 720 includes graphics memory controller hub (GMCH) 790 and input/output hub (IOH) 750 (which may be on separate chips); GMCH 790 includes memory 740 and coprocessor 745 are Memory and graphics controllers coupled to; IOH 750 couples input/output (I/O) devices 760 to GMCH 790. 
Alternatively, one or both of the memory and graphics controllers may be integrated within the processor (as described herein), with the memory 740 and the coprocessor 745 coupled directly to the processor 710 and with the controller hub 720 in a single chip with the IOH 750.

The optional nature of the additional processor 715 is denoted in FIG. 7 with broken lines. Each processor 710, 715 may include one or more of the processing cores described herein, and may be some version of the processor 600.

The memory 740 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 720 communicates with the processors 710, 715 via a multi-drop bus such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 795.

In one embodiment, the coprocessor 745 is a special purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 720 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 710, 715 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 710 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 710 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 745. Accordingly, the processor 710 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 745. The coprocessor 745 accepts and executes the received coprocessor instructions.

Referring now to FIG. 8, shown is a block diagram of a first more specific exemplary system 800 in accordance with an embodiment of the present invention. As shown in FIG. 8, the multiprocessor system 800 is a point-to-point interconnect system and includes a first processor 870 and a second processor 880 coupled via a point-to-point interconnect 850. Each of processors 870 and 880 may be some version of the processor 600. In one embodiment of the invention, processors 870 and 880 are respectively processors 710 and 715, while coprocessor 838 is coprocessor 745. In another embodiment, processors 870 and 880 are respectively processor 710 and coprocessor 745.

Processors 870 and 880 are shown including integrated memory controller (IMC) units 872 and 882, respectively. Processor 870 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 876 and 878; similarly, the second processor 880 includes P-P interfaces 886 and 888. Processors 870, 880 may exchange information via a point-to-point (P-P) interface 850 using P-P interface circuits 878, 888. As shown in FIG. 8, IMCs 872 and 882 couple the processors to respective memories, namely a memory 832 and a memory 834, which may be portions of main memory locally attached to the respective processors.

Processors 870, 880 may each exchange information with a chipset 890 via individual P-P interfaces 852, 854 using point-to-point interface circuits 876, 894, 886, 898.
Chipset 890 may optionally exchange information with coprocessor 838 via high performance interface 839 . In one embodiment, coprocessor 838 is a special purpose processor such as, for example, a high throughput MIC processor, network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, etc.A shared cache (not shown) may be included in either processor or outside of both processors, in turn connected to the processors via the PP interconnect so that if the processors are placed in a low power mode, then Local cache information for either or both processors may be stored in a shared cache.Chipset 890 may be coupled to first bus 816 via interface 896 . In one embodiment, the first bus 816 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the invention is not so limited .As shown in FIG. 8 , the various I/O devices 814 may be coupled to the first bus 816 along with a bus bridge 818 that couples the first bus 816 to the second bus 820 . In one embodiment, one such as a co-processor, a high-throughput MIC processor, a GPGPU, an accelerator (such as, for example, a graphics accelerator or a digital signal processing (DSP) unit), a field programmable gate array, or any other processor One or more additional processors 815 are coupled to the first bus 816 . In one embodiment, the second bus 820 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 820, including, for example, a keyboard and/or mouse 822, a communication device 827, and a storage unit 828, such as a hard drive or other mass storage device, which may include instructions/code and data 830 (in the one example). Further, audio I/O 824 may be coupled to second bus 820 . Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 8, the system may implement a multi-drop bus or another such architecture.Referring now to FIG. 9, shown is a block diagram of a second more specific exemplary system 900 in accordance with an embodiment of the present invention. Like elements in FIGS. 8 and 9 have been labeled with like reference numerals, and certain aspects of FIG. 8 have been omitted from FIG. 9 in order to avoid obscuring other aspects of FIG. 9 .9 shows that processors 870, 880 may include integrated memory and I/O control logic ("CL") 872 and 882, respectively. Thus, the CLs 872, 882 include integrated memory controller units and include I/O control logic. 9 shows that not only memory 832, 834 is coupled to CL 872, 882, but I/O device 914 is also coupled to control logic 872, 882. Legacy I/O devices 915 are coupled to chipset 890 .Referring now to FIG. 10, shown is a block diagram of an SoC 1000 in accordance with an embodiment of the present invention. Similar elements in Figure 6 are marked with similar reference numerals. Again, the dotted box is an optional feature on more advanced SoCs. 
In FIG. 10, an interconnect unit 1002 is coupled to: an application processor 1010, which includes a set of one or more cores 602A-N and a shared cache unit 606; a system agent unit 610; a bus controller unit 616; an integrated memory controller unit 614; a set of one or more coprocessors 1020, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1030; a direct memory access (DMA) unit 1032; and a display unit 1040 for coupling to one or more external displays. In one embodiment, the coprocessor 1020 includes a special purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the present invention may be implemented as program code or computer programs executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 830 illustrated in FIG. 8, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with the processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor and which, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), and phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.
Accordingly, embodiments of the present invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines the structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

In some cases, an instruction converter may be used to convert instructions from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG. 11 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to an embodiment of the present invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 11 shows that a program in a high-level language 1102 may be compiled using an x86 compiler 1104 to generate x86 binary code 1106 that may be natively executed by a processor 1116 with at least one x86 instruction set core. The processor 1116 with at least one x86 instruction set core represents any processor capable of performing substantially the same functions as an Intel processor with at least one x86 instruction set core, by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1104 represents a compiler that is operable to generate x86 binary code 1106 (e.g., object code) that can be executed, with or without additional linkage processing, on the processor 1116 with at least one x86 instruction set core.
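As a toy illustration of the instruction converter concept described above (and before turning to the alternative instruction set path of FIG. 11 below), here is a sketch, not part of the patent text, of a table-driven software converter that maps each source-ISA instruction to a sequence of target-ISA instructions. The instruction names and mappings are invented for illustration; a real converter (e.g., a dynamic binary translator) is far more involved.

```python
# Toy sketch: table-driven conversion from a source instruction set to a
# target instruction set, with an emulation fallback for unmapped opcodes.
CONVERSION_TABLE = {
    "SRC_ADD": ["TGT_ADD"],
    "SRC_MUL_ADD": ["TGT_MUL", "TGT_ADD"],        # one source op -> two target ops
    "SRC_BROADCAST_LOAD": ["TGT_LOAD", "TGT_SPLAT"],
}

def convert(source_program):
    """Return an equivalent program made up of target-ISA instructions."""
    target_program = []
    for op in source_program:
        try:
            target_program.extend(CONVERSION_TABLE[op])
        except KeyError:
            # Unsupported instruction: fall back to an emulation helper call.
            target_program.append(f"TGT_CALL emulate_{op.lower()}")
    return target_program

print(convert(["SRC_ADD", "SRC_MUL_ADD", "SRC_CPUID"]))
```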
Similarly, FIG. 11 shows that a program in the high-level language 1102 may be compiled using an alternative instruction set compiler 1108 to generate alternative instruction set binary code 1110 that may be natively executed by a processor 1114 without at least one x86 instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1112 is used to convert the x86 binary code 1106 into code that may be natively executed by the processor 1114 without an x86 instruction set core. This converted code is unlikely to be the same as the alternative instruction set binary code 1110, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1112 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1106.

Apparatus and Method for On-Chip Reliability Controller

As mentioned, reliability qualification of processors has historically been applied to the entire process technology, whereby field failures in the distribution of parts are limited to a certain tolerable level over the life of the product (e.g., 500 DPM). Next-generation scaling presents reliability challenges that require a departure from traditional qualification methods. To address this problem, embodiments of the present invention introduce techniques for measuring reliability health and compensating accordingly on a unit-level basis, sometimes referred to herein as "on-chip reliability."

FIG. 12 illustrates an exemplary processor architecture 1200 on which embodiments of the invention may be implemented, including a core logic region 1201 and a non-core logic region 1210. The core logic region 1201 contains multiple cores 1201a-n, which may be multithreaded cores capable of executing multiple instruction streams simultaneously. Each core 1201a-n may contain well-known instruction pipeline components for performing out-of-order or in-order execution of instruction streams, including an instruction fetch unit, a decode unit, an execution unit, a writeback/retirement unit, general purpose, vector, and mask registers, a branch prediction unit, a translation lookaside buffer (TLB), and various cache levels including a level 1 (L1) cache and a level 2 (L2) cache. It should be noted, however, that the underlying principles of the present invention are not limited to any particular processor architecture.

In the illustrated embodiment, interconnects 1205, such as point-to-point interconnects, communicatively couple the cores 1201a-n to various components within the non-core logic 1210, including caches 1220 (e.g., L3 caches) shared by the cores, an integrated memory controller 1230 that provides access to system memory 1200, and one or more input/output (I/O) interfaces 1235 (e.g., PCI Express or similar interfaces).

In one embodiment of the invention, a reliability controller 1250 uses a sensor array 1260 distributed throughout the processor 1200 to collect reliability data, aggregates and analyzes the collected data, and responsively performs one or more control functions 1251 related to chip reliability.
As discussed below, these control functions include (but are not limited to) adjusting voltage and frequency, performing self-repair, self-healing, and self-burn-in, and/or implementing self-binning/sorting/reliability determination. More specifically, in one embodiment, the reliability controller 1250 measures field degradation, periodically calculates updated conditional failure probabilities (e.g., using Bayesian probability calculations) and performance metrics, and re-classifies parts in the field based on the calculated life profiles. This capability provided by the reliability controller 1250 can be specifically tuned for ultra-high performance applications (e.g., data centers) or ultra-high reliability applications (e.g., life support systems), as dictated by customer requirements.

FIG. 13 illustrates additional details for one embodiment of the reliability controller 1250, which includes a compute engine 1330 with an aggregator 1332 for aggregating reliability data collected from the reliability sensor array 1260, and a self-test/diagnostic module 1320 for implementing a self-check process. As mentioned, in one embodiment, the reliability sensor array 1260 is distributed to collect data from various regions of the processor 1200. By way of example and not limitation, this may include data related to voltage, frequency, bias temperature instability (BTI), electromigration (EM) damage, gate oxide (GOX) readings, and/or hot carrier injection (HCI) readings.

In one embodiment, the self-test/diagnostic module 1320 contains a test engine and/or test scripts to run diagnostic tests on various portions of the processor 1200, generating a specified set of conditions (e.g., specific clock frequencies, voltage levels, etc.) and collecting result data from the reliability sensor array 1260.

The aggregator 1332 combines all reliability data collected via the self-test/diagnostic module 1320 and/or the reliability sensor array 1260 and formats the data for analysis by a reliability estimator 1334. In one embodiment, the reliability estimator 1334 performs a Bayesian probability analysis to determine reliability measures for the processor and/or its various components. The results of the Bayesian analysis are used as input to a controller 1340, which responsively performs reliability-related control functions. As mentioned, the controller 1340 may perform its control functions based on the specific application for which the processor is used. For example, the controller 1340 may be tuned for applications requiring high performance (e.g., data centers), applications requiring high reliability (e.g., life support systems in hospitals), or applications requiring different combinations of performance and reliability. In this manner, the reliability controller 1250 can configure the part to a desired performance/reliability point based on the customer's needs.

In one embodiment, using the data provided by the reliability estimator 1334, the controller may control the internal/external voltage and/or frequency (e.g., Vcc) of the processor. Additionally, the controller 1340 may perform operations related to self-aging control, self-healing control, and self-repair control, as described in more detail below.

In one embodiment, a self-aging cycle may be initiated in the field to force early-life failures to surface before mission-critical applications are launched, in order to ensure fail-safe operation.
These cycles can also be used within the feedback loop of the reliability estimator 1334 to determine expected versus actual degradation occurring during field self-aging cycles. These burn-in cycles can be realized by integrating resistive heaters at the die level (e.g., routing interconnects and/or active transistors) and/or by controlling package-level cooling solutions.

Electromigration (EM) damage primarily affects direct current (DC) lines. In one embodiment, it is repaired by the controller 1340 during a self-healing cycle by driving the line in alternating current (AC) mode with polarity opposite to typical operation. Bias temperature instability (BTI) is known to exhibit relaxation effects under AC operation. By operating with the opposite gate-to-channel polarity, devices that typically operate at DC can be repaired in a special self-healing cycle. This repair effect is accelerated at elevated temperatures in the same way as the damage. As such, in one embodiment, the controller and/or integrated structures used for self-aging can be reused in this self-healing mode.

In one embodiment, the controller 1340 performs a self-repair operation by adaptively switching in a new spare part in place of an aged or failed part. For example, power and control signals may be dynamically re-routed based on pinpoint detection of failures by the self-test/diagnostic module 1320 (e.g., using field-deployed adaptive test patterns).

Using the reliability controller 1250, portions of binning and sorting testing can be integrated on-chip, which provides the ability to re-sort parts in the field. As mentioned, based on customer requirements, the reclassification algorithm can be tuned for various desired levels of performance and/or reliability (e.g., ultra-high performance or ultra-high reliability). For example, parts that do not experience severe degradation can be up-binned in situ to increase performance. In contrast, parts that experience unexpectedly high levels of degradation can be down-binned to increase reliability.

In any usage profile, the reliability measurement system may indicate, via BIOS 1350, a quantification of reliability per unit. This may include, for example, indications of threshold voltage shifts, channel current degradation, graded electromigration structure array failures, gate oxide leakage/failures from area-variation arrays, and overall static and dynamic leakage, to name a few.

A method according to one embodiment of the invention is illustrated in FIG. 14. The method may be implemented within the context of the processor and system architectures described above, but is not limited to any particular architecture.

At 1401, field self-test/diagnostics are performed on the chip. In one embodiment, the self-test/diagnostics include scripts that execute adaptive test patterns and collect results. At 1402, chip data is collected using an array of reliability sensors distributed in the regions of the chip in which testing is performed. The data can be aggregated and used at 1403 to perform a reliability estimation. As mentioned, in one embodiment, a Bayesian probability analysis may be performed to determine reliability measures for the chip and/or its various components. At 1404, various control functions and/or chip updates can be implemented in response to the reliability estimate.
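The flow of FIG. 14 can be illustrated with a rough sketch, which is not from the patent: run the self-test, collect sensor readings, update a failure probability with a simple Bayesian step, and then pick a control action. The prior, the likelihoods, the anomaly threshold, and the decision policy below are made-up values for illustration only.

```python
# Illustrative sketch of steps 1401-1404: sensor data -> Bayesian update of
# a failure probability -> a (placeholder) control decision.
def bayesian_update(prior_fail, readings, p_anomaly_if_fail=0.9,
                    p_anomaly_if_healthy=0.1, threshold=1.0):
    """Treat each reading above 'threshold' as an anomalous observation and
    update P(fail) with Bayes' rule, one observation at a time."""
    p = prior_fail
    for r in readings:
        anomalous = r > threshold
        like_fail = p_anomaly_if_fail if anomalous else 1 - p_anomaly_if_fail
        like_ok = p_anomaly_if_healthy if anomalous else 1 - p_anomaly_if_healthy
        p = (like_fail * p) / (like_fail * p + like_ok * (1 - p))
    return p

def choose_action(p_fail, profile="high_reliability"):
    # Corresponds to 1404: respond to the estimate; the policy is a placeholder.
    if profile == "high_reliability":
        return "down_bin_and_lower_frequency" if p_fail > 0.05 else "no_change"
    return "up_bin" if p_fail < 0.01 else "no_change"

readings = [0.4, 1.3, 0.8, 1.6]           # 1401/1402: aggregated sensor data
p_fail = bayesian_update(0.02, readings)  # 1403: reliability estimation
print(p_fail, choose_action(p_fail))      # 1404: control decision
```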
As discussed above, the control functions and updates applied at 1404 may include adjusting voltage/frequency, self-aging operations, self-healing operations, and self-repair operations.

In the foregoing specification, embodiments of the present invention have been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Embodiments of the present invention may include the various steps that have been described above. The steps may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor to perform the steps. Alternatively, the steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.

As described herein, instructions may refer to specific configurations of hardware, such as application specific integrated circuits (ASICs) configured to perform certain operations or having a predetermined functionality, or to software instructions stored in memory embodied in a non-transitory computer-readable medium. Thus, the techniques shown in the figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., an end station, a network element, etc.). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer machine-readable media, such as non-transitory computer machine-readable storage media (e.g., magnetic disks, optical disks, random access memory, read-only memory, flash memory devices, phase-change memory) and transitory computer machine-readable communication media (e.g., electrical, optical, acoustical, or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc.). In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more buses and bridges (also termed bus controllers). The storage device and the signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of an embodiment of the present invention may be implemented using different combinations of software, firmware, and/or hardware. Throughout this detailed description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without some of these specific details.
In certain instances, well-known structures and functions were not described in elaborate detail in order to avoid obscuring the subject matter of the present invention. Accordingly, the scope and spirit of the invention should be judged in terms of the claims that follow. |
An integrated circuit substrate of an aspect includes a plurality of exposed electrical contacts. The integrated circuit substrate also includes an inaccessible set of Physically Unclonable Function (PUF) cells to generate an inaccessible set of PUF bits that are not accessible through the exposed electrical contacts. The integrated circuit substrate also includes an accessible set of PUF cells to generate an accessible set of PUF bits that are accessible through the exposed electrical contacts. Other apparatus, methods, and systems are also disclosed. |
CLAIMS What is claimed is: 1. An integrated circuit substrate comprising: a plurality of exposed electrical contacts; an inaccessible set of Physically Unclonable Function (PUF) cells to generate an inaccessible set of PUF bits that are not accessible through the exposed electrical contacts; and an accessible set of PUF cells to generate an accessible set of PUF bits that are accessible through the exposed electrical contacts. 2. The integrated circuit substrate of claim 1, further comprising logic to allow the accessible set of PUF bits to be accessible through the exposed electrical contacts, and wherein there is no logic to allow the inaccessible set of PUF bits to be accessible through the exposed electrical contacts. 3. The integrated circuit substrate of claim 1, wherein the inaccessible set of PUF bits are to be provided to security logic for use in security and the accessible set of PUF bits are not to be provided to the security logic for use in security. 4. The integrated circuit substrate of claim 1, further comprising: security logic; logic to provide the inaccessible set of PUF bits to the security logic, and wherein there is no logic to provide the accessible set of PUF bits to the security logic. 5. The integrated circuit substrate of claim 1, wherein the accessible set of PUF cells are within a region more enabled for debug than a region having the inaccessible set of PUF cells. 6. The integrated circuit substrate of claim 1, wherein the integrated circuit substrate comprises a wafer, wherein the inaccessible set of PUF cells is within a die, and wherein the accessible set of PUF cells is within a cut-away region of the wafer that is to be removed during dicing. 7. The integrated circuit substrate of claim 1, wherein the integrated circuit substrate comprises a die, wherein the inaccessible and accessible sets of PUF cells are proximate one another on the die. 8. The integrated circuit substrate of claim 1, wherein the integrated circuit substrate comprises a die, and wherein the inaccessible and accessible sets of PUF cells are not proximate one another on the die. 9. The integrated circuit substrate of claim 1, wherein the exposed electrical contacts comprise at least one of pads, bumps, solder, and pins. 10. A method comprising: electrically coupling an integrated circuit test equipment with a plurality of exposed electrical contacts of an integrated circuit substrate; accessing, by the integrated circuit test equipment, a second set of PUF bits from a second set of PUF cells, through the exposed electrical contacts, wherein the integrated circuit substrate includes a first set of PUF cells to generate a first set of PUF bits, which are not accessible through the exposed electrical contacts. 11. The method of claim 10, further comprising: analyzing the second set of PUF bits to determine a characteristic of the second set of PUF cells; and inferring, based on the determined characteristic, a corresponding characteristic of the first set of PUF cells. 12. The method of claim 11, wherein the characteristic comprises at least one of a PUF bit error level and a PUF bit entropy level. 13. The method of claim 11, wherein analyzing comprises analyzing at least a hundred sets of PUF bits from at least a hundred different integrated circuit substrates. 14. The method of claim 10, wherein accessing comprises accessing the second set of PUF bits from the second set of PUF cells that are in a region more enabled for debug than a region having the first set of PUF cells. 15. 
The method of claim 10, further comprising removing the first set of PUF cells by dicing. 16. A system comprising: an interconnect; a processor coupled with the interconnect, the processor comprising: a plurality of exposed electrical contacts; an inaccessible set of PUF cells to generate an inaccessible set of PUF bits that are not accessible through the exposed electrical contacts; and an accessible set of PUF cells to generate an accessible set of PUF bits that are accessible through the exposed electrical contacts; a dynamic random access memory (DRAM) coupled with the interconnect; a network interface coupled with the interconnect, the network interface to transmit encrypted data, which has been encrypted with a secure key that is based on the inaccessible set of PUF bits, to a network. 17. The system of claim 16, wherein the accessible set of PUF cells are within a region more enabled for debug than a region having the inaccessible set of PUF cells. 18. The system of claim 16, wherein the accessible set of PUF bits are not to be provided to security logic. 19. An integrated circuit substrate comprising: a plurality of exposed electrical contacts; a first set of bit generation logic to generate a first inaccessible set of bits that are not accessible through the exposed electrical contacts; and a second set of bit generation logic to generate a second accessible set of bits that are accessible through the exposed electrical contacts, wherein it is impractical to replicate the first and second sets of bit generation logic, wherein the first and second sets of bits are to be substantially static, and wherein the first and second sets of bits are to have values that depend at least in part on process variations experienced during manufacture of the integrated circuit. 20. The integrated circuit substrate of claim 19, wherein the second set of bit generation logic is within a region that is more enabled for debug than a region having the first set of bit generation logic. 21. The integrated circuit substrate of claim 19, wherein the first inaccessible set of PUF bits are to be provided to security logic and the second accessible set of PUF bits are not to be provided to the security logic. |
INTEGRATED CIRCUITS HAVING ACCESSIBLE AND INACCESSIBLE PHYSICALLY UNCLONABLE FUNCTIONS BACKGROUND Field Embodiments relate to integrated circuits. In particular, embodiments relate to integrated circuits having Physically Unclonable Functions (PUFs). Background Information Computers, cell phones, multimedia content players, and various other types of electronic devices are commonly used to handle sensitive or secure information (e.g., financial information, confidential documents, personal emails, digital rights protected content, etc.). Integrated circuits used in such electronic devices are commonly provisioned with one or more secrets, such as one or more secure keys, that are used to protect the sensitive or secure information. The secure keys may be used to protect the sensitive or secure information in various ways, such as through encryption/decryption, authentication, digital signatures, and other known cryptographic approaches. One way to provision the integrated circuits with the secure keys is to program or store the secure keys in fuses and/or memory (e.g., various types of read-only memory (ROM)) in a digital form. However, one drawback with such an approach is that the secure keys stored in the memory and/or fuses in digital form tend to be somewhat vulnerable to discovery. Although the secure keys generally cannot be read out directly, invasive attacks and/or reverse engineering may be used to obtain the secure keys. Allowing the secure keys to be obtained may breach, or at least contribute to breaching, the security of the sensitive information. Additionally, such provisioning of secret cryptographic keys often means that they are exposed to some part of a manufacturer's key generation, device design, and manufacturing infrastructures. Physically Unclonable Functions (PUFs) provide an alternative to storing secure keys in memory and/or fuses in digital form. One advantage to the use of PUFs for security is that the PUFs tend to be significantly less vulnerable to discovery than the secure keys stored in memory and/or fuses in digital form. The PUFs may be used to generate PUF bits during runtime, which may be used for security. The PUF bits are typically characterized by a PUF bit error level and a PUF bit entropy level. BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings: Figure 1 is a block diagram of an embodiment of an integrated circuit substrate having exposed and/or external electrical contacts, a first inaccessible set of PUF cells and a second accessible set of PUF cells. Figure 2 is a block diagram of an embodiment of a die having a first inaccessible set of PUF cells and a second accessible set of PUF cells in close proximity. Figure 3 is a block diagram of an embodiment of a die having a first inaccessible set of PUF cells and a second accessible set of PUF cells that are physically separated from one another on the die. Figure 4 is a block diagram of an embodiment of a wafer having first inaccessible sets of PUF cells each within a corresponding die and at least one second accessible set of PUF cells in a cut-away region that is to be removed during dicing. 
Figure 5 is a block diagram of an embodiment of an integrated circuit substrate showing that in some embodiments an accessible set of PUF cells and/or an accessible set of PUF bits may not be used for security in the integrated circuit substrate. Figure 6 is a block diagram of an embodiment of an integrated circuit substrate showing that in other embodiments an accessible set of PUF cells and/or an accessible set of PUF bits may be used for security in the integrated circuit substrate. Figure 7 is a block diagram of an example embodiment of an accessible set of PUF cells to generate an accessible set of PUF bits that are accessible through exposed and/or external electrical contacts. Figure 8 is a block flow diagram of an embodiment of a method of testing integrated circuits. Figure 9 is a block diagram of a PUF bit storage and analysis system coupled with a plurality of integrated circuit test equipment. Figure 10A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 10B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. Figure 11A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and with its local subset of the Level 2 (L2) cache, according to embodiments of the invention. Figure 11B is an expanded view of part of the processor core in Figure 11A according to embodiments of the invention. Figure 12 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. Figure 13 is a block diagram of a system 1300 in accordance with one embodiment of the present invention. Figure 14 is a block diagram of a first more specific exemplary system 1400 in accordance with an embodiment of the present invention. Figure 15 is a block diagram of a second more specific exemplary system 1500 in accordance with an embodiment of the present invention. Figure 16 is a block diagram of a SoC in accordance with an embodiment of the present invention. Figure 17 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. DETAILED DESCRIPTION In the following description, numerous specific details, such as specific types of PUF cells, locations of PUF cells, logic partitioning/integration details, types and interrelationships of components, and the like, are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description. Figure 1 is a block diagram of an embodiment of an integrated circuit substrate 100 having exposed and/or external electrical contacts 101, a first inaccessible set of Physically Unclonable Function (PUF) cells 102 to generate a first set of inaccessible PUF bits 103, and a second accessible set of PUF cells 104 to generate a second set of accessible PUF bits 105. 
The first inaccessible set of PUF cells and/or the first set of inaccessible PUF bits are inaccessible through the electrical contacts. The second accessible set of PUF cells and/or the second set of accessible PUF bits are accessible through the electrical contacts. The first and second sets of PUF cells 102, 104 may be any of a wide variety of different types of PUF cells known in the arts. PUFs are sometimes also known in the arts as physical one-way functions (POWFs). It tends to be difficult to place a precise circumference around all of the different types of devices, circuitry, and physical systems that are PUFs. This discussion is not intended, and should not be used, to exclude devices, circuitry, and physical systems that are regarded to be PUFs. Most PUFs generally represent functions (e.g., they produce an output from an input), which are physical (e.g., integrated circuitry, structures or micro-structures, devices, materials, embodied in a physical medium, etc.), which are substantially hard to predict (for the particular intended use), and which are substantially unclonable. Substantially unclonable means that it would be extremely difficult (if not infeasible), even for the manufacturer of a given PUF, to manufacture a copy of the given PUF that would provide the same output for the same input, even using the same manufacturing process. This is largely due to the general nature of the PUFs and the uncontrollable process variations encountered during the manufacturing process. The first inaccessible set of PUF cells 102 may generate the first inaccessible set of PUF bits 103 as a response or output to a challenge or input. Likewise, the second accessible set of PUF cells 104 may generate the second accessible set of PUF bits 105 as a response or output to a challenge or input. Some types of PUF cells may not need a challenge or input but rather may provide or deliver readable values. By way of example, the challenge may include one or more electrical signals applied to the PUF cells. The PUF bits are not merely non-volatile bits programmed or stored in fuses or memory in a digital form, but rather may be generated during runtime, and may in some cases only exist when the integrated circuit is powered on. In this way, the first inaccessible set of PUF bits may be significantly less susceptible to discovery than non-volatile bits stored in fuses or memory. The particular binary values of the first and second sets of PUF bits generated by the first and second sets of PUF cells generally depend upon the physical characteristics of the corresponding PUF cells, which in turn depend on the particular manufacturing process used to manufacture the corresponding PUF cells, as well as the uncontrollable process variations encountered during the manufacturing process which are impractical to reproduce. For example, in the case of silicon PUF cells, the particular binary values of the PUF bits generated may depend upon parameters such as line widths of integrated circuits, dopant concentrations in semiconductor materials, or the like, which depend in an unpredictable way upon manufacturing process variations. In some embodiments, the first and second sets of PUF cells may represent silicon intrinsic PUF cells or more generally semiconductor intrinsic PUF cells. An illustrative software model of such challenge-response behavior is sketched below.
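For illustration only, the following toy model (not any circuit of this disclosure) treats each PUF cell as having a fixed, manufacturing-time mismatch plus a small amount of read noise, so that repeated evaluations of the same challenge yield mostly, but not perfectly, stable bits, while different devices yield different bits. All names and parameters are invented for the sketch.

```python
# Toy PUF model for illustration: a fixed per-cell "mismatch" set at manufacture
# determines each bit, and small read noise occasionally flips weak bits.
# The model and its parameters are assumptions, not the disclosed circuits.

import random

class ToyPUF:
    def __init__(self, num_cells: int, seed: int, noise_sigma: float = 0.05):
        rng = random.Random(seed)              # stands in for process variation
        self.mismatch = [rng.gauss(0.0, 1.0) for _ in range(num_cells)]
        self.noise_sigma = noise_sigma

    def evaluate(self, challenge: int) -> list[int]:
        # A real PUF's response depends on the challenge; here we simply fold the
        # challenge into the sign decision so different challenges give different bits.
        noise = random.Random()
        bits = []
        for i, m in enumerate(self.mismatch):
            polarity = 1 if ((challenge >> (i % 32)) & 1) == 0 else -1
            value = polarity * m + noise.gauss(0.0, self.noise_sigma)
            bits.append(1 if value > 0 else 0)
        return bits

if __name__ == "__main__":
    puf_a = ToyPUF(num_cells=128, seed=1)   # "device A"
    puf_b = ToyPUF(num_cells=128, seed=2)   # "device B" (different process variation)
    r1 = puf_a.evaluate(challenge=0xC0FFEE)
    r2 = puf_a.evaluate(challenge=0xC0FFEE)
    print("stable bits on re-read:", sum(a == b for a, b in zip(r1, r2)), "/ 128")
    print("bits differing between devices:",
          sum(a != b for a, b in zip(r1, puf_b.evaluate(0xC0FFEE))), "/ 128")
```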
In some embodiments, the first and second sets of PUF cells may have been manufactured using a complementary metal oxide semiconductor (CMOS) manufacturing process that is also used to manufacture transistors of the integrated circuit. Examples of suitable types of PUFs include, but are not limited to, delay PUFs (e.g., intrinsic PUFs based on digital delay measurements), delay loop PUFs, memory PUFs (e.g., intrinsic PUFs based on settling state of digital memory elements), SRAM PUFs, cross-coupled PUFs, arbiter PUFs (e.g., PUFs based on MUXes and an arbiter), ring-oscillator PUFs, bistable ring PUFs, butterfly PUFs, latch PUFs, flip-flop PUFs, D-type flip-flop PUFs, coating PUFs, and additional semiconductor or CMOS PUFs known in the arts. As will be discussed further below, in some embodiments, the second accessible set of PUF bits may be analyzed in order to infer, estimate, or predict properties of the first inaccessible set of PUF bits. In one aspect, this may be done by a manufacturer as an indirect way to monitor the properties of the inaccessible PUF bits (e.g., to verify that the inaccessible PUF bits are sufficient for their intended use). That is, the accessible PUF bits may be used to indirectly debug or validate the inaccessible PUF bits. In such embodiments, it is generally beneficial if the first and second sets of PUF cells are similar (e.g., of a same type, design, and size). This generally helps to ensure that the properties of the second accessible set of PUF cells determined through analysis are relevant to those of the first inaccessible set of PUF cells. The number of PUF cells in the first inaccessible set may be any conventional or appropriate number without limitation to the scope of the invention. Commonly, in the case of a relatively highly secured general-purpose processor, there may be anywhere from hundreds to many thousands of PUF cells in the first inaccessible set. In various embodiments, there may be anywhere from tens to hundreds to several thousand PUF cells in the second accessible set. When the second accessible PUF bits are analyzed to estimate properties, often a number ranging from about 128 to 1024, or from about 256 to 512, will be sufficient, although the scope of the invention is not limited to these particular numbers. Generally, the greater the number of the accessible PUF bits available for analysis, the better the analysis results (at least to a point). Conversely, the fewer the accessible PUF bits, the smaller the cost, area/footprint, and power consumption. Accordingly, there is a tradeoff between analysis accuracy and implementation cost such that the appropriate number generally depends upon the objectives of the particular implementation. Referring again to Figure 1, the first set of inaccessible PUF bits 103 may be used for security within the integrated circuit substrate. The first set of inaccessible PUF cells may provide the first set of inaccessible PUF bits 103 to security logic 107. In some embodiments, the security logic may include key generation and/or derivation logic to generate and/or derive one or more secrets or secure keys from the first set of inaccessible PUF bits 103 using cryptographic key generation and/or derivation algorithms. By way of example, the security logic may include, but is not limited to, a cryptographic module or circuit, a crypto-processor, a crypto-coprocessor, a trusted platform module, a security engine, a security controller, or the like. A simplified sketch of deriving a key from PUF bits in this manner is given below.
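As a purely illustrative sketch of the kind of key generation/derivation the security logic 107 might perform, the fragment below hashes the inaccessible PUF bits (after notional error correction, which is not shown) into a fixed-length key. The use of SHA-256 and the context string are assumptions for illustration; the disclosure does not specify a particular algorithm.

```python
# Illustration only: deriving a symmetric key from (error-corrected) PUF bits.
# SHA-256 and the context string are assumptions, not the disclosed design.

import hashlib

def derive_key_from_puf_bits(puf_bits: list[int], context: bytes = b"device-key-v1") -> bytes:
    """Pack the PUF bits into bytes and derive a 256-bit key with a hash-based KDF."""
    packed = bytearray()
    for i in range(0, len(puf_bits), 8):
        byte = 0
        for bit in puf_bits[i:i + 8]:
            byte = (byte << 1) | (bit & 1)
        packed.append(byte)
    # In a real design, fuzzy extraction / error correction would precede this step
    # so that occasional PUF bit errors do not change the derived key.
    return hashlib.sha256(context + bytes(packed)).digest()

if __name__ == "__main__":
    example_bits = [0, 1, 1, 0, 1, 0, 0, 1] * 16   # 128 hypothetical PUF bits
    print(derive_key_from_puf_bits(example_bits).hex())
```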
Referring again to Figure 1, the integrated circuit substrate also includes the set of exposed and/or external electrical contacts 101. In the illustrated embodiment, a first electrical contact 101-1 through an Nth electrical contact 101-N are shown, where N may be any appropriate number, often ranging from tens to hundreds. Integrated circuit substrates generally include exposed and/or external electrical contacts to interact with an external electrical signaling medium (e.g., a circuit board, a component of an electrical device, manufacturing test equipment, etc.). Power is often delivered to the integrated circuit substrate through certain of the electrical contacts, and electrical signals are exchanged between the external electrical signaling medium and the integrated circuit substrate through others of the electrical contacts. The exposed and/or external electrical contacts are electrically coupled with the integrated circuitry of the integrated circuit substrate through interconnects of the integrated circuit substrate. The exposed and/or external electrical contacts are accessible from an outside of the integrated circuit substrate (e.g., reside on the outside surface of the integrated circuit substrate). By way of example, in various embodiments, the exposed or external electrical contacts may represent pads, bumps, solder material, pins, or other types of electrical contacts that are accessible from outside the integrated circuit or package and that are electrically coupled with the interconnects and/or integrated circuitry of the integrated circuit. When incorporated in a package, the exposed or external electrical contacts may be accessed through corresponding electrical contacts of the package. Also shown in the illustration is external equipment 110 (i.e., external to the integrated circuit substrate). In one aspect, the external equipment may represent integrated circuit test and/or debug equipment (e.g., a tester and prober) and/or other integrated circuit manufacturing equipment. During the manufacture of integrated circuits, it is common to test integrated circuits and integrated circuit packages at various stages of manufacture. This may be done for various purposes, such as, for example, to test or debug the integrated circuit substrate, to test for proper operation, to detect defects, to sort properly functioning integrated circuits from improperly functioning integrated circuits that are to be discarded or reworked, to program data based on testing into the integrated circuit, etc. The external equipment may be operable to couple with the exposed or external electrical contacts of the integrated circuit. For example, the external equipment may have a set of electrical probes that may be used to contact the electrical contacts of the integrated circuit substrate. The external equipment may exchange electrical signals with the integrated circuit substrate through the probes and electrical contacts according to a test pattern. For example, the integrated circuit test equipment may transmit electrical signals to the integrated circuit, and receive corresponding electrical signals in response, which may be analyzed as part of testing. With integrated circuit security in mind, there is a security risk posed by malicious or attacker-controlled external equipment. For example, integrated circuit test and/or debug equipment at a manufacturing facility may be corrupted by employees secretly installing malicious software to obtain secrets, keys, or PUF bits. 
Moreover, attackers may create their own external equipment to attempt to access secrets, keys, or PUF bits through the external contacts. In some embodiments, the inaccessible PUF bits may also be unavailable inside the device to all but highly trusted and/or highly privileged logic. In such embodiments, the inaccessible PUF bits may not be accessible to untrusted or unprivileged software (e.g., user software or malicious software), such as, for example, inaccessible to all but the highest level of privileged software. Referring again to Figure 1, the integrated circuit substrate includes the first inaccessible set of PUF cells 102 to generate the first set of inaccessible PUF bits 103, and the second accessible set of PUF cells 104 to generate the second set of accessible PUF bits 105. The first inaccessible set of PUF cells and/or the first set of inaccessible PUF bits are inaccessible through the electrical contacts. The second accessible set of PUF cells and/or the second set of accessible PUF bits are accessible through the electrical contacts. In some embodiments, the integrated circuit substrate may omit or lack circuitry or other logic 109 to allow the first inaccessible set of PUF bits and/or the first inaccessible set of PUF cells to be accessible through the exposed and/or external electrical contacts. For example, there may be no lines, wires, or other interconnects and/or logic to allow the inaccessible set of PUF bits to be accessed through the contacts. In some embodiments, the integrated circuit design may not allow scan or debug of the inaccessible PUF bits, or at least may more highly restrict such scan or debug, which helps to render them inaccessible. In some embodiments, there may similarly be no lines, wires, or other interconnects and/or logic to allow an untrusted entity within the integrated circuit (e.g., application or other untrusted software) to access the inaccessible PUF bits. In some cases, the inaccessible PUF bits may potentially be observable only as a result of a change in output of a sufficiently strong cryptographic function to which the PUF bits are input, but the cryptographic function may be sufficiently strong that the PUF bits for all practical purposes cannot be determined. This may prevent the external equipment from being able to read, obtain, or otherwise access the first inaccessible set of PUF bits and/or the first inaccessible set of PUF cells. Advantageously, preventing the external equipment from being able to access the first inaccessible set of PUF bits and/or the first inaccessible set of PUF cells may help to enhance the security of the integrated circuit substrate. If instead the external equipment were able to access the first set of PUF bits, there would be an increased likelihood that the first set of inaccessible PUF bits, which as described above are used for security within the integrated circuit substrate, would be discovered by corrupted manufacturing test/debug equipment or attacker equipment. This could potentially compromise, or at least contribute to compromising, the security of the integrated circuit substrate. However, by preventing the external equipment from accessing the first inaccessible set of PUF cells and/or the first inaccessible set of PUF bits, such risks may be significantly reduced. An additional advantage is that the manufacturer may not be able to access and/or know the binary values of the first inaccessible set of PUF bits. 
This may help to reduce the responsibilities (e.g., the responsibilities to keep them secret) and/or liabilities (e.g., in the event they were discovered and made public) of the manufacturer. In contrast, in some embodiments, the integrated circuit substrate may include circuitry or other logic 108 to allow the second accessible set of PUF bits and/or the second accessible set of PUF cells to be accessible through the exposed and/or external electrical contacts. This may allow the external equipment to be able to read, obtain, or otherwise access the second accessible set of PUF bits and/or the second accessible set of PUF cells. For example, the second accessible set of PUF bits may be transmitted or provided from the integrated circuit to the external equipment over the exposed or external electrical contacts as electrical signals. In some embodiments, as will be explained further below, the second accessible set of PUF bits may be analyzed in conjunction with determining characteristics or attributes, such as, for example, a PUF bit error level and/or a PUF bit entropy level. The PUF bit entropy level may be determined through comparison of PUF bits from other different integrated circuits or integrated circuit substrates. In some embodiments, the analysis may be performed across multiple or potentially numerous different integrated circuits (e.g., at least one hundred, at least one thousand, tens of thousands, or even more). In some embodiments, the characteristics or attributes (e.g., the PUF bit error level and/or the PUF bit entropy level) of the first inaccessible set of PUF bits and/or the first inaccessible set of PUF cells may be inferred or estimated from characteristics or attributes of the second accessible set of PUF bits determined through the analysis. Since the first and second sets of PUF cells were manufactured on the same integrated circuit substrate, at the same time, and encountered substantially the same manufacturing process variations, they should have the same, or at least sufficiently similar, PUF cell and/or PUF bit characteristics or attributes. Advantageously, this may allow the characteristics or attributes of the first inaccessible set of PUF bits and/or the first inaccessible set of PUF cells to be estimated or inferred without needing to make them accessible or ever even needing to know these PUF bits. The estimates of the characteristics or attributes of the first inaccessible set of PUF bits and/or the first inaccessible set of PUF cells are useful for various purposes, such as, for example, to allow estimation, evaluation, or verification of the level of security, to assist with design or redesign of security related logic, for quality control purposes, to adjust the amount of control over process variation in the manufacturing process, etc. The integrated circuit substrate 100 may represent a wafer, a singulated die, or other integrated circuit substrate. In other embodiments the integrated circuit substrate may include a processor. In some embodiments, the processor may be a general-purpose processor. In other embodiments, the processor may be a special-purpose processor. Examples of suitable special- purpose processors include, but are not limited to, network processors, communications processors, cryptographic processors, graphics processors, co-processors, embedded processors, digital signal processors (DSPs), and controllers (e.g., microcontrollers), to name just a few examples. 
The processor may be any of various complex instruction set computing (CISC), reduced instruction set computing (RISC), very long instruction word (VLIW) processors, hybrids thereof, or other types of processors. In other embodiments, the integrated circuit substrate may include a chipset component. For example, the integrated circuit substrate may include an input/output controller, a memory controller, a graphics chip, or the like. Alternatively, the integrated circuit substrate may include other types of integrated circuits known in the arts (e.g., an Application Specific Integrated Circuit (ASIC), a System-on-Chip (SoC), etc.). In still further embodiments, the integrated circuit substrate may be replaced by a secure key card, smart card, or other type of apparatus or device for which security with PUFs is desired. Different embodiments of physically locating the first inaccessible and the second accessible sets of PUF cells are contemplated. Figures 2-4 illustrate several example embodiments, although the invention is not limited to these embodiments. These embodiments may be used in the integrated circuit 100 of Figure 1. Alternatively, these embodiments may be used in an entirely different integrated circuit. Moreover, the integrated circuit 100 of Figure 1 may use entirely different embodiments than those shown in these figures. Figure 2 is a block diagram of an embodiment of a die 200 having a first inaccessible set of PUF cells 202 and a second accessible set of PUF cells 204 in close proximity. In one embodiment, the inaccessible and accessible sets of PUF cells may be intermingled with one another within the same region of the die (e.g., at least some of the inaccessible set of PUF cells may be disposed between at least some of the accessible set of PUF cells). In another embodiment, the inaccessible and accessible sets of PUF cells may be located in adjacent or adjoining regions of the die. For example, the inaccessible set of PUF cells may be confined to a first region of the die and the second accessible set of PUF cells may be confined to a second region of the die, and the first and second regions may overlap, may be adjacent to one another, or may adjoin one another. In yet another embodiment, the inaccessible and accessible sets of PUF cells may be located in proximate regions of the die. As used herein, proximate one another or located in proximate regions of the die means that both are within a region that is no more than a third the size of the die. Providing the inaccessible and accessible sets of PUF cells in the same, adjacent, or at least proximate regions of the die generally tends to make the PUF bit characteristics of the inaccessible and accessible sets of PUF cells more similar to one another. Figure 3 is a block diagram of an embodiment of a die 300 having a first inaccessible set of PUF cells 302 and a second accessible set of PUF cells 304 that are physically separated from and/or not proximate to one another. As used herein, the first inaccessible set of PUF cells 302 and the second accessible set of PUF cells 304 are not proximate when they are not both contained within a region that is no more than a third the size of the die. Figure 4 is a block diagram of an embodiment of a wafer 400 having first inaccessible sets of PUF cells 402-1, 402-2 each within a corresponding die and at least one second accessible set of PUF cells 404-1 in a cut-away region 418 outside of the dice that is to be cut away or removed during dicing of the wafer. 
In the illustration, a first die 416-1 and a second die 416-2 are shown. The first die has within its die confines a first instance of a first inaccessible set of PUF cells 402-1. Likewise, the second die has within its die confines a second instance of a first inaccessible set of PUF cells 402-2. In the illustrated embodiment, between the first and second die, which are adjacent to one another, is a second accessible set of PUF cells. A second accessible set of PUF bits from the second accessible set of PUF cells may be accessed during wafer testing, or otherwise prior to singulation or dicing of the wafer. Thereafter, the second accessible set of PUF cells may be cut away during singulation or dicing of the wafer and discarded. The second accessible set of PUF cells is in the cut-away region, for example between dicing lines, in a die street region, in a kerf region, etc. The second accessible set of PUF cells does not appear in a final packaged die to be used in an electronic device. In one embodiment, the wafer may include a single second accessible set of PUF cells to be used for the whole wafer. In another embodiment, the wafer may include two or more second accessible sets of PUF cells. As shown, the wafer may optionally include an additional second accessible set of PUF cells 404-2. In another embodiment, each die may have a corresponding second accessible set of PUF cells. Alternatively, there may be fewer second accessible sets of PUF cells than die, or a single second accessible set of PUF cells. In one aspect, in the case of a single second accessible set of PUF cells, it may be located in a central region of the wafer so as to be more relevant for die across the wafer. Figure 5 is a block diagram of an embodiment of an integrated circuit substrate 500 showing that in some embodiments an accessible set of PUF cells 504 and/or an accessible set of PUF bits 505 may not be used for security in the integrated circuit substrate. For example, there may be no lines, wires, or other interconnects and/or other logic to allow the accessible PUF bits to be provided to these security related components (e.g., there is no logical capability to access the accessible set of PUF bits and route them to these security related components over internal interconnects, interfaces, buses, or the like). As previously mentioned, the accessible set of PUF cells and/or the accessible set of PUF bits are accessible through exposed or external electrical contacts of the integrated circuit. However, the integrated circuit substrate omits or lacks circuitry or other logic 520 to allow the accessible set of PUF bits to be used for security. The accessible set of PUF bits are not provided to security logic 107. Since the accessible PUF bits can be readily obtained by external equipment, this embodiment takes a more conservative approach in which the accessible PUF bits are not used for security (e.g., are not used as secure keys or to generate or derive secure keys). Figure 6 is a block diagram of an embodiment of an integrated circuit substrate 600 showing that in other embodiments an accessible set of PUF cells 604 and/or an accessible set of PUF bits 605 may be used for certain security in the integrated circuit substrate. As before, the accessible set of PUF cells and/or the accessible set of PUF bits are accessible through exposed or external electrical contacts of the integrated circuit. 
In this embodiment, the integrated circuit substrate includes circuitry or other logic 622 to allow the accessible set of PUF bits to be used for security. The circuitry or other logic allows the accessible set of PUF bits to be provided to security logic 107. Although the accessible PUF bits can be readily obtained by external equipment, they may be useful for some security related features. For example, they may be used as less important secrets or secure keys, or to generate less important secure keys. As another example, the accessible set of PUF bits may be combined with other information that is more difficult to know and then used as secrets or secure keys and/or to generate secrets or secure keys. By way of example, the accessible PUF bits may be combined with a portion of a set of inaccessible PUF bits and/or bits stored in ROM memory and/or fuses. It is to be appreciated that the components, features, and specific optional details described above for Figure 1 may also optionally apply to any one or more of Figures 2-6. Moreover, the components, features, and specific optional details described above for any one or more of Figures 2-6 may also optionally apply to Figure 1. Figure 7 is a block diagram of an example embodiment of an accessible set of PUF cells 704 to generate an accessible set of PUF bits that are accessible through exposed and/or external electrical contacts. In one embodiment, the accessible set of PUF cells 704 may be used as the second accessible set of PUF cells 104 of the integrated circuit substrate of Figure 1. Alternatively, the accessible set of PUF cells 704 may be used in an entirely different integrated circuit or substrate. Moreover, the integrated circuit substrate of Figure 1 may include an entirely different set of accessible PUF cells. The accessible set of PUF cells 704 includes a first PUF cell 704-1, a second PUF cell 704-2, a third PUF cell 704-3, a fourth PUF cell 704-4, through an Nth PUF cell 704-N, where N may be any desired number. In various embodiments, there may be anywhere from tens to hundreds to several thousand PUF cells in the accessible set, although the scope of the invention is not limited to any particular number. Often from about 64 to 1024, or from about 128 to 512, will be sufficient, although the scope of the invention is not limited to these particular numbers. It is not required to use a number that is a power of two. In some embodiments, each of the PUF cells may be embedded within an integrated circuit substrate, for example including integrated circuitry or structures or devices formed of silicon and/or by a CMOS process. A challenge 724 (e.g., one or more electrical signals or other stimuli) is provided to the accessible set of PUF cells. The accessible set of PUF cells provides a set of PUF bits 705 as a response. In the illustration, the PUF cells provide the set of PUF bits "0110...1" in this particular example. It is noted that some types of PUF cells may not require a challenge or response but rather may provide or deliver readable values. The PUF bits are provided to circuitry or other logic 708 that is operable to make the PUF bits accessible through the exposed and/or external electrical contacts. The response and/or the PUF bits generally tend to be substantially static. For example, when reading PUF bits from the PUF cells multiple times, typically a vast majority of the PUF bits tend to have the same binary value from one read to the next. 
Some PUF bits, referred to as the "weaker" PUF bits, may tend to flip or change binary value from one read to the next more frequently than others. For example, the aforementioned challenge may result in the PUF bits "0110...1," whereas a subsequent challenge may result in the PUF bits "0111...1." Notice that the fourth PUF bit has flipped from binary-0 to binary-1 from one read to the next. This represents a PUF bit error. When used for security, such PUF bit errors are generally undesirable, since they may cause very different secure keys to be generated and/or derived. Accordingly, it is often desirable to be able to estimate or quantify the PUF bit error level (e.g., in order to ensure that the error correction technique is sufficient). It is generally desirable also for the PUF bits and/or PUF cells of different integrated circuits or substrates to have sufficient entropy. Entropy measures the quality or level of randomness of generated PUF bits. When there is a high level of entropy, the likelihood of identical PUF bits from different sets of PUF cells is very low. For example, the PUF bits from a first set of PUF cells may be "01101," the PUF bits from a second set of PUF cells may be "10100," and the PUF bits from a third set of PUF cells may be "10111," as just one example. Notice that the sets of PUF bits are different. When there is a high level of entropy, there should be approximately equal likelihood of each bit having either a binary-0 or a binary-1, such that given enough sets of PUF bits a string of PUF bits should span all of the possible binary values. When used for security, it is generally desirable for PUF bits to be at least reasonably entropic or random, since this helps to enhance the security. By way of example, it is possible that a manufacturing process may be so tightly controlled that there is insufficient variation to provide a desired level of entropy, such that a given factor may dominate the bias of the PUF bits and they all trend toward a common or systematic value (e.g., all trend toward "10111"). This may tend to make the PUF bits more vulnerable to attack. Accordingly, it is generally desirable to be able to estimate or quantify the PUF bit entropy level (e.g., in order to monitor the level of entropy or verify that there is a sufficient level of entropy, to increase the manufacturing process variation, to guide redesign of logic, etc.). It is contemplated that PUF bit entropy may tend to be inversely related to the maturity of a manufacturing process. For example, in the early days of a manufacturing process, when the process is relatively immature, the amount of process variation may tend to be relatively high, such that the level of PUF bit entropy may tend to be relatively higher. Over time, as the manufacturing process matures, the amount of process variation may tend to decrease (e.g., through continued efforts to tighten up the process), which in turn may tend to cause the level of PUF bit entropy to decrease. PUF bits produced by such a mature manufacturing process may not have as much entropy as the PUF bits produced by the immature manufacturing processes for which the PUF bits were initially evaluated and/or designed. It is possible that at some point the manufacturing process may become so tightly controlled that there is insufficient process variation to provide the desired amount of PUF bit entropy. A simple way to quantify such error and bias from accessible PUF bits is sketched below.
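The following is an illustrative sketch, using invented data rather than the disclosed test flow, of how a per-device bit error rate and per-bit entropy could be estimated from accessible PUF bits collected over repeated reads and over many devices: a bit that often disagrees with its own majority value across reads is "weak," and a bit position biased strongly toward 0 or 1 across devices indicates low entropy.

```python
# Illustration only: estimating PUF bit error rate (stability across reads of one
# device) and per-bit entropy (across many devices) from accessible PUF bits.
# The input data layout is an assumption for this sketch.

from collections import Counter
import math

def bit_error_rate(reads: list[list[int]]) -> float:
    """Fraction of bits that disagree with the per-bit majority across repeated reads."""
    n_reads, n_bits = len(reads), len(reads[0])
    errors = 0
    for j in range(n_bits):
        column = [r[j] for r in reads]
        majority = Counter(column).most_common(1)[0][0]
        errors += sum(1 for b in column if b != majority)
    return errors / (n_reads * n_bits)

def per_bit_entropy(devices: list[list[int]]) -> list[float]:
    """Shannon entropy (in bits) of each bit position across different devices."""
    n_bits = len(devices[0])
    entropies = []
    for j in range(n_bits):
        p1 = sum(d[j] for d in devices) / len(devices)
        if p1 in (0.0, 1.0):
            entropies.append(0.0)
        else:
            entropies.append(-p1 * math.log2(p1) - (1 - p1) * math.log2(1 - p1))
    return entropies

if __name__ == "__main__":
    # Hypothetical data: 3 reads of one device, and one read from each of 4 devices.
    reads = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 0, 0]]
    devices = [[0, 1, 1, 0], [1, 0, 1, 1], [1, 0, 0, 1], [0, 1, 1, 0]]
    print("estimated bit error rate:", bit_error_rate(reads))
    print("per-bit entropy:", [round(h, 2) for h in per_bit_entropy(devices)])
```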
Advantageously, the approaches disclosed herein allow a manufacturer to evaluate the level of PUF bit entropy of manufactured integrated circuits, including over time as the manufacturing process matures, which may help to avoid a situation where the PUF bits have undesirably low entropy. This may help to ensure the security of the integrated circuits is maintained. Figure 8 is a block flow diagram of an embodiment of a method 800 of testing integrated circuits. In one aspect, the method may be performed on the integrated circuit substrate of Figure 1. Alternatively, the method may be performed on an entirely different integrated circuit substrate. Moreover, the integrated circuit substrate of Figure 1 may be tested by entirely different methods. The features and/or details described herein for an apparatus (e.g., Figures 1-7) may also optionally pertain to the methods described herein (e.g., the method of Figure 8), which are performed by and/or with an apparatus. The method includes electrically coupling integrated circuit test equipment (e.g., a prober and tester) with a plurality of exposed electrical contacts of an integrated circuit substrate, at block 831. For example, electrical test probes of the integrated circuit test equipment (e.g., in a probe card) may be contacted with pads, bumps, or other electrical contacts of the integrated circuit. The integrated circuit test equipment accesses a second set of PUF bits from a second set of PUF cells of the integrated circuit substrate through the exposed electrical contacts, at block 832. For example, the second set of PUF bits may be read out through the exposed electrical contacts and the electrical test probes. The integrated circuit substrate also includes a first set of PUF cells to generate a first set of PUF bits that are not accessible through the exposed electrical contacts. In some embodiments, the second set of PUF bits may be accessed from a debug enabled region but the first set of PUF cells may be within a debug disabled region or at least a more restricted debug region. The second set of PUF bits are optionally analyzed, along with other sets of PUF bits, to determine a characteristic of the second set of PUF cells, at block 833. In some embodiments, PUF bits from at least a hundred, at least a thousand, or more different PUF cells or integrated circuits may be analyzed. In some embodiments, the characteristic may be one or more of a PUF bit error level and a PUF bit entropy level. A corresponding characteristic of the first set of PUF cells is optionally estimated or inferred, based on the determined characteristic for the second set of PUF cells, at block 834. Advantageously, the characteristic of the first set of PUF cells may be estimated or inferred without ever needing to know the first set of PUF bits. This helps to enhance the security provided by the first set of PUF cells and/or the first set of PUF bits, as well as helping to reduce the responsibilities and/or liabilities of the manufacturer. Figure 9 is a block diagram of a PUF bit storage and analysis system 940 coupled with, or otherwise in communication with, a plurality of integrated circuit test equipment 910-1 through 910-N. By way of example, the integrated circuit test equipment may represent potentially geographically distributed probers and testers or other equipment. Each piece of integrated circuit test equipment tests multiple integrated circuit substrates 900. 
The integrated circuit substrates have device identifiers (IDs), such as unit level traceability (ULT) values. The integrated circuit test equipment provides device IDs and corresponding accessible PUF bits read from those devices to the PUF bit storage and analysis system. The PUF bit storage and analysis system includes a database 942. The database includes a PUF bit raw data database 944. By way of example, the PUF bit raw data database may store PUF bits read on one or potentially multiple reads each from a number of integrated circuit substrates having different device IDs. In some cases, PUF bits for hundreds, thousands, or more different integrated circuit substrates may be stored. If desired, PUF bits read under different conditions (temperature, voltage, etc.) may be stored. In some embodiments, the database may only store PUF bits read from accessible PUFs but not from inaccessible PUFs. As previously described above, the manufacturer does not need to know the values of the inaccessible PUF bits, and there are advantages to the manufacturer not knowing the values of the inaccessible PUF bits (e.g., to reduce the risk of a security breach and/or to limit the liabilities of the manufacturer). The PUF bit storage and analysis system includes an analysis module 948 coupled, or otherwise in communication, with the database. In the illustrated embodiment, the analysis module includes a PUF bit error analysis module 950 and a PUF bit entropy analysis module 952. The PUF bit error analysis module is operable to analyze some or all of the PUF bits from the database to determine a PUF bit error level. The PUF bit error level is determinable from multiple reads of the same integrated circuit substrate (e.g., the same device ID), but generally will be determined based on reads of multiple, different integrated circuit substrates/device IDs. The PUF bit entropy analysis module is operable to analyze some or all of the PUF bits from the database to determine a PUF bit entropy level. The PUF bit entropy level is determinable from PUF bits from different devices. In one aspect, intra-distance and/or inter-distance metrics may be calculated. The intra-distance represents the distance between two responses when the same challenge is applied twice to the same PUF. The intra-distance metric may measure the Hamming distance between multiple reads of PUF bits on a single integrated circuit. The intra-distance may help to quantify the reliability of the PUF cells and the error rate of the PUF bits. The inter-distance represents the distance between two responses resulting from applying the same challenge to two different instances of a PUF. The inter-distance measures the Hamming distance between two measurements of PUF bits collected from different devices. The inter-distance assesses the uniqueness of a PUF and generally should be reasonably close to half of the PUF length. A sketch of these distance computations is given below. The analysis module stores analysis results or statistics in a statistics database 946. As shown in the illustrated embodiment, analysis results or statistics may be generated for different dates in order to allow trends to be monitored or detected. A few illustrative examples of analysis results or statistics include, but are not limited to, PUF bit average error level for a given time frame, PUF bit maximum error level for a given time frame, PUF bit minimum error level for a given time frame, PUF bit entropy for a given time frame, PUF bit minimum and/or maximum entropy, etc. 
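As an illustrative sketch of the intra-distance and inter-distance metrics described above (the device IDs and the in-memory data layout are invented and merely stand in for the PUF bit raw data database 944), the fragment below computes fractional Hamming distances between repeated reads of one device and between reads from different devices.

```python
# Illustration only: intra-distance (same device, repeated reads) and
# inter-distance (different devices, same challenge) as fractional Hamming distances.
# The dictionary keyed by device ID stands in for the PUF bit raw data database 944.

from itertools import combinations

def hamming_fraction(a: list[int], b: list[int]) -> float:
    """Fractional Hamming distance between two equal-length PUF responses."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def intra_distance(reads_per_device: dict[str, list[list[int]]]) -> float:
    """Average distance between pairs of reads of the same device (error indicator)."""
    distances = []
    for reads in reads_per_device.values():
        distances += [hamming_fraction(a, b) for a, b in combinations(reads, 2)]
    return sum(distances) / len(distances)

def inter_distance(reads_per_device: dict[str, list[list[int]]]) -> float:
    """Average distance between first reads of different devices (uniqueness indicator)."""
    firsts = [reads[0] for reads in reads_per_device.values()]
    distances = [hamming_fraction(a, b) for a, b in combinations(firsts, 2)]
    return sum(distances) / len(distances)

if __name__ == "__main__":
    db = {  # hypothetical device-ID -> repeated accessible PUF reads
        "ULT-0001": [[0, 1, 1, 0, 1, 0, 0, 1], [0, 1, 1, 1, 1, 0, 0, 1]],
        "ULT-0002": [[1, 0, 1, 1, 0, 0, 1, 1], [1, 0, 1, 1, 0, 0, 1, 1]],
        "ULT-0003": [[1, 1, 0, 0, 0, 1, 1, 0], [1, 1, 0, 0, 0, 1, 1, 0]],
    }
    print("intra-distance (should be near 0):", round(intra_distance(db), 3))
    print("inter-distance (ideally near 0.5):", round(inter_distance(db), 3))
```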
A user interface device 954 is also included to interface with a user. The user interface device may include one or more of a keyboard, a screen, a printer, a network connection, a mouse, a command line interface, etc. Exemplary Core Architectures, Processors, and Computer Architectures Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures. Exemplary Core Architectures In-order and out-of-order core block diagram Figure 10A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 10B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 10A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described. In Figure 10A, a processor pipeline 1000 includes a fetch stage 1002, a length decode stage 1004, a decode stage 1006, an allocation stage 1008, a renaming stage 1010, a scheduling (also known as a dispatch or issue) stage 1012, a register read/memory read stage 1014, an execute stage 1016, a write back/memory write stage 1018, an exception handling stage 1022, and a commit stage 1024. Figure 10B shows processor core 1090 including a front end unit 1030 coupled to an execution engine unit 1050, and both are coupled to a memory unit 1070. The core 1090 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. 
As yet another option, the core 1090 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like. The front end unit 1030 includes a branch prediction unit 1032 coupled to an instruction cache unit 1034, which is coupled to an instruction translation lookaside buffer (TLB) 1036, which is coupled to an instruction fetch unit 1038, which is coupled to a decode unit 1040. The decode unit 1040 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1040 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1090 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1040 or otherwise within the front end unit 1030). The decode unit 1040 is coupled to a rename/allocator unit 1052 in the execution engine unit 1050. The execution engine unit 1050 includes the rename/allocator unit 1052 coupled to a retirement unit 1054 and a set of one or more scheduler unit(s) 1056. The scheduler unit(s) 1056 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 1056 is coupled to the physical register file(s) unit(s) 1058. Each of the physical register file(s) units 1058 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1058 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 1058 is overlapped by the retirement unit 1054 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1054 and the physical register file(s) unit(s) 1058 are coupled to the execution cluster(s) 1060. The execution cluster(s) 1060 includes a set of one or more execution units 1062 and a set of one or more memory access units 1064. The execution units 1062 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. 
The scheduler unit(s) 1056, physical register file(s) unit(s) 1058, and execution cluster(s) 1060 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1064). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order. The set of memory access units 1064 is coupled to the memory unit 1070, which includes a data TLB unit 1072 coupled to a data cache unit 1074 coupled to a level 2 (L2) cache unit 1076. In one exemplary embodiment, the memory access units 1064 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1072 in the memory unit 1070. The instruction cache unit 1034 is further coupled to a level 2 (L2) cache unit 1076 in the memory unit 1070. The L2 cache unit 1076 is coupled to one or more other levels of cache and eventually to a main memory. By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1000 as follows: 1) the instruction fetch unit 1038 performs the fetch and length decoding stages 1002 and 1004; 2) the decode unit 1040 performs the decode stage 1006; 3) the rename/allocator unit 1052 performs the allocation stage 1008 and renaming stage 1010; 4) the scheduler unit(s) 1056 performs the schedule stage 1012; 5) the physical register file(s) unit(s) 1058 and the memory unit 1070 perform the register read/memory read stage 1014; the execution cluster 1060 performs the execute stage 1016; 6) the memory unit 1070 and the physical register file(s) unit(s) 1058 perform the write back/memory write stage 1018; 7) various units may be involved in the exception handling stage 1022; and 8) the retirement unit 1054 and the physical register file(s) unit(s) 1058 perform the commit stage 1024. The core 1090 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1090 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data. It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).
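As a compact restatement of the stage-to-unit correspondence enumerated above for pipeline 1000 (a summary only, adding no implementation detail beyond the description), the mapping can be tabulated as follows:

```python
# Pipeline 1000: each stage paired with the unit(s) described as performing it.
PIPELINE_1000 = [
    ("fetch / length decode (1002, 1004)",   "instruction fetch unit 1038"),
    ("decode (1006)",                        "decode unit 1040"),
    ("allocation / renaming (1008, 1010)",   "rename/allocator unit 1052"),
    ("schedule (1012)",                      "scheduler unit(s) 1056"),
    ("register read / memory read (1014)",   "physical register file(s) unit(s) 1058 and memory unit 1070"),
    ("execute (1016)",                       "execution cluster(s) 1060"),
    ("write back / memory write (1018)",     "memory unit 1070 and physical register file(s) unit(s) 1058"),
    ("exception handling (1022)",            "various units"),
    ("commit (1024)",                        "retirement unit 1054 and physical register file(s) unit(s) 1058"),
]

for stage, unit in PIPELINE_1000:
    print(f"{stage:40s} -> {unit}")
```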
While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1034/1074 and a shared L2 cache unit 1076, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor. Specific Exemplary In-Order Core Architecture Figures 11A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application. Figure 11A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1102 and with its local subset of the Level 2 (L2) cache 1104, according to embodiments of the invention. In one embodiment, an instruction decoder 1100 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1106 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 1108 and a vector unit 1110 use separate register sets (respectively, scalar registers 1112 and vector registers 1114) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1106, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back). The local subset of the L2 cache 1104 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1104. Data read by a processor core is stored in its L2 cache subset 1104 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1104 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring datapath is 1012-bits wide per direction. Figure 11B is an expanded view of part of the processor core in Figure 11A according to embodiments of the invention. Figure 11B includes an L1 data cache 1106A part of the L1 cache 1104, as well as more detail regarding the vector unit 1110 and the vector registers 1114. Specifically, the vector unit 1110 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1128), which executes one or more of integer, single-precision float, and double-precision float instructions.
The VPU supports swizzling the register inputs with swizzle unit 1120, numeric conversion with numeric convert units 1122A-B, and replication with replication unit 1124 on the memory input. Write mask registers 1126 allow predicating resulting vector writes. Processor with integrated memory controller and graphics Figure 12 is a block diagram of a processor 1200 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 12 illustrate a processor 1200 with a single core 1202A, a system agent 1210, a set of one or more bus controller units 1216, while the optional addition of the dashed lined boxes illustrates an alternative processor 1200 with multiple cores 1202A-N, a set of one or more integrated memory controller unit(s) 1214 in the system agent unit 1210, and special purpose logic 1208. Thus, different implementations of the processor 1200 may include: 1) a CPU with the special purpose logic 1208 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1202A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1202A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1202A-N being a large number of general purpose in-order cores. Thus, the processor 1200 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1200 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS. The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1206, and external memory (not shown) coupled to the set of integrated memory controller units 1214. The set of shared cache units 1206 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1212 interconnects the integrated graphics logic 1208, the set of shared cache units 1206, and the system agent unit 1210/integrated memory controller unit(s) 1214, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1206 and cores 1202A-N. In some embodiments, one or more of the cores 1202A-N are capable of multi-threading. The system agent 1210 includes those components coordinating and operating cores 1202A-N. The system agent unit 1210 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1202A-N and the integrated graphics logic 1208. The display unit is for driving one or more externally connected displays.
The cores 1202A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1202A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. Exemplary Computer Architectures Figures 13-16 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable. Referring now to Figure 13, shown is a block diagram of a system 1300 in accordance with one embodiment of the present invention. The system 1300 may include one or more processors 1310, 1315, which are coupled to a controller hub 1320. In one embodiment the controller hub 1320 includes a graphics memory controller hub (GMCH) 1390 and an Input/Output Hub (IOH) 1350 (which may be on separate chips); the GMCH 1390 includes memory and graphics controllers to which are coupled memory 1340 and a coprocessor 1345; the IOH 1350 couples input/output (I/O) devices 1360 to the GMCH 1390. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1340 and the coprocessor 1345 are coupled directly to the processor 1310, and the controller hub 1320 is in a single chip with the IOH 1350. The optional nature of additional processors 1315 is denoted in Figure 13 with broken lines. Each processor 1310, 1315 may include one or more of the processing cores described herein and may be some version of the processor 1200. The memory 1340 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1320 communicates with the processor(s) 1310, 1315 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 1395. In one embodiment, the coprocessor 1345 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1320 may include an integrated graphics accelerator. There can be a variety of differences between the physical resources 1310, 1315 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. In one embodiment, the processor 1310 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1310 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1345.
Accordingly, the processor 1310 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1345. Coprocessor(s) 1345 accept and execute the received coprocessor instructions. Referring now to Figure 14, shown is a block diagram of a first more specific exemplary system 1400 in accordance with an embodiment of the present invention. As shown in Figure 14, multiprocessor system 1400 is a point-to-point interconnect system, and includes a first processor 1470 and a second processor 1480 coupled via a point-to-point interconnect 1450. Each of processors 1470 and 1480 may be some version of the processor 1200. In one embodiment of the invention, processors 1470 and 1480 are respectively processors 1310 and 1315, while coprocessor 1438 is coprocessor 1345. In another embodiment, processors 1470 and 1480 are respectively processor 1310 and coprocessor 1345. Processors 1470 and 1480 are shown including integrated memory controller (IMC) units 1472 and 1482, respectively. Processor 1470 also includes as part of its bus controller units point-to-point (P-P) interfaces 1476 and 1478; similarly, second processor 1480 includes P-P interfaces 1486 and 1488. Processors 1470, 1480 may exchange information via a point-to-point (P-P) interface 1450 using P-P interface circuits 1478, 1488. As shown in Figure 14, IMCs 1472 and 1482 couple the processors to respective memories, namely a memory 1432 and a memory 1434, which may be portions of main memory locally attached to the respective processors. Processors 1470, 1480 may each exchange information with a chipset 1490 via individual P-P interfaces 1452, 1454 using point-to-point interface circuits 1476, 1494, 1486, 1498. Chipset 1490 may optionally exchange information with the coprocessor 1438 via a high-performance interface 1439. In one embodiment, the coprocessor 1438 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. Chipset 1490 may be coupled to a first bus 1416 via an interface 1496. In one embodiment, first bus 1416 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited. As shown in Figure 14, various I/O devices 1414 may be coupled to first bus 1416, along with a bus bridge 1418 which couples first bus 1416 to a second bus 1420. In one embodiment, one or more additional processor(s) 1415, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1416. In one embodiment, second bus 1420 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 1420 including, for example, a keyboard and/or mouse 1422, communication devices 1427 and a storage unit 1428 such as a disk drive or other mass storage device which may include instructions/code and data 1430, in one embodiment.
Further, an audio I/O 1424 may be coupled to the second bus 1420. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 14, a system may implement a multi-drop bus or other such architecture. Referring now to Figure 15, shown is a block diagram of a second more specific exemplary system 1500 in accordance with an embodiment of the present invention. Like elements in Figures 14 and 15 bear like reference numerals, and certain aspects of Figure 14 have been omitted from Figure 15 in order to avoid obscuring other aspects of Figure 15. Figure 15 illustrates that the processors 1470, 1480 may include integrated memory and I/O control logic ("CL") 1472 and 1482, respectively. Thus, the CL 1472, 1482 include integrated memory controller units and include I/O control logic. Figure 15 illustrates that not only are the memories 1432, 1434 coupled to the CL 1472, 1482, but also that I/O devices 1514 are also coupled to the control logic 1472, 1482. Legacy I/O devices 1515 are coupled to the chipset 1490. Referring now to Figure 16, shown is a block diagram of a SoC 1600 in accordance with an embodiment of the present invention. Similar elements in Figure 12 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 16, an interconnect unit(s) 1602 is coupled to: an application processor 1610 which includes a set of one or more cores 1202A-N and shared cache unit(s) 1206; a system agent unit 1210; a bus controller unit(s) 1216; an integrated memory controller unit(s) 1214; a set of one or more coprocessors 1620 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1630; a direct memory access (DMA) unit 1632; and a display unit 1640 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1620 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like. Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code, such as code 1430 illustrated in Figure 14, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products. Emulation (including binary translation, code morphing, etc.) In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor. Figure 17 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 17 shows that a program in a high level language 1702 may be compiled using an x86 compiler 1704 to generate x86 binary code 1706 that may be natively executed by a processor with at least one x86 instruction set core 1716.
The processor with at least one x86 instruction set core 1716 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1704 represents a compiler that is operable to generate x86 binary code 1706 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1716. Similarly, Figure 17 shows the program in the high level language 1702 may be compiled using an alternative instruction set compiler 1708 to generate alternative instruction set binary code 1710 that may be natively executed by a processor without at least one x86 instruction set core 1714 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1712 is used to convert the x86 binary code 1706 into code that may be natively executed by the processor without an x86 instruction set core 1714. This converted code is not likely to be the same as the alternative instruction set binary code 1710 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1712 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1706. In the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements or components are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. In the description above, for the purposes of explanation, numerous specific details have been set forth in order to provide a thorough understanding of the embodiments of the invention. It will be apparent however, to one skilled in the art, that one or more other embodiments may be practiced without some of these specific details. The particular embodiments described are not provided to limit the invention but to illustrate it. The scope of the invention is not to be determined by the specific examples provided above but only by the claims below. In other instances, well-known circuits, structures, devices, and operations have been shown in block diagram form or without detail in order to avoid obscuring the understanding of the description. 
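Referring back to the Figure 17 conversion flow described above, the following minimal sketch illustrates the choice between native execution of x86 binary code and conversion for a processor without an x86 instruction set core; the function names and placeholder translation are assumptions for illustration and do not reflect any actual converter interface:

```python
# Sketch of the Figure 17 decision: run x86 binary code natively when an x86 core
# is present; otherwise pass it through an instruction converter. Illustrative only.
def convert_to_alternative(x86_instruction: str) -> str:
    # Real converters use static/dynamic binary translation, morphing, or emulation;
    # here each x86 instruction is simply tagged as converted.
    return f"alt<{x86_instruction}>"

def execute_x86_binary(binary: list[str], has_x86_core: bool) -> list[str]:
    if has_x86_core:
        return binary                                   # executed natively
    return [convert_to_alternative(insn) for insn in binary]

print(execute_x86_binary(["mov eax, 1", "add eax, 2"], has_x86_core=False))
```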
It will also be appreciated, by one skilled in the art, that modifications may be made to the embodiments disclosed herein, such as, for example, to the configurations, functions, and manner of operation of the embodiments. Where considered appropriate, reference numerals or terminal portions of reference numerals have been repeated among the figures to indicate corresponding or analogous elements, which may optionally have similar characteristics. Various operations and methods have been described. Some of the methods have been described in a basic form in the flow diagrams, but operations may optionally be added to and/or removed from the methods. In addition, while the flow diagrams show a particular order of the operations according to example embodiments, it is to be understood that that particular order is exemplary. Alternate embodiments may optionally perform the operations in a different order, combine certain operations, overlap certain operations, etc. One or more embodiments include an article of manufacture (e.g., a computer program product) that includes a machine-accessible and/or machine-readable medium. The medium may include a mechanism that provides (for example, stores or transmits) information in a form that is accessible and/or readable by the machine. The machine-accessible and/or machine-readable medium may provide, or have stored thereon, one or more or a sequence of instructions and/or data structures that if executed by a machine causes or results in the machine performing, and/or causes the machine to perform, one or more or a portion of the operations or methods or the techniques shown in the figures disclosed herein. In one embodiment, the machine-readable medium may include a tangible non-transitory machine-readable storage media. For example, the tangible non-transitory machine-readable storage media may include a floppy diskette, an optical storage medium, an optical disk, a CD-ROM, a magnetic disk, a magneto-optical disk, a read only memory (ROM), a programmable ROM (PROM), an erasable-and-programmable ROM (EPROM), an electrically-erasable-and-programmable ROM (EEPROM), a random access memory (RAM), a static-RAM (SRAM), a dynamic-RAM (DRAM), a Flash memory, a phase-change memory, or a combination thereof. The tangible medium may include one or more solid or tangible physical materials, such as, for example, a semiconductor material, a phase change material, a magnetic material, etc. Examples of suitable machines include, but are not limited to, computer systems, desktops, laptops, notebooks, netbooks, nettops, Mobile Internet devices (MIDs), network devices, routers, switches, cellular phones, media players, and other electronic devices having one or more processors or other instruction execution devices. Such electronic devices typically include one or more processors coupled with one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and/or network connections. The coupling of the processors and other components is typically through one or more busses and bridges (also termed bus controllers). It should also be appreciated that reference throughout this specification to "one embodiment", "an embodiment", or "one or more embodiments", for example, means that a particular feature may be included in the practice of the invention.
Similarly, it should be appreciated that in the description various features are sometimes grouped together in a single embodiment, Figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects may lie in less than all features of a single disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of the invention. The following clauses and/or examples pertain to further embodiments. Specifics in the clauses and/or examples may be used anywhere in one or more embodiments. In one embodiment, a first integrated circuit substrate includes a plurality of exposed electrical contacts. The first integrated circuit substrate also includes an inaccessible set of Physically Unclonable Function (PUF) cells to generate an inaccessible set of PUF bits that are not accessible through the exposed electrical contacts. The first integrated circuit substrate also includes an accessible set of PUF cells to generate an accessible set of PUF bits that are accessible through the exposed electrical contacts. Embodiments include any of the above first integrated circuit substrates further including logic to allow the accessible set of PUF bits to be accessible through the exposed electrical contacts, and where there is no logic to allow the inaccessible set of PUF bits to be accessible through the exposed electrical contacts. Embodiments include any of the above first integrated circuit substrates where the inaccessible set of PUF bits are to be provided to security logic for use in security and the accessible set of PUF bits are not to be provided to the security logic for use in security. Embodiments include any of the above first integrated circuit substrates further including: security logic; logic to provide the inaccessible set of PUF bits to the security logic, and where there is no logic to provide the accessible set of PUF bits to the security logic. Embodiments include any of the above first integrated circuit substrates where the accessible set of PUF cells are within a region more enabled for debug than a region having the inaccessible set of PUF cells. Embodiments include any of the above first integrated circuit substrates where the exposed electrical contacts comprise at least one of pads, bumps, solder, and pins. Embodiments include any of the above first integrated circuit substrates where the integrated circuit substrate includes a wafer, where the inaccessible set of PUF cells is within a die, and where the accessible set of PUF cells is within a cut-away region of the wafer that is to be removed during dicing. Embodiments include the first integrated circuit substrate where the integrated circuit substrate includes a die, where the inaccessible and accessible sets of PUF cells are proximate one another on the die. Embodiments include the first integrated circuit substrate where the integrated circuit substrate includes a die, and where the inaccessible and accessible sets of PUF cells are not proximate one another on the die.
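As a minimal sketch of the first integrated circuit substrate described above, the following Python fragment models a part in which only the accessible set of PUF bits can be read through the exposed electrical contacts, while the inaccessible set is consumed only by on-die security logic; the class, method names, and key-derivation step are hypothetical and added purely for illustration:

```python
# Illustrative model only: accessible PUF bits are exported via exposed contacts,
# inaccessible PUF bits feed only internal security logic. Names are hypothetical.
import hashlib

class PufSubstrate:
    def __init__(self, inaccessible_bits: list[int], accessible_bits: list[int]):
        self._inaccessible = inaccessible_bits   # no logic exports these off-die
        self._accessible = accessible_bits       # readable through exposed contacts

    def read_through_contacts(self) -> list[int]:
        """What external test equipment can observe."""
        return list(self._accessible)

    def derive_secure_key(self) -> bytes:
        """What on-die security logic does with the inaccessible set (stand-in step)."""
        return hashlib.sha256(bytes(self._inaccessible)).digest()

part = PufSubstrate(inaccessible_bits=[1, 0, 1, 1, 0, 0, 1, 0],
                    accessible_bits=[0, 1, 1, 0, 1, 0, 0, 1])
print(part.read_through_contacts())
print(part.derive_secure_key().hex()[:16])
```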
In one embodiment, a first method includes electrically coupling integrated circuit test equipment with a plurality of exposed electrical contacts of an integrated circuit substrate. The first method also includes accessing, by the integrated circuit test equipment, a second set of PUF bits from a second set of PUF cells, through the exposed electrical contacts. The integrated circuit substrate includes a first set of PUF cells to generate a first set of PUF bits, which are not accessible through the exposed electrical contacts. Embodiments include the above first method further including: analyzing the second set of PUF bits to determine a characteristic of the second set of PUF cells; and inferring, based on the determined characteristic, a corresponding characteristic of the first set of PUF cells. Embodiments include the above first method where the characteristic includes at least one of a PUF bit error level and a PUF bit entropy level. Embodiments include any of the above first methods where analyzing includes analyzing at least a hundred sets of PUF bits from at least a hundred different integrated circuit substrates. Embodiments include any of the above first methods where accessing includes accessing the second set of PUF bits from the second set of PUF cells that are in a region more enabled for debug than a region having the first set of PUF cells. Embodiments include any of the above first methods further including removing the first set of PUF cells by dicing. In one embodiment, an apparatus is configured or operable to perform any of the above first methods. In one embodiment, a first system includes an interconnect and a processor coupled with the interconnect. The processor includes a plurality of exposed electrical contacts. The processor also includes an inaccessible set of PUF cells to generate an inaccessible set of PUF bits that are not accessible through the exposed electrical contacts. The processor also includes an accessible set of PUF cells to generate an accessible set of PUF bits that are accessible through the exposed electrical contacts. The system also includes a dynamic random access memory (DRAM) coupled with the interconnect. The system also includes a network interface coupled with the interconnect. The network interface is to transmit encrypted data, which has been encrypted with a secure key that is based on the inaccessible set of PUF bits, to a network. Embodiments include the first system in which the accessible set of PUF cells are within a region more enabled for debug than a region having the inaccessible set of PUF cells. Embodiments include either of the two above first systems where the accessible set of PUF bits are not to be provided to security logic. In one embodiment, a second integrated circuit substrate includes a plurality of exposed electrical contacts. The second integrated circuit substrate also includes a first set of bit generation logic to generate a first inaccessible set of bits that are not accessible through the exposed electrical contacts. The second integrated circuit substrate also includes a second set of bit generation logic to generate a second accessible set of bits that are accessible through the exposed electrical contacts. 
It is impractical to replicate the first and second sets of bit generation logic, the first and second sets of bits are to be substantially static, and the first and second sets of bits are to have values that depend at least in part on process variations experienced during manufacture of the integrated circuit. Embodiments include the second integrated circuit substrate in which the second set of bit generation logic is within a region that is more enabled for debug than a region having the first set of bit generation logic. Embodiments include either of the above two second integrated circuit substrates in which the first inaccessible set of PUF bits are to be provided to security logic and the second accessible set of PUF bits are not to be provided to the security logic. |
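As a sketch of the characterization approach described in the first method above, in which the accessible set of PUF bits is read through the exposed contacts and its error and entropy levels are taken as a proxy for the inaccessible set, the following uses hypothetical readout data and simple per-bit statistics:

```python
# Illustrative statistics only: estimate PUF bit error level (instability across
# repeated reads) and per-bit entropy from the accessible set; per the description,
# the inaccessible set is inferred to share these characteristics.
from collections import Counter
from math import log2

def bit_error_level(reads: list[list[int]]) -> float:
    """Fraction of read bits that disagree with the per-bit majority value."""
    n_bits, n_reads = len(reads[0]), len(reads)
    flips = 0
    for i in range(n_bits):
        column = [r[i] for r in reads]
        majority_count = Counter(column).most_common(1)[0][1]
        flips += n_reads - majority_count
    return flips / (n_bits * n_reads)

def entropy_per_bit(reference: list[int]) -> float:
    """Shannon entropy (bits) of the 0/1 distribution in a reference response."""
    p1 = sum(reference) / len(reference)
    if p1 in (0.0, 1.0):
        return 0.0
    return -(p1 * log2(p1) + (1 - p1) * log2(1 - p1))

reads = [[0, 1, 1, 0, 1, 0, 0, 1],   # hypothetical repeated reads of the
         [0, 1, 1, 0, 1, 0, 1, 1],   # accessible PUF bits from one part
         [0, 1, 1, 0, 1, 0, 0, 1]]
print("estimated bit error level:", bit_error_level(reads))
print("estimated entropy per bit:", entropy_per_bit(reads[0]))
```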
A method for applying a passivation layer selectively on an exposed silicon surface from a liquid phase solution supersaturated in silicon dioxide. The immersion is conducted at substantially ambient temperature and atmospheric pressure and achieves an effective passivation layer in an abbreviated immersion time, and without subsequent heat treatment. In one embodiment, rapid coating of a wafer back side with silicon dioxide permits the use of a high-speed electroless process for plating the bond pad with a solder-enhancing material. In another embodiment, the walls of via holes and microvia holes in a silicon body may be passivated by immersion in the supersaturated solution prior to plugging the holes with conductive material. |
What is claimed is: 1. A method for forming a portion of a semiconductor device, comprising: forming at least one semiconductor device having an active surface and a back side surface, said at least one semiconductor device having at least one source, at least one drain, and at least one gate; forming conductive metallization on at least a portion of said active surface connected to one of said at least one source, said at least one drain, and said at least one gate, said conductive metallization including at least one bond pad; depositing a passivation layer over at least a portion of said active surface, said passivation layer having at least one bond pad opening therethrough to a portion of said conductive metallization; immersing said at least one semiconductor device in an aqueous solution comprising a hexafluoro acid of a semiconductor material and an oxide of said semiconductor material, said aqueous solution buffered for becoming supersaturated in said oxide; depositing said supersaturated oxide as a passivation layer on said back side surface of said at least one semiconductor device; and immersing said at least one semiconductor device in an electroless bath for plating a coating over at least a portion of said at least one bond pad, said coating including a solder wettable coating. 2. The method of claim 1, wherein said at least one semiconductor device includes a wafer having a plurality of semiconductor devices. 3. The method of claim 1, wherein said passivation layer has a thickness of about 100 Å to about 500 Å. 4. The method of claim 1, wherein said aqueous solution is buffered with boric acid. 5. The method of claim 1, further comprising buffering said aqueous solution using aluminum. 6. The method of claim 1, further comprising controlling a rate of deposition by varying a concentration of a buffer. 7. The method of claim 1, wherein said aqueous solution comprises a hexafluorosilicic acid solution saturated in silicon dioxide, filtered to remove precipitated silicon dioxide, diluted with water, and supersaturated in silicon dioxide by the addition of boric acid. 8. The method of claim 1, wherein said aqueous solution comprises a hexafluorosilicic acid solution which is diluted with water, saturated in silicon dioxide, filtered to remove precipitated silicon dioxide, and supersaturated in silicon dioxide by the addition of boric acid. 9. The method of claim 7, further comprising controlling a rate of silicon dioxide deposition by varying a concentration of said boric acid. 10. The method of claim 1, wherein an immersion time of said at least one semiconductor device in said aqueous solution is in the range of from about 1 minute to about 120 minutes. 11. The method of claim 1, wherein said aqueous solution is maintained at a temperature in the range of about 10° C. to about 80° C. 12. The method of claim 1, wherein said aqueous solution is maintained at a temperature in the range of about 20° C. to about 50° C. 13. The method of claim 1, wherein said immersion of said at least one semiconductor device in said aqueous solution is conducted at substantially atmospheric pressure. 14. The method of claim 1, wherein said aqueous solution comprises a hexafluorosilicic acid solution saturated in silicon dioxide, filtered to remove precipitated silicon dioxide, diluted with water, and supersaturated in silicon dioxide by the addition of aluminum. 15. 
The method of claim 1, wherein said aqueous solution comprises a hexafluorosilicic acid solution which is diluted with water, saturated in silicon dioxide, filtered to remove precipitated silicon dioxide, and supersaturated in silicon dioxide by the addition of aluminum. 16. A method for forming a passivating layer on a surface of at least one of a via and a microvia extending at least through a portion of a silicon member, comprising: immersing said silicon member in an aqueous solution comprising a hexafluoro acid of a semiconductor material and an oxide of said semiconductor material, said aqueous solution buffered to become supersaturated in said oxide; depositing said supersaturated oxide as a passivation layer on a silicon surface in at least one of said via and said microvia; and passivating said at least one of said via and said microvia without passivating a nonsilicon surface. 17. The method of claim 16, wherein said silicon member comprises one of an integrated circuit wafer, an integrated circuit semiconductor device, an interposer, and a carrier substrate. 18. The method of claim 17, wherein said at least one of said via and said microvia have a land diameter of between about 25 µm and about 600 µm. 19. A method for forming a semiconductor device, comprising: forming a semiconductor device having an active surface with at least one electronic device thereon, and a back side; forming conductive metallization on said active surface connected to a portion of said at least one electronic device, said conductive metallization including at least one pad site; depositing a passivation layer over said active surface, said passivation layer having pad openings therethrough communicating with said conductive metallization; immersing said semiconductor device in an aqueous solution comprising a hexafluoro acid of a semiconductor material and an oxide of said semiconductor material, said aqueous solution buffered to become supersaturated in said oxide; depositing said supersaturated oxide as a passivation layer on said back side of said semiconductor device; and immersing said semiconductor device in an electroless bath for plating a surface coating over said at least one pad site, said surface coating being solder wettable. 20. The method of claim 19, wherein each said immersion occurs while said semiconductor device is in wafer form. 21. The method of claim 19, wherein said passivation layer includes a layer having a thickness in the range of about 100 Å to about 500 Å. 22. The method of claim 19, further comprising buffering said aqueous solution using boric acid. 23. The method of claim 19, further comprising buffering said aqueous solution using aluminum. 24. The method of claim 19, further comprising varying a concentration of said aqueous solution by adding buffer to control the rate of deposition. 25. The method of claim 24, further comprising varying a concentration of additional boric acid for controlling a rate of silicon dioxide deposition. 26. The method of claim 19, wherein said aqueous solution comprises a hexafluorosilicic acid solution which is saturated in silicon dioxide by addition thereof, filtered to remove precipitated silicon dioxide, diluted with water, and supersaturated in silicon dioxide by the addition of boric acid. 27. 
The method of claim 19, wherein said aqueous solution comprises a hexafluorosilicic acid solution which is diluted with water, saturated in silicon dioxide by the addition thereof, filtered to remove precipitated silicon dioxide, and supersaturated in silicon dioxide by the addition of boric acid. 28. The method of claim 19, wherein an immersion time of said semiconductor device in said aqueous solution comprises a range of time from about 1 minute to about 120 minutes. 29. The method of claim 19, wherein said aqueous solution comprises a solution maintained at a temperature having a range of about 10° C. to about 80° C. during said immersing therein. 30. The method of claim 19, wherein said aqueous solution comprises a solution maintained at a temperature having a range of about 20° C. to about 50° C. during said immersing therein. 31. The method of claim 19, wherein said immersing of said semiconductor device in said aqueous solution is conducted at substantially atmospheric pressure. 32. The method of claim 19, wherein said aqueous solution comprises a hexafluorosilicic acid solution which is saturated in silicon dioxide, filtered to remove precipitated silicon dioxide, diluted with water, and supersaturated in silicon dioxide by the addition of aluminum. 33. The method of claim 19, wherein said aqueous solution comprises a hexafluorosilicic acid solution which is diluted with water, saturated in silicon dioxide, filtered to remove precipitated silicon dioxide, and supersaturated in silicon dioxide by the addition of aluminum. 34. A method for forming a passivating layer in one of via holes and microvia holes in a silicon member, comprising: forming at least one of said via holes and said microvia holes in said silicon member; immersing said silicon member in an aqueous solution comprising a hexafluoro acid of a semiconductor material and an oxide of said semiconductor material, said aqueous solution buffered to become supersaturated in said oxide, said silicon member immersed for a contact period at a temperature in the range of between about 0° C. and about 100° C. to deposit said supersaturated oxide as a passivation layer on a silicon surface in said at least one of said via holes and microvia holes; removing said silicon member from said aqueous solution; rinsing said silicon member; and passivating an inner surface of silicon while non-silicon surfaces are not passivated. 35. The method of claim 34, wherein said silicon member comprises one of a semiconductor wafer, a semiconductor device, an interposer, and a carrier substrate. 36. The method of claim 34, wherein said at least one of said via holes and microvia holes has a land diameter in the range of between about 25 µm and about 600 µm. |
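The supersaturation recited in these claims (buffering a hexafluorosilicic acid/silicon dioxide solution by adding boric acid or aluminum) is commonly summarized in the liquid phase deposition literature by the reactions below; this summary is added only as an illustration and is an assumption rather than language recited in the claims:

\[ \mathrm{H_2SiF_6 + 2\,H_2O \;\rightleftharpoons\; SiO_2\!\downarrow + 6\,HF} \]
\[ \mathrm{H_3BO_3 + 4\,HF \;\longrightarrow\; HBF_4 + 3\,H_2O} \]

Because the added boric acid (or, in the aluminum-buffered variants, the aluminum) consumes HF, the first equilibrium is driven toward silicon dioxide deposition, which is consistent with the claims that control the deposition rate by varying the buffer concentration.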
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to methods for electrically isolating dielectric materials. More particularly, the invention pertains to methods for electrical isolation, i.e., passivation of exposed silicon such as occurs on the back side of a semiconductor (SC) wafer comprising semiconductor devices or dice of a DRAM, SRAM, or other semiconductor die configuration. The invention also pertains to passivation of apparatus such as carrier substrates, interposer substrates for flip-chip packaging, conductive interconnects for test packages, and the like. 2. State of the Art Silicon is a basic material from which a broad range of semiconductor devices is composed. Silicon is a semiconductor while its oxidation product, silicon dioxide, acts as a dielectric (insulating) material. Thus, silicon dioxide is one of the classical insulators used to electrically isolate silicon from conductive leads, specific functional devices in electronic apparatus, and the atmosphere. Other insulators that are used include a variety of organic and inorganic compounds. The manufacture of semiconductor devices is performed by forming a plurality of the functional devices on a wafer and subsequently separating each semiconductor device by cutting along a pattern of saw lines crisscrossing the wafer. The various processes for forming a semiconductor device such as a DRAM or SRAM device may be generally characterized as including crystal growth, bare wafer formation, surface preparation, oxidation/nitridation, heat treatment, patterning, layer deposition, doping, metallization, and packaging. Typically, each of these processes includes several subprocesses. Layer deposition is generally performed by one of several processes, such as physical vapor deposition (PVD), chemical vapor deposition (CVD), sputtering, and electron-beam evaporation. In cases where the desired layer is to be an oxide of the base material, e.g., silicon dioxide, another common method includes the direct thermal oxidation of the existing silicon surface. Although the processes by which a passivation layer is "grown" on silicon by oxidizing the surface are highly developed, they have certain limitations. First, the rate of formation significantly slows as the layer thickness increases. Long times at high temperature are generally required to form thick layers, such as field oxides, surface passivation layers, and some masking oxide layers. Secondly, the growth rate is a function of wafer orientation. The <111> planes of a wafer have more silicon atoms than <100> planes, thus leading to faster formation of a SiO2 layer. In addition, other factors affecting growth rates include the types and concentrations of doping materials in the silicon and the presence of polysilicon or impurities. Differential oxidation causes the resulting SiO2 layer to have a stepped surface. It should be noted that the initial costs and operating costs of oxidation furnaces are high. Furthermore, a problem, which must be considered in thermal oxidation, is the formation of surface dislocations which may cause circuit problems. Another consideration relating to oxide growth is the inability to adequately passivate the lateral walls of a small via hole (such as a laser-formed microvia) of a multilayer device prior to deposition of a conductive material (such as tungsten) into the via hole. 
Present passivation methods for insulating the lateral walls of such small-diameter holes tend to produce uneven coverage, sometimes leading to either short circuits between the conductive via and the semiconductor or excessive filling of the via hole. In forming semiconductor devices, the electrically conductive bond pads on the active surface are grounded to the back side of the wafer. Unless neutralized by a passivation layer, the wafer back side has a net positive (+) charge. Bond pad formation typically includes applying a copper or aluminum base, then coating the base with another material so that wire bonds or conductive structures, such as solder, may be secured to the bond pad. In the case of bond pads to which bond wires are to be secured, copper is typically employed as the bond pad material. As copper forms a "slippery" oxide that is difficult to remove with a wire bonding capillary, nickel and gold adhesion layers are typically used. As copper alone will not initiate the adhesion of nickel thereto, a palladium "strike", or seed layer, is typically formed prior to conducting an electroless nickel-plating process. Efforts have been made to use more aggressive plating chemistries in order to speed the plating rate and create a higher density coating at lower cost. Such chemistries, e.g., palladium chloride in hydrochloric acid, greatly enhance the plating rate and plate density. However, unless the wafer back side is first passivated, copper pads which communicate with the silicon substrate (e.g., pads that communicate with active-device regions of transistors) and, thus, which may form a circuit directly through the silicon substrate, may be attacked by the plating chemistry and dissolve in as little as several minutes of exposure, resulting in damaged pads with performance anomalies. In addition, bath chemicals will be inordinately consumed. The use of sulfuric acid in the palladium electroless plating solution may curb such an attack of the bond pads to some extent, but does not completely resolve this problem. Once the palladium strike has been formed, nickel may be plated onto the copper and palladium by way of electroless deposition processes, then a gold layer may be formed by immersion plating processes. Aluminum is typically used as the base metal for bond pads that will receive solder balls or other discrete conductive elements. As aluminum is not itself solderable, adhesion layers are typically deposited onto aluminum bond pads. Again, nickel is often used as such an adhesion layer. Nickel does not, however, adhere well to aluminum. A zincating process, usually "double zinc", is typically used to facilitate adhesion of electrolessly deposited nickel to aluminum. 
If the back side of the silicon substrate upon which the bond pads are carried is not adequately passivated, the zincating process may etch the aluminum bond pads or deposit large zinc grains, which, in turn, adversely affects the subsequently deposited nickel layer. Moreover, in forming adhesion layers on both copper and aluminum bond pads, if the back side of the silicon substrate is not sufficiently passivated these nickel and gold layers may also be loosely deposited onto portions of the back side, which may result in the formation of particles in the plating baths, shortening the lives thereof and creating potential problems for downstream processes which are particle-sensitive, such as subsequent tape and probe processes. In the current state of the art, the general approach is to continue to use the more benign electroless plating method despite its overall cost. In an alternative approach, a back side coating such as a photoresist material is first applied to the wafer back side to cover the wafer's substrate material, e.g., silicon or germanium, and provide protection from a more aggressive plating chemistry. This method has further disadvantages in that the wafer is required to be removed from its work surface and inverted for resist application by a spin-on technique. Inversion and spin-on deposition require extra steps and equipment, are time consuming, and require forcibly clamped placement of the wafer's active surface on the flat surface of a vacuum hold-down tool, sometimes leading to physical damage to the semiconductor devices being formed on the wafer. In U.S. Pat. No. 6,022,814 of Mikoshiba et al., a method for forming a silicon dioxide layer is presented which includes the spin-coating of a resin compound having a Si-O, Si-O, O, or Si-N backbone. After application, the coated surface is heat-treated to set the resin, followed by heating at between 250° C. and the glass transition point (≈450-500° C.) for 3 to 4 hours to form silicon dioxide. This method for forming a silicon dioxide layer suffers from the spin-coating disadvantages listed above. In addition, it requires significant furnace exposure at elevated temperatures. It is desired to have a semiconductor manufacturing method to plate bond pads to achieve uniform high-quality, high-density pads. It is further desired to have such bond pad formation without wafer inversion, while minimizing chemical usage, minimizing time consumption, and reducing the use of high-cost equipment. In addition, it is desired to have a method to produce such semiconductor devices at an optimally high yield. In a paper of Antti J. Niskanen, published prior to November 2000 and entitled LIQUID PHASE DEPOSITION OF SILICON DIOXIDE, the author briefly summarizes tests to determine the possibility of liquid phase deposition of silicon dioxide. In a paper of Sampo Niskanen, dated November 2000 and titled DEVELOPMENT OF LIQUID PHASE DEPOSITION OF ZIRCONIUM OXIDE AND COMPARISON TO SILICON DIOXIDE, a summary is presented of tests comparing liquid phase depositions of zirconium dioxide and silicon dioxide to form thin films. BRIEF SUMMARY OF THE INVENTION The present invention is a method for selectively forming a dense layer of passivating oxide, e.g., silicon dioxide or zirconium oxide, onto an exposed semiconductor wafer surface, e.g., silicon or germanium. The layer is applied by submersion of the exposed semiconductor wafer surface in a liquid at low or ambient temperature. 
The passivation layer of easily controlled thickness may be formed in a limited amount of time. The method differs from conventional high-temperature thermal oxidation, chemical vapor deposition, and spin-on passivation methods, each of which requires sophisticated equipment and high manpower costs. The method does not require inversion of a semiconductor wafer such as required by prior art back side deposition using spin-on deposition, nor does it require protracted heating at a high temperature to cure the layer.The present invention is directed to the application of a silicon dioxide layer on an exposed silicon layer. The deposition is selective to silicon/silicon dioxide and may be performed by exposure to a liquid phase composition in a bath at room temperature or a temperature somewhat above room temperature, e.g., up to about 50[deg.] C. The liquid phase composition is supersaturated in silicon dioxide. The silicon dioxide deposition rate is not self-limiting, that is, it does not depend upon the layer thickness. The deposition rate may be readily controlled to provide repeatedly uniform layer thickness. Thus, for example, the method may be used to passivate the back side of a wafer or other semiconductor substrate without inverting the wafer or substrate. The method is specific to the base substrate, e.g., silicon and its oxide. Further exclusion of oxide from other surfaces may be assured by, for example, covering such surfaces with tape. Any silicon dioxide pre-existing on the silicon surface may be first removed or, alternatively, left in place for oxide deposit thereover.In one embodiment of the invention, a silicon wafer is formed for the purpose of producing a plurality of semiconductor devices, e.g., DRAM semiconductor devices, SRAM semiconductor devices or other types of semiconductor devices. After forming the semiconductor devices on the active surface of the wafer, covering the semiconductor devices with an insulative layer, and forming conductive traces on each semiconductor device, bond pads are formed to connect the traces (generally copper, aluminum, or alloys thereof) to external connectors (via wire bonds, for example). Prior to forming the bond pads by an electroless method, the wafer is submerged in a bath of supersaturated silicon dioxide at room temperature or somewhat higher, up to about 50[deg.] C., by which exposed silicon on the wafer back side becomes covered with a passivating layer of silicon dioxide.The exposed copper or aluminum metallization at each bond pad location may then be coated with immersion palladium or zinc, followed by an electroless nickel and, optionally, immersion gold, in a bath to form pads amenable to soldering or another joint with an external conductor. For example, the bond pads may then be coated with gold or solder-bonded to conductive wires. The overall process results in devices which are produced at reduced time and expense and which are more reliable than those currently produced.The liquid bath is supersaturated with respect to an oxide of a material which is capable of forming a hexafluoro salt. This includes, for example, silicon, zirconium, iron, and vanadium. In this invention, the primary passivation material of interest is silicon dioxide, but other oxides may also be used.The liquid bath of saturated silicon dioxide is formed by adding silicon dioxide and water to hexafluorosilicic acid H2SiF6. The solution may be diluted with water either before or after silicon dioxide is added to the saturation point. 
Boric acid (H3BO3) is then added to supersaturate the liquid in silicon dioxide. The silicon dioxide selectively precipitates onto the silicon/silicon oxide surface as a dense cohesive layer of uniform thickness.In addition to the application for enhancing bond pad plating, the method of the invention is applicable to the passivation of vias and microvias and to forming other layers on exposed silicon, such as on a semiconductor device wafer, interposer wafer, etc.BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGSThe nature of the present invention as well as other embodiments thereof may be more clearly understood by reference to the following detailed description of the invention, to the appended claims, and to the several drawings herein, wherein:FIGS. 1A-1D are schematic cross-sectional representations through a semiconductor device, the back side of which is depicted with a passivation layer applied in accordance with a method of the present invention;FIGS. 2A-2C are schematic cross-sectional representations through another semiconductor device, the back side of which is depicted with a passivation layer applied in accordance with teachings of the present invention;FIG. 3 is a schematic representation of a method for forming a passivation layer on silicon in accordance with the present invention and subsequent plating of bond pads of a semiconductor device;FIG. 4 is a representation of an exemplary embodiment of a method for forming a supersaturated silicon dioxide solution for passivating a silicon surface in accordance with a method of the present invention; andFIG. 5 is a representation of the processes of another embodiment of forming a supersaturated silicon dioxide solution for passivating a silicon surface in accordance with a method of the present invention.DETAILED DESCRIPTION OF THE INVENTIONWith reference to FIGS. 1A-1D and 2A-2C, different types of bond pads 12, 12' that are typically used in semiconductor devices 10, 10' are depicted. In FIG. 1A, a semiconductor device which includes copper bond pads 12 is shown. Copper bond pads 12 are typically formed on a passivation layer 14 of semiconductor device 10 and communicate with underlying, integrated circuitry of semiconductor device 10. By way of example, one or more generally downwardly extending conductive vias 16 may establish electrical communication with conductive traces 18, or "runners", that, in turn, electrically communicate with the integrated circuitry of semiconductor device 10. For example, runners 18 may lead to contact plugs 20 that provide a conductive link between runners 18 and a conductively doped silicon active-device region 22 of a transistor 24 of semiconductor device 10.In order to use a rapid electroless plating method to plate bond pads 12, 12' without incurring damage to the semiconductor device 10, it is necessary to insulate the net positive (+) charge on the semiconductor device's back side 32. FIG. 1A also shows a back side passivation layer 34 of silicon dioxide that has been formed in accordance with teachings of the present invention. In FIG. 1A, a back side passivation layer 34 of silicon dioxide has been formed by a method of this invention to electrically insulate the back side 32, preventing etching of bond pad 12, as well as other possible damage to the circuits of semiconductor devices 10 by, for example, short circuiting during plating of bond pads 12.Once back side passivation layer 34 has been formed, a palladium activation layer 26 may be formed thereon, as depicted in FIG. 
1B, such as by the aggressive, acid-accelerated electroless plating processes described previously herein. A layer 28 of electrolessly deposited nickel may then be formed on each bond pad 12, as shown in FIG. 1C, followed by a layer 29 of immersion plated gold, as illustrated in FIG. 1D. Following plating of bond pads 12, the upper surface of semiconductor device 10 may be further covered with a passivation and/or final package. It is understood that semiconductor device 10, as shown, is part of a multi-semiconductor device wafer containing a plurality, e.g., hundreds, of semiconductor devices, although the method is applicable to a single discrete semiconductor device as well.FIG. 2A illustrates a semiconductor device 10' with each of the features of semiconductor device 10 (FIG. 1A). In addition, semiconductor device 10' includes another passivation layer 15 which overlies passivation layer 14 and each bond pad 12 exposed therethrough, a redistributed bond pad 12' exposed through passivation layer 15, and a conductive redistribution trace 13 extending between passivation layer 14 and passivation layer 15 from bond pad 12 to its corresponding redistributed bond pad 12'. Redistributed bond pad 12' may be configured to receive a discrete conductive element (not shown), such as a solder ball, and, therefore, may be formed from aluminum or another material suitable for securing such a discrete conductive element. Semiconductor device 10' also includes a back side passivation layer 34 on a back side thereof.As shown in FIGS. 2B and 2C, a zincate process may be conducted on redistributed bond pad 12' (FIG. 2B) to form zinc grains 30 thereon, which facilitate adherence of an electrolessly deposited nickel layer 31 to redistributed bond pad 12' (FIG. 2C).Turning now to FIG. 3, one or more semiconductor wafers 40 are shown in a wafer carrier 42 for forming a back side silicon dioxide layer. First, at reference 46, the wafers 40 are immersed in a supersaturated silicon dioxide solution 44 to precipitate, i.e., deposit a dense passivation layer 34 (see FIGS. 1A and 2A) on the back side 32 of each wafer 40. The deposition is specific to exposed silicon (and its oxide), and substantially does not plate out on bond pads 12, 12' (FIGS. 1A and 2A, respectively) or on organic materials, such as photoresist. However, in the event that minute quantities of silicon dioxide are found to adhere to bond pads 12, 12', the latter may be precovered with tape to prevent deposition thereto. The back sides 32 of wafers 40 are shown as being in a vertical position during immersion. However, the wafer orientation appears to be irrelevant to deposition rate or layer properties in this process, as long as constant exposure to the solution 44 is maintained.In the submersion process 46, the following factors are controlled:a. The concentration of components in the supersaturated silicon dioxide solution 44 is controlled to provide sufficient silicon dioxide for the desired layer depth and insulative value. Inasmuch as deposition is specific to surfaces of silicon and its oxide, the required solution composition may be readily calculated.b. During submersion, the solution temperature generally may be between about room temperature and about 50[deg.] C. While the temperature may be even higher, e.g., up to about 90[deg.] C., there may be no reason to control the temperature at much above room temperature in most cases.c. 
The time of submersion is relatively short, typically on the order of about 1 minute to about 60 minutes, depending upon the particular application. Some applications may require longer immersion times to achieve the desired layer thickness. The deposition rate has been found to be independent of the layer thickness, but may attain a "steady-state" thickness upon long-term exposure. d. The pressure at which the exposure takes place is preferably atmospheric, or nearly so, requiring no special control. During submersion of the wafers 40, the supersaturated silicon dioxide solution 44 is preferably stirred or recycled to prevent local depletion of silicon dioxide and provide fresh solution for coating the silicon surfaces. Following formation of the back side passivation layer 34, the wafers 40 are extracted from the supersaturated silicon dioxide solution 44 and rinsed in rinsing apparatus 48 at reference 50. Solution remaining on the wafer surfaces is washed away, including any hexafluorosilicic acid, unattached precipitated silicon dioxide, and the stable complex ion BF4<->. Optionally, the rinsed wafers 40 may be dried to prevent any dilution (though slight) of the subsequent plating solution 52. However, there is no need to heat-treat the wafers 40, such as is required by some layering processes. As shown in FIG. 3, the bond pads 12, 12' (FIGS. 1A and 2A) may then be plated with nickel or other metal in an electroless process, from plating solution 52, in a plating represented at reference 54. Of course, such an electroless plating process may include activation or other preparation of the surface of bond pads 12, 12', as explained previously herein (e.g., palladium activation of copper, zincating aluminum, etc.). A subsequent rinsing process 58 is conducted using rinsing apparatus 56 before subsequent manufacturing processes, e.g., attachment of wires and packaging, are performed. Each of the indicated processes may be comprised of several subprocesses. Alternative methods for forming the supersaturated silicon dioxide solution 44 are depicted in drawing FIGS. 4 and 5. As shown in the drawing figures, the aqueous reaction solution 44 comprises an acid fluoride salt of the desired oxide, whether silicon dioxide, zirconium oxide, etc., and the solution is supersaturated in the desired oxide by the addition of a buffer, e.g., boric acid. The reactions, which take place in the formation of solution 44, specific to silicon dioxide, are as follows:

H2SiF6 + 2H2O ⇌ SiO2 + 6HF (Reaction A)

H3BO3 + 4HF → BF4<-> + H3O<+> + 2H2O (Reaction B)

It can be seen that in Reaction B, HF produced in Reaction A is consumed by the added boric acid to produce a stable complex ionic species BF4<-> (as well as hydronium ion H3O<+>), driving Reaction A to the right. The result is supersaturation of the solution with respect to SiO2, which deposits on the exposed silicon surface (and silicon dioxide surface). As shown in drawing FIG. 4, one method for making a supersaturated silicon dioxide solution is to first form an aqueous solution of hexafluorosilicic acid H2SiF6. The solution is formed at a generally high concentration, for example, about 20-50 weight percent H2SiF6. Silicon dioxide (SiO2) is then added whereby, at equilibrium, the solution is saturated with respect to the oxide and contains hydrofluoric acid. Any silicon dioxide which precipitates, together with any other solids, is then preferably removed from the solution 44 with, for example, a 0.2 [mu]m filter. 
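The proportion of boric acid needed to tie up the HF generated by Reaction A follows from combining the two reactions above. The worked stoichiometry below is not given in the source; it is derived here, purely as an illustration, from Reactions A and B as written (two units of Reaction A against three of Reaction B):

\begin{align*}
2\,\mathrm{H_2SiF_6} + 4\,\mathrm{H_2O} &\rightleftharpoons 2\,\mathrm{SiO_2} + 12\,\mathrm{HF} && \text{(Reaction A, doubled)}\\
3\,\mathrm{H_3BO_3} + 12\,\mathrm{HF} &\longrightarrow 3\,\mathrm{BF_4^-} + 3\,\mathrm{H_3O^+} + 6\,\mathrm{H_2O} && \text{(Reaction B, tripled)}\\
2\,\mathrm{H_2SiF_6} + 3\,\mathrm{H_3BO_3} &\longrightarrow 2\,\mathrm{SiO_2} + 3\,\mathrm{BF_4^-} + 3\,\mathrm{H_3O^+} + 2\,\mathrm{H_2O} && \text{(net)}
\end{align*}

On this basis, roughly 1.5 moles of boric acid per mole of hexafluorosilicic acid consumed would, in principle, scavenge all of the liberated HF; in practice the boric acid concentration is simply chosen, as described, to supersaturate the solution in silicon dioxide.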
The result is a substantially solid-free solution 44 saturated in silicon dioxide.The solution is then diluted with water. To this diluted solution 44 is added boric acid (H3BO3) at a concentration which will tie up the HF to supersaturate the solution in silicon dioxide. In this invention, a silicon-surfaced semiconductor device, wafer, interposer or other device is immersed in the supersaturated solution 44 for deposition of a silicon dioxide layer. Following completion of such deposition, the coated device is rinsed to remove extraneous materials and further processed to completion.An alternate method of the present invention is shown in FIG. 5, in which the initial concentrated hexafluorosilicic acid is first diluted with water prior to saturating with silicon dioxide. Following filtration, the method may follow substantially the same process flow shown in FIG. 4.The thickness of silicon dioxide layers which may be formed by the methods of the invention range up to about 100 nm in a single deposition. Typically, a desired layer thickness for passivating the back side of a semiconductor wafer may be about 100 to 500 Å (about 10 to about 50 nm), and other applications may use silicon dioxide layers of less than 100 Å thickness.It should be noted that in either of the foregoing methods of the present invention of FIGS. 4 and 5, aluminum may be substituted for boric acid. In this case, the aluminum reacts with HF to form AlF3, driving reaction A to the right to supersaturate the solution in silicon dioxide.Thus far, the invention has been described in terms of a passivation layer comprising silicon dioxide. Other oxides may be formed which will deposit onto an exposed silicon surface, having similar chemical routes. For example, the layer-forming solution may be configured to deposit the oxides of zirconium, titanium, vanadium and even iron.In another embodiment of the method of the present invention, a silicon dioxide-depositing solution may be formed by adding ammonia (NH3) to a hexafluorosilicic acid solution whereby the solution becomes supersaturated in silicon dioxide.In a first embodiment, already described, a passivating layer of silicon dioxide is formed on the back side of a semiconductor wafer, with many advantages over the prior art. The invention also encompasses the application of a passivating layer on the inner walls of a laser-formed via, on members such as carrier substrates, interposer substrates for flip-chip packaging, beneath interconnects for test packages, and the like. The method of the present invention is particularly useful for passivating vias and microvias such as made by lasers through silicon. The method of the present invention deposits a uniform layer of oxide on the silicon surfaces of the via hole, without covering metallization to which the via hole may extend. Previous methods tend to produce uneven deposition so that, in order to assure complete coverage, the layer must in some places be much thicker than desired. The uneven coverage also unduly limited the diameter of microvia holes. Use of the present invention avoids these problems, enabling uniform thin coatings within vias or microvias, formed easily, without prolonged exposure, and without covering nonsilicon surfaces.Although the foregoing description contains many specifics, these should not be construed as limiting the scope of the present invention, but merely as providing illustrations of some exemplary embodiments. 
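As a rough planning aid, the relationship described above (a deposition rate that does not depend on accumulated thickness, immersion times typically of about 1 to 60 minutes, and back side layers of about 100 to 500 Å, i.e., about 10 to 50 nm) can be captured in a few lines of C. This is only a sketch: the numeric deposition rate is an assumed placeholder, not a value from the source, and would have to be calibrated for the actual bath composition and temperature.

#include <stdio.h>

/* Estimate immersion time for liquid phase SiO2 deposition, assuming a
 * constant (non-self-limiting) deposition rate as described in the text.
 * The rate is a placeholder assumption and must be calibrated per bath. */
int main(void)
{
    const double rate_nm_per_min = 2.0;  /* assumed, bath- and temperature-specific */
    const double target_nm = 30.0;       /* within the 10-50 nm back side range cited */

    double minutes = target_nm / rate_nm_per_min;

    if (minutes < 1.0 || minutes > 60.0)
        printf("Estimated %.1f min is outside the typical 1-60 min window; "
               "adjust the bath or the target thickness.\n", minutes);
    else
        printf("Immerse for about %.1f minutes to deposit roughly %.0f nm.\n",
               minutes, target_nm);
    return 0;
}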
Similarly, other embodiments of the invention may be devised which do not depart from the spirit or scope of the present invention. Features from different embodiments may be employed in combination. The scope of the invention is, therefore, indicated and limited only by the appended claims and their legal equivalents, rather than by the foregoing description. All additions, deletions, and modifications to the invention, as disclosed herein, which fall within the meaning and scope of the claims are to be embraced thereby. |
Certain aspects of a method and system for protecting data during mobile communication may comprise a mobile multimedia processor that decrypts an encrypted algorithm in hardware within the mobile multimedia processor. The mobile multimedia processor may be adapted to utilize the decrypted algorithm to decrypt data in software. The mobile multimedia processor may be adapted to decrypt instructions for the encrypted algorithm as the instructions enter an instruction cache. The mobile multimedia processor may be adapted to protect the decrypted data by performing a hash operation of the decrypted data and check a result of the hash operation. |
1. A method for protecting data during mobile communication, the method comprising:decrypting an encrypted algorithm in hardware of a multimedia mobile processor; andutilizing said decrypted algorithm to decrypt data handled by said mobile multimedia processor in software.2. The method according to claim 1, further comprising decrypting instructions for said algorithm as said instructions enter an instruction cache.3. The method according to claim 1, further comprising:protecting said decrypted data by performing a hash operation of said decrypted data; andchecking a result of said hash operation.4. The method according to claim 1, further comprising storing a decryption key to said encrypted algorithm in write-only mode in said hardware of said mobile multimedia processor.5. The method according to claim 4, further comprising utilizing said stored decryption key to decrypt said encrypted algorithm in said hardware of said mobile multimedia processor.6. A machine-readable storage having stored thereon, a computer program having at least one code section for protecting data during mobile communication, the at least one code section being executable by a machine for causing the machine to perform steps comprising:decrypting an encrypted algorithm in hardware of a multimedia mobile processor; andutilizing said decrypted algorithm to decrypt data handled by said mobile multimedia processor in software.7. The machine-readable storage according to claim 6, further comprising code for decrypting instructions for said algorithm as said instructions enter an instruction cache.8. A system for protecting data during mobile communication, the system comprising:a mobile multimedia processor that decrypts an encrypted algorithm in hardware of said mobile multimedia processor; andsaid mobile multimedia processor utilizes said decrypted algorithm to decrypt data handled by said mobile multimedia processor in software.9. The system according to claim 8, wherein said mobile multimedia processor decrypts instructions for said algorithm as said instructions enter an instruction cache.10. The system according to claim 8, wherein:said mobile multimedia processor that protects said decrypted data by performing a hash operation of said decrypted data; andsaid mobile multimedia processor that checks a result of said hash operation. |
CROSS-REFERENCE TO RELATED APPLICATIONS/INCORPORATION BY REFERENCEThis application makes reference to, claims priority to, and claims the benefit of United States Provisional Patent Application Serial No. 60/652439 (Attorney Docket No. 16433US01), filed on February 12, 2005.This application makes reference to United States Application Serial No. (Attorney Docket No. 16435US02) filed on even date herewith.The above stated application is hereby incorporated herein by reference in its entirety.FIELD OF THE INVENTIONCertain embodiments of the invention relate to mobile multimedia processors. More specifically, certain embodiments of the invention relate to a method and system for digital rights management in a mobile multimedia processor.BACKGROUND OF THE INVENTIONMobile communications have changed the way people communicate and mobile phones have been transformed from a luxury item to an essential part of every day life. The use of mobile phones today is dictated by social situations, rather than hampered by location or technology. While voice connections fulfill the basic need to communicate, and mobile voice connections continue to filter even further into the fabric of every day life, various integrated mobile multimedia applications, utilizing the mobile Internet, is the next step in the mobile communication revolution.Third generation (3G) cellular networks offering various high speed access technologies and mobile telephones that have been specifically designed to utilize these technologies, fulfill demands for integrated multimedia applications supporting TV and audio applications utilizing advanced compression standards, high-resolution gaming applications, musical interfaces, peripheral interface support, etc. The processing requirements are being increased as chip designers take advantage of compression and higher bandwidths to transmit more information. For example, 3G wireless applications support bit rates from 384 kilobits (Kbits)/second to 2 megabits (Mbits)/second, allowing chip designers to provide wireless systems with multimedia capabilities, superior quality, reduced interference, and a wider coverage area.As mobile multimedia services grow in popularity and usage, factors such as power consumption, cost efficient optimization of network capacity and quality of service (QoS) will become even more essential to cellular operators than it is today. These factors may be achieved with careful network planning and operation, improvements in transmission methods, and advances in receiver techniques and chip integration solutions. To this end, carriers need technologies that will allow them to increase downlink throughput for the mobile multimedia applications support and, in turn, offer advanced QoS capabilities and speeds for consumers of mobile multimedia application services. Currently, mobile multimedia processors don't fully exploit system-on-a-chip (SOC) integration for advanced total system solution for today's mobile handsets. For example, conventional mobile processors may utilize a plurality of hardware accelerators to enable a variety of multimedia applications, which significantly increases power consumption, implementation complexity, mobile processor real estate, and ultimately terminal size. The content owners may insist on digital rights management (DRM) and the algorithm or parts of it may have to be kept secret. 
Nevertheless, periodic updates and modifications may be required.Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with some aspects of the present invention as set forth in the remainder of the present application with reference to the drawings.BRIEF SUMMARY OF THE INVENTIONA system and/or method is provided for digital right management in a mobile multimedia processor, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.According to an aspect of the invention, a method for protecting data during mobile communication is provided, the method comprising:decrypting an encrypted algorithm in hardware of a multimedia mobile processor; andutilizing said decrypted algorithm to decrypt data handled by said mobile multimedia processor in software.Advantageously, the method further comprises decrypting instructions for said algorithm as said instructions enter an instruction cache.Advantageously, the method further comprises:protecting said decrypted data by performing a hash operation of said decrypted data; andchecking a result of said hash operation.Advantageously, the method further comprises storing a decryption key to said encrypted algorithm in write-only mode in said hardware of said mobile multimedia processor.Advantageously, the method further comprises utilizing said stored decryption key to decrypt said encrypted algorithm in said hardware of said mobile multimedia processor. Advantageously, the method further comprises obscuring a location of said stored decryption key.Advantageously, the method further comprises disabling at least one interrupt before said decryption of said encrypted algorithm.Advantageously, the method further comprises enabling at least one interrupt after said decryption of said encrypted algorithm.Advantageously, the method further comprises inserting an unused memory line between said decrypted data and said decrypted algorithm to prevent corruption of said decrypted data.According to an aspect of the invention, a machine-readable storage is provided having stored thereon, a computer program having at least one code section for protecting data during mobile communication, the at least one code section being executable by a machine for causing the machine to perform steps comprising:decrypting an encrypted algorithm in hardware of a multimedia mobile processor; andutilizing said decrypted algorithm to decrypt data handled by said mobile multimedia processor in software.Advantageously, the machine-readable storage further comprises code for decrypting instructions for said algorithm as said instructions enter an instruction cache.Advantageously, the machine-readable storage further comprises code for:protecting said decrypted data by performing a hash operation of said decrypted data; andchecking a result of said hash operation.Advantageously, the machine-readable storage further comprises code for storing a decryption key to said encrypted algorithm in write-only mode in said hardware of said mobile multimedia processor.Advantageously, the machine-readable storage further comprises code for utilizing said stored decryption key to decrypt said encrypted algorithm in said hardware of said mobile multimedia processor.Advantageously, the machine-readable storage further comprises code for obscuring a location of said stored decryption key.Advantageously, the 
machine-readable storage further comprises code for disabling at least one interrupt before said decryption of said encrypted algorithm.Advantageously, the machine-readable storage further comprises code for enabling at least one interrupt after said decryption of said encrypted algorithm.Advantageously, the machine-readable storage further comprises code for inserting an unused memory line between said decrypted data and said decrypted algorithm to prevent corruption of said decrypted data.According to an aspect of the invention, a system for protecting data during mobile communication is provided, the system comprising:a mobile multimedia processor that decrypts an encrypted algorithm in hardware of said mobile multimedia processor; andsaid mobile multimedia processor utilizes said decrypted algorithm to decrypt data handled by said mobile multimedia processor in software.Advantageously, said mobile multimedia processor decrypts instructions for said algorithm as said instructions enter an instruction cache.Advantageously, said mobile multimedia processor that protects said decrypted data by performing a hash operation of said decrypted data; andsaid mobile multimedia processor that checks a result of said hash operation.Advantageously, said mobile multimedia processor stores a decryption key to said encrypted algorithm in write-only mode in said hardware of said mobile multimedia processor.Advantageously, said mobile multimedia processor utilizes said stored decryption key to decrypt said encrypted algorithm in said hardware of said mobile multimedia processor.Advantageously, said mobile multimedia processor obscures a location of said stored decryption key.Advantageously, said mobile multimedia processor disables at least one interrupt before said decryption of said encrypted algorithm.Advantageously, said mobile multimedia processor enables at least one interrupt after said decryption of said encrypted algorithm.Advantageously, said mobile multimedia processor inserts an unused memory line between said decrypted data and said decrypted algorithm to prevent corruption of said decrypted data.These and other advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGSFIG. 1A is a block diagram of an exemplary mobile multimedia system, in accordance with an embodiment of the invention.FIG. 1B is a block diagram of an exemplary mobile multimedia processor, in accordance with an embodiment of the invention.FIG. 2 is a block diagram of an exemplary system for code decryption, in accordance with an embodiment of the invention.FIG. 3 is a block diagram of an exemplary system for code decryption, in accordance with an embodiment of the invention.FIG. 4 is a flow diagram illustrating program flow within a memory stack when an encrypted code is executed, in accordance with an embodiment of the invention.FIG. 5 is a flowchart illustrating exemplary steps for protecting data during mobile communication, in accordance with an embodiment of the invention.DETAILED DESCRIPTION OF THE INVENTIONIn accordance with an embodiment of the invention, a method and system for protecting data during mobile communication may comprise a mobile multimedia processor that decrypts an encrypted algorithm in hardware within the mobile multimedia processor. 
The mobile multimedia processor may be adapted to utilize the decrypted algorithm to decrypt data in software. The mobile multimedia processor may be adapted to decrypt instructions for the encrypted algorithm as the instructions enter an instruction cache within the mobile multimedia processor. The mobile multimedia processor may be adapted to protect the plain-text code by performing a hash operation of the plain-text code and check a result of the hash operation within the mobile multimedia processor. The use of encrypted code protects the plain-text code from modifications.FIG. 1A is a block diagram of an exemplary mobile multimedia system, in accordance with an embodiment of the invention. Referring to FIG. 1A, there is shown a mobile multimedia system 105 that comprises a mobile multimedia device 105a, a TV 101 h, a PC 101 k, an external camera 101 m, external memory 101 n, and external LCD display 101 p. The mobile multimedia device 105a may be a cellular telephone or other handheld communication device. The mobile multimedia device 105a may comprise a mobile multimedia processor (MMP) 101a, an antenna 101d, an audio block 101s, a radio frequency (RF) block 101e, a baseband processing block 101f, an LCD display 101 b, a keypad 101 c, and a camera 101 g.The MMP 101a may comprise suitable circuitry, logic, and/or code and may be adapted to perform video and/or multimedia processing for the mobile multimedia device 105a. The MMP 101a may further comprise a plurality of integrated interfaces, which may be utilized to support one or more external devices coupled to the mobile multimedia device 105a. For example, the MMP 101a may support connections to a TV 101 h, a PC 101 k, ah external camera 101 m, external memory 101 n, and an external LCD display 101 p.In operation, the mobile multimedia device may receive signals via the antenna 101d. Received signals may be processed by the RF block 101e and the RF signals may be converted to baseband by the baseband processing block 101f. Baseband signals may then be processed by the MMP 101 a. Audio and/or video signals may also be received via/ transmitted to the integrated camera 101g, the TV 101h, the PC 101 k, and/or the external camera 101m. During processing, the MMP 101a may utilize the external memory 101n for storing of processed data. Processed audio data may be communicated to the audio block 101s and processed video data may be communicated to the TV 101h, LCD 101 b or the external LCD 101p, for example. The keypad 101c may be utilized for communicating processing commands and/or other data, which may be required for audio or video data processing by the MMP 101a.FIG. 1B is a block diagram of an exemplary mobile multimedia processor, in accordance with an embodiment of the invention. Referring to FIG. 1B, the mobile multimedia processor 102 may comprise suitable logic, circuitry and/or code that may be adapted to perform video and/or multimedia processing for handheld multimedia products. For example, the mobile multimedia processor 102 may be designed and optimized for video record/playback, mobile TV and 3D mobile gaming, utilizing integrated peripherals and a video processing core. 
The mobile multimedia processor 102 may comprise a video processing core 103, RAM 104, an analog block 106, a direct memory access (DMA) controller 163, an audio interface (I/F) 142, a memory stick I/F 144, secure digital (SD) card I/F 146, JTAG I/F 148, TV output I/F 150, USB I/F 152, a camera I/F 154, a host I/F 129, and an integrated-integrated circuit (I<2> C) I/F 156. The mobile multimedia processor 102 may further comprise a serial peripheral interface (SPI) 157, a universal asynchronous receiver/transmitter (UART) I/F 159, general purpose input/output (GPIO) pins 164, a display controller 162, an external memory I/F 158, and a second external memory I/F 160.The video processing core 103 may comprise suitable circuitry, logic, and/or code and may be adapted to perform video processing of data. The RAM 104 may comprise suitable logic, circuitry and/or code that may be adapted to store on-chip data such as video data. In an exemplary embodiment of the invention, the RAM 104 may be adapted to store 10 Mbits of on-chip data, for example. The size of the on-chip RAM 104 may vary depending on cost or other factors such as chip size.The analog block 106 may comprise a switch mode power supply (SMPS) block and a phase locked loop (PLL) block. In addition, the analog block 106 may comprise an on-chip SMPS controller, which may be adapted to generate its core voltage. The core voltage may be software programmable according to, for example, speed demands on the mobile multimedia processor 102, allowing further control of power management.In an exemplary embodiment of the invention, the normal core operating range may be about 0.8 V - 1.2 V and may be reduced to about 0.6 V during hibernate mode. The analog block 106 may also comprise a plurality of PLL's that may be adapted to generate about 195 kHz - 200 MHz clocks, for example, for external devices. Other voltages and clock speeds may be utilized depending on the type of application. The mobile multimedia processor 102 may comprise a plurality of power modes of operation, for example, run, sleep, hibernate and power down. In accordance with an embodiment of the invention, the mobile multimedia processor 102 may comprise a bypass mode that may allow a host to access memory mapped peripherals in power down mode, for example. The mobile multimedia processor 102 may be adapted to directly control the display during normal operation in bypass mode. A host may be able to maintain the display while the mobile multimedia processor 102 is in standby mode.The audio block 108 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via an inter-IC sound (I<2> S), pulse code modulation (PCM) or audio codec (AC'97) interface 142 or other suitable interface, for example. In the case of an AC'97 and/or an I<2> S interface, suitable audio controller, processor and/or circuitry may be adapted to provide AC'97 and/or I<2> S audio output respectively, in either master or slave mode. In the case of the PCM interface, a suitable audio controller, processor and/or circuitry may be adapted to allow input and output of telephony or high quality stereo audio. The PCM audio controller, processor and/or circuitry may comprise independent transmit and receive first in first out (FIFO) buffers and may use DMA to further reduce processor overhead. The audio block 108 may also comprise an audio in, audio out port and a speaker/microphone port (not illustrated in FIG. 
1 B).The mobile multimedia device 100 may comprise at least one portable memory input/output (I/O) block. In this regard, the memorystick block 110 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a memorystick pro interface 144, for example. The SD card block 112 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a SD input/output (I/O) interface 146, for example. A multimedia card (MMC) may also be utilized to communicate with the mobile multimedia processor 102 via the SD input/output (I/O) interface 146, for example. The mobile multimedia device 100 may comprise other portable memory I/O blocks such an SD I/O card.The debug block 114 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a joint test action group (JTAG) interface 148, for example. The debug block 114 may be adapted to access the address space of the mobile multimedia processor 102 and may be adapted to perform boundary scan via an emulation interface. Other test access ports (TAPs) may be utilized. The phase alternate line (PAL)/ national television standards committee (NTSC) TV output I/F 150 may be utilized for communication with a TV, and the universal serial bus (USB) 1.1, or other variant thereof, slave port I/F 152 may be utilized for communications with a PC, for example. The cameras 120 and/or 122 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a multiformat raw CCIR 601 camera interface 154, for example. The camera I/F 154 may utilize windowing and sub-sampling functions, for example, to connect the mobile multimedia processor 102 to a mobile TV front end.The mobile multimedia processor 102 may also comprise a plurality of serial interfaces, such as the USB I/F 152, an inter-integrated circuit (I<2> C) master I/F 156, a serial peripheral interface (SPI) 157, and a universal asynchronous receiver/transmitter (UART) I/F 159 for Bluetooth or IrDA. The I<2> C master interface 156 may comprise suitable circuitry, logic, and/or code and may be adapted to control image sensors and may be a connected to smart batteries and other peripherals. The SPI master interface 157 may comprise suitable circuitry, logic, and/or code and may be utilized to control image sensors. Two chip selects may be provided, for example, to work in a polled mode with interrupts or via a DMA controller 163. Furthermore, the mobile multimedia processor 102 may comprise a plurality of general purpose I/O (GPIO) pins 164, which may be utilized for user defined I/O or to connect to the internal peripherals. The display controller 162 may comprise suitable circuitry, logic, and/or code and may be adapted to support multiple displays with XGA resolution, for example, and to handle 8/9/16/18/21-bit video data.The baseband flash memory 124 may be adapted to receive data from the mobile multimedia processor 102 via an 8/16 bit parallel host interface 129, for example. The host interface 129 may be adapted to provide two channels with independent address and data registers through which a host processor may read and/or write directly to the memory space of the mobile multimedia processor 102. 
The baseband processing block 126 may comprise suitable logic, circuitry and/or code that may be adapted to convert RF signals to baseband and communicate the baseband processed signals to the mobile multimedia processor 102 via the host interface 129, for example. The RF processing block 130 may comprise suitable logic, circuitry and/or code that may be adapted to receive signals via the antenna 132 and to communicate RF signals to the baseband processing block 126. The host interface 129 may comprise a dual software channel with a power efficient bypass mode. The main LCD 134 may be adapted to receive data from the mobile multimedia processor 102 via a display controller 162 and/or from a second external memory interface 160, for example. The display controller 162 may comprise suitable logic, circuitry and/or code and may be adapted to drive an internal TV out function or be connected to a range of LCDs. The display controller 162 may be adapted to support a range of screen buffer formats and may utilize direct memory access (DMA) to access the buffer directly and increase video processing efficiency of the video processing core 103. Both NTSC and PAL raster formats may be generated by the display controller 162 for driving the TV out. Other formats, for example SECAM, may also be supported. In one embodiment of the invention, the display controller 162 may be adapted to support a plurality of displays, such as an interlaced display, for example a TV, and/or a non-interlaced display, such as an LCD. The display controller 162 may also recognize and communicate a display type to the DMA controller 163. In this regard, the DMA controller 163 may fetch video data in an interlaced or non-interlaced fashion for communication to an interlaced or non-interlaced display coupled to the mobile multimedia processor 102 via the display controller 162. The substitute LCD 136 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via a second external memory interface, for example. The mobile multimedia processor 102 may comprise an RGB external data bus. The mobile multimedia processor 102 may be adapted to scale image output with pixel level interpolation and a configurable refresh rate. The optional flash memory 138 may comprise suitable logic, circuitry and/or code that may be adapted to communicate with the mobile multimedia processor 102 via an external memory interface 158, for example. The optional SDRAM 140 may comprise suitable logic, circuitry and/or code that may be adapted to receive data from the mobile multimedia processor 102 via the external memory interface 158, for example. The external memory I/F 158 may be utilized by the mobile multimedia processor 102 to connect to external SDRAM 140, SRAM, Flash memory 138, and/or external peripherals, for example. Control and timing information for the SDRAM 140 and other asynchronous devices may be configurable by the mobile multimedia processor 102. The mobile multimedia processor 102 may further comprise a secondary memory interface 160 to connect to memory-mapped LCD and external peripherals, for example. The secondary memory interface 160 may comprise suitable circuitry, logic, and/or code and may be utilized to connect the mobile multimedia processor 102 to slower devices without compromising the speed of external memory access. 
The secondary memory interface 160 may provide 16 data lines, for example, 6 chip select/address lines, and programmable bus timing for setup, access and hold times, for example. The mobile multimedia processor 102 may be adapted to provide support for NAND/NOR Flash including NAND boot and high speed direct memory access (DMA), for example.In operation, the mobile multimedia processor 102 may be adapted to support multiple display formats for displaying processed video data. For example, interlaced and/or non-interlaced external displays may be connected to the mobile multimedia processor 102 via the display controller 162. The display controller 162 may communicate the external display type to the DMA controller 163. The DMA controller 163 may then access the on-chip RAM 104 and may fetch processed video data in an interlaced or non-interlaced format, corresponding to the external display type.FIG. 2 is a block diagram of an exemplary system for code decryption, in accordance with an embodiment of the invention. Referring to FIG. 2, there is shown a memory block 202, an instruction fetch block 204, a decryption block 206, a decoder block 208, a status register 210, a read only memory (ROM) 212 and a decision block 214.The memory block 202 may comprise suitable logic, circuitry and/or code that may be adapted to store data and/or instructions for use. The memory block 202 may be coupled to the instruction fetch block 204. The instruction fetch block 204 may comprise suitable logic, circuitry and/or code that may be adapted to fetch instructions from the memory block 202 and may store instructions in the memory block 202 and/or the ROM 212. The decryption block 206 may comprise suitable logic, circuitry and/or code that may be adapted to receive instructions and/or data from the instruction fetch block 204 and the ROM 212. The decryption block 206 may be adapted to modify the order of data received and transmit a set of instructions and/or data to the decoder block 208. The decoder block 208 may comprise suitable logic, circuitry and/or code that may be adapted to receive data and/or instructions from the decryption block 206 and execute them.The status register 210 may comprise suitable logic, circuitry and/or code that may be adapted to receive, hold and/or transfer data and/or instructions to the ROM 212. The status register 210 may also be adapted to hold an address of a storage location or hold data that may be retrieved or sent to storage. The ROM 212 may comprise suitable logic, circuitry and/or code that may be adapted to receive a set of data and/or instructions from the instruction fetch block 204 and the status register 210. The ROM 212 may be adapted to store and/or transmit data to the decryption block 206. The decision block 214 may comprise suitable logic, circuitry and/or code that may be adapted to determine if a value of the status register 210 is greater than zero. If the value of the status register 210 is greater than zero, single-step debugging may be disabled.A plurality of bits, for example 3 bits, E2 - E0 may be added to the status register 210. If the value of these 3 bits is zero, the processor may work in a normal mode of operation. If the value of these 3 bits is non-zero, they may define one of 7 encryption modes. The code may be decrypted as it is fed into the instruction decoder block 208. As a result, plain-text code may not be visible in the memory block 202 or in trace buffers, for example. 
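The mode selection just described can be illustrated with a short C sketch. The register layout, bit positions and helper names below are hypothetical; the description states only that three bits E2-E0 of the status register 210 select among a normal mode and one of seven encryption modes, and that single-step debugging is disabled whenever their value is non-zero.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical layout: E2-E0 are taken to occupy the three least
 * significant bits of the status register. A value of 0 selects the
 * normal (plain-text) mode; values 1-7 select one of seven
 * encryption modes. */
#define STATUS_ENC_MODE_MASK 0x7u

static inline unsigned encryption_mode(uint32_t status_reg)
{
    return status_reg & STATUS_ENC_MODE_MASK;   /* E2..E0 */
}

static inline bool single_step_allowed(uint32_t status_reg)
{
    /* Single-step debugging is disabled whenever an encryption mode
     * is active, so decrypted code cannot be stepped through. */
    return encryption_mode(status_reg) == 0u;
}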
Single-step debugging may be disabled to prevent single stepping through the code. The operation may be tracked by any changes in the contents of the status register 210 instead of tracking the operation by the instructions executed, for example. There may be several cycles of the one-time pad within the encrypted code as the size of the ROM 212 may be limited. Code relocatability may be dependent on, for example, the number of low order address bits used in the pad. The protected digital rights management (DRM) encryption algorithm may be embedded along with a device key for CPRM. The encryption algorithm may also be adapted to move code around and add extra instructions to obscure the location of the device key and to protect against an attack, where a hacker may collect multiple copies of the device key. In accordance with an embodiment of the invention, the encryption algorithm may also be adapted to protect the code against tampering by performing a hash of itself and checking the result at various points during operation. FIG. 3 is a block diagram of an exemplary system for code decryption, in accordance with an embodiment of the invention. Referring to FIG. 3, there is shown a command decryption block 302, a plurality of multiplexers MUX 304 and MUX 308, an instruction cache block 306, an instruction fetch block 310 and an instruction decoder block 312. The command decryption block 302 may comprise suitable logic, circuitry and/or code that may be adapted to receive data, for example, 256 byte data and/or instructions and decrypt the code and/or data during a secure mode of operation. The MUX 304 may comprise suitable logic, circuitry and/or code that may be adapted to select between an encrypted instruction and/or data and a decrypted instruction and/or data from the command decryption block 302. When the MUX 304 is enabled in secure mode, it may be adapted to select a signal from the command decryption block 302. The instruction cache block 306 may comprise suitable logic, circuitry and/or code that may be adapted to store instructions temporarily for immediate access by the instruction fetch block 310. The data stored in the instruction cache block 306 may be, for example, 256 byte wide data. The MUX 308 may comprise suitable logic, circuitry and/or code that may be adapted to select between instructions and/or data from the instruction cache block 306 and directly from the MUX 304. The instruction fetch block 310 may comprise suitable logic, circuitry and/or code that may be adapted to fetch instructions from the memory. The instruction decoder block 312 may comprise suitable logic, circuitry and/or code that may be adapted to receive data and/or instructions from the instruction fetch block 310 and decode the data and/or instructions. The command decryption block 302 may be adapted to handle a mixture of encrypted and plain text code. Interrupts may be handled by the plain text code. A plain text copy of the encrypted code may not be available as the instructions and/or data stored in the instruction cache block 306 may be read only by the instruction decoder block 312. During code decryption, data lines to the instruction cache block 306 and/or the instruction decoder block 312 may be stalled until code decryption is finished. The code may be encrypted on a secure host and stored on a device. A key utilized to decrypt the code may be stored on a non-volatile RAM that may be written once and may be read by the command decryption block 302. FIG. 
4 is a flow diagram illustrating program flow within a memory stack when an encrypted code is executed, in accordance with an embodiment of the invention. Referring to FIG. 4, there is shown a memory stack 400, a plain-text function 420, a jump2crypted function 422, a run_crypted function 424 and an encrypted function 426.The plain-text function 420 may comprise suitable logic and/or code that may be adapted to access an encrypted function 426. The jump2crypted function 422 may comprise suitable logic and/or code that may be adapted to switch on code decryption and call the run_crypted function 424. The jump2crypted function 422 may act as a wrapper for the run_crypted function 424. The run_crypted function 424 may comprise suitable logic and/or code that may be adapted to call the requested encrypted function 426. The encrypted function 426 may not be directly called from plain-text code, as the code decryption may not be switched on. If several different sections are utilized for encrypted code, each section may require its own jump2crypted function 422 and run_crypted function 424. The encrypted function 426 may not be adapted to call plain-text functions directly as the code decryption may not be switched off.In step 402, the plain-text function 420 may access an encrypted function 426 by calling the jump2crypted function 422. In step 404, the jump2crypted function 422, which acts as a wrapper for the run_crypted function 424, may switch on code decryption and call the run_crypted function 424. In step 406, the run_crypted function 424 may call the requested encrypted function 426. Encrypted functions may not be directly called from plain-text code, as the code decryption may not be switched on. In instances where several different sections are utilized for encrypted code, each section may be adapted to require its own jump2crypted function 422 and run_crypted function 424. Notwithstanding, in step 408, after execution of the encrypted function 426, control may return to the run_crypted function 424. In step 410, the run_crypted function 424 may switch off code decryption and return control to jump2crypted function 422. In step 412, the jump2crypted function 422 may return control to the calling plain-text function 420.In accordance with an embodiment of the invention, when a system is running under a secure mode, encrypted code that may be stored may be decrypted on the fly when executed. The code decryption utilized in a secure mode may be adapted to work on memory lines, for example, 32 byte wide memory lines. These memory lines may not contain data or plain-text code as it may result in incorrect code decryption during runtime. A tool, for example, a MetaWare tool may be utilized to separate plain-text code, encrypted code and data in the memory. A linker, for example, a MetaWare linker may be adapted to automatically allocate a required amount of memory for each type of code.In order to enable/disable code decryption in a controlled manner, the sections(s) of memory containing encrypted code may be entered using a real time interrupt (rti) instruction. An encrypted function may be adapted to directly call other encrypted functions but may not be able to call plain-text functions. Code encryption may be performed in sections, wherein sections may be either encrypted or in plain text. The code may be encrypted but the data may not be encrypted. 
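The call sequence of FIG. 4, together with the interrupt handling and tamper check mentioned above, can be sketched in C. The helper functions, symbols and bodies below are hypothetical stand-ins; the description gives the flow (the wrapper switches code decryption on, the encrypted function runs, decryption is switched off and control returns) but no source code.

/* Hypothetical platform helpers; controlling the decryption hardware
 * and interrupt state is implementation specific and not given in the
 * source. */
extern void decryption_on(void);
extern void decryption_off(void);
extern unsigned interrupts_save_and_disable(void);
extern void interrupts_restore(unsigned saved_state);

extern int encrypted_function(int arg);   /* resides in the encrypted section */
extern unsigned hash_region(const void *start, const void *end);

/* run_crypted: calls the requested encrypted function and, on return,
 * switches code decryption off before handing control back to
 * jump2crypted (steps 406-410 of FIG. 4). */
static int run_crypted(int arg)
{
    int result = encrypted_function(arg);
    decryption_off();
    return result;
}

/* jump2crypted: plain-text wrapper. Interrupts are disabled and code
 * decryption is switched on before entering the encrypted section; the
 * previous interrupt state is restored once control comes back, and
 * the result is returned to the calling plain-text function
 * (steps 402-404 and 412 of FIG. 4). */
int jump2crypted(int arg)
{
    unsigned saved = interrupts_save_and_disable();
    decryption_on();
    int result = run_crypted(arg);
    interrupts_restore(saved);
    return result;
}

/* Optional tamper check, as mentioned earlier: hash the encrypted
 * region and compare the result against an expected value at various
 * points during operation. The section boundary symbols are
 * hypothetical. */
extern const unsigned char crypt_section_start[], crypt_section_end[];

int code_intact(unsigned expected_hash)
{
    return hash_region(crypt_section_start, crypt_section_end) == expected_hash;
}

Each encrypted section would need its own such wrapper pair, as noted above.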
Encrypted code and data may not be mixed within the same memory line, for example, for 32 consecutive addresses starting at multiples of 32 as the data may be adapted to change during program execution and alter the code decryption. A branch instruction or an if instruction may be utilized instead of a switch instruction as the code may not be decrypted properly when using a switch instruction because a lookup table that may be required may be stored in a data cache instead of an instruction cache. Interrupts may be disabled while running encrypted code, as they may not switch on/off the code decryption properly. The jump2crypted function 422 may comprise suitable logic and/or code that may be adapted to disable any interrupts while the run_crypted function 424 may comprise suitable logic and/or code that may be adapted to restore a previous state of operation. An array of constants within an encrypted code may not be encrypted, as they may be stored in a data register, for example. The array of constants may be replaced by a function, which may be adapted to return a value of the constant by accessing an index of the array of constants. The array of constants may then be encrypted by utilizing move or store commands to store immediate values of the constants. A linker, for example, a MetaWare linker may be utilized to align the code according to the secure mode requirements. A separate section may be defined in a top-level code file for any code that may be encrypted. For example, a code section may be inserted in a command file, which may generate a special memory area, for example, .crypt of a required size for the encrypted code. The memory area .crypt may be address aligned and may be generated if there is encrypted code. An unused memory line, for example, a 32 byte memory line may be generated between the end of the plain-text code and the encrypted code to prevent corrupting the code decryption. For example, C code that may be stored in the memory area .crypt may be selected by inserting it between a pragma, #pragma code(".crypt") and #pragma code(). A linker toggle, for example, each_function_in_its_own_section may be switched off when compiling modules containing encrypted code for the pragmas to take effect. In order to check if an encrypted code has been moved to an encrypted section, a driver option, for example, -Hldopt=-m may be utilized, which may generate a memory map of all the sections. A program, for example, a C program encrypt_code.c may be utilized to encrypt the code. The program may require a start address, an end address of the encrypted code in the memory and the memory content in binary format as arguments for the program. FIG. 5 is a flowchart illustrating exemplary steps for protecting data during mobile communication, in accordance with an embodiment of the invention. Referring to FIG. 5, exemplary steps may start at step 502. In step 504, the decryption of the plain-text code may be either enabled or disabled. In step 506, a hash operation of the plain-text code or decrypted data may be performed and the result may be checked to determine if the plain-text code was modified. In step 508, the location of the decryption key may be obscured. In step 510, the decryption key may be stored in hardware in a write-only mode or encrypted part of the code. In step 512, the decryption key may be utilized to decrypt the algorithm. In step 514, the instructions may be decrypted as they enter an instruction cache. The mobile multimedia processor (MMP) 101a (FIG. 
FIG. 5 is a flowchart illustrating exemplary steps for protecting data during mobile communication, in accordance with an embodiment of the invention. Referring to FIG. 5, exemplary steps may start at step 502. In step 504, the decryption of the plain-text code may be either enabled or disabled. In step 506, a hash operation of the plain-text code or decrypted data may be performed and the result may be checked to determine if the plain-text code was modified. In step 508, the location of the decryption key may be obscured. In step 510, the decryption key may be stored in hardware in a write-only mode or in an encrypted part of the code. In step 512, the decryption key may be utilized to decrypt the algorithm. In step 514, the instructions may be decrypted as they enter an instruction cache. The mobile multimedia processor (MMP) 101a (FIG. 1A) may be adapted to decrypt instructions for the encrypted algorithm as the instructions enter an instruction cache, for example, instruction cache block 306 (FIG. 3). In step 516, the code decryption may be switched ON/OFF using at least one interrupt. In step 518, the data may be decrypted in software. Control then passes to end step 520.

In accordance with an embodiment of the invention, a method and system for protecting data during mobile communication may comprise a mobile multimedia processor (MMP) 101a (FIG. 1A) that decrypts an encrypted algorithm in hardware. The mobile multimedia processor, for example, MMP 101a may be adapted to utilize the decrypted algorithm to decrypt data in software. The mobile multimedia processor (MMP) 101a may be adapted to decrypt instructions for the encrypted algorithm as the instructions enter an instruction cache, for example, instruction cache block 306. The instruction cache block 306 (FIG. 3) may be adapted to store instructions temporarily for immediate access by the instruction fetch block 310. The data stored in the instruction cache block 306 may be 256 bytes wide, for example. The mobile multimedia processor, for example, MMP 101a may be adapted to protect the decrypted data by performing a hash operation of the decrypted data and checking a result of the hash operation.
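The hash-and-compare check can be illustrated with a short sketch. This is not the MMP's actual mechanism: the FNV-1a hash and the verify_region() helper are illustrative stand-ins (a real implementation would more likely use a hardware hash engine or a cryptographic digest); only the idea of hashing the protected region and comparing against a stored reference follows the description above.

```c
/* Minimal sketch of the integrity check: hash the plain-text/decrypted
 * region and compare against a reference digest recorded at build time.
 * The hash and helper names are illustrative stand-ins only. */

#include <stddef.h>
#include <stdint.h>

static uint32_t hash_region(const uint8_t *p, size_t len)
{
    uint32_t h = 2166136261u;               /* FNV-1a, for illustration only */
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

/* Returns nonzero if the region still matches the expected digest,
 * i.e. the protected code/data has not been modified. */
int verify_region(const uint8_t *region, size_t len, uint32_t expected)
{
    return hash_region(region, len) == expected;
}
```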
The mobile multimedia processor, for example, MMP 101a may be adapted to store a decryption key to the encrypted algorithm in write-only mode in hardware. The mobile multimedia processor, for example, MMP 101a may be adapted to utilize the stored decryption key to decrypt the encrypted algorithm in hardware. The mobile multimedia processor, for example, MMP 101a may be adapted to modify instructions in the encrypted algorithm. The mobile multimedia processor, for example, MMP 101a may be adapted to obscure a location of the stored decryption or DRM key. The mobile multimedia processor, for example, MMP 101a may be adapted to disable at least one interrupt before the decryption of the encrypted algorithm. The mobile multimedia processor, for example, MMP 101a may be adapted to enable at least one interrupt after the decryption of the encrypted algorithm. The mobile multimedia processor, for example, MMP 101a may be adapted to insert an unused memory line between the decrypted data and the decrypted algorithm to prevent corruption of the decrypted data.

Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.

The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.

While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.
Self-aligned via and plug patterning using diagonal hardmasks for improved overlay in fabricating back end of line (BEOL) interconnects is described. In an example, a method of fabricating an interconnect structure for an integrated circuit involves forming a first hardmask layer above an interlayer dielectric layer disposed above a substrate. The first hardmask layer includes a plurality of first hardmask lines having a first grating in a first direction and comprising one or more sacrificial materials interleaved with the first grating. The method also involves forming a second hardmask layer above the first hardmask layer. The second hardmask layer includes a plurality of second hardmask lines having a second grating in a second direction, diagonal to the first direction. The method also involves, using the second hardmask layer as a mask, etching the first hardmask layer to form a patterned first hardmask layer. |
CLAIMS

What is claimed is:

1. An interconnect structure for an integrated circuit, the interconnect structure comprising: an interlayer dielectric layer disposed above a substrate; and a grating structure disposed above the interlayer dielectric layer and comprising co-planar alternating dielectric hardmask lines and conductive lines, wherein one or more of the conductive lines extends into the interlayer dielectric layer, and one or more of the conductive lines does not extend into the interlayer dielectric layer.

2. The interconnect structure of claim 1, wherein one of the one or more of the conductive lines that extends into the interlayer dielectric layer extends entirely through the interlayer dielectric layer to provide a conductive via to an underlying metallization layer disposed between the substrate and the interlayer dielectric layer.

3. The interconnect structure of claim 1, wherein one of the one or more of the conductive lines that extends into the interlayer dielectric layer extends only partially into the interlayer dielectric layer to provide a conductive metal line for a metallization layer comprising the interlayer dielectric layer.

4. The interconnect structure of claim 1, wherein the grating structure is disposed on the interlayer dielectric layer.

5. A method of fabricating an interconnect structure for an integrated circuit, the method comprising: forming a first hardmask layer above an interlayer dielectric layer disposed above a substrate, the first hardmask layer comprising a plurality of first hardmask lines having a first grating in a first direction and comprising one or more sacrificial materials interleaved with the first grating; forming a second hardmask layer above the first hardmask layer, the second hardmask layer comprising a plurality of second hardmask lines having a second grating in a second direction, diagonal to the first direction; and using the second hardmask layer as a mask, etching the first hardmask layer to form a patterned first hardmask layer, the etching comprising removing a portion of the one or more sacrificial materials.

6. The method of claim 5, wherein forming the first hardmask layer comprises forming the plurality of first hardmask lines using a pitch-halving or pitch-quartering patterning process relative to a minimum critical dimension (CD), and wherein forming the second hardmask layer comprises forming the plurality of second hardmask lines at the minimum CD.

7. The method of claim 5, wherein forming the second hardmask layer comprises forming the plurality of second hardmask lines having the second grating 45 degrees to the first direction.

8. The method of claim 5, further comprising: removing the second hardmask layer subsequent to etching the first hardmask layer.

9. The method of claim 8, further comprising: subsequent to removing the second hardmask layer, forming a plurality of photobuckets in the patterned first hardmask; exposing, developing and removing fewer than all of the plurality of photobuckets to reveal portions of the interlayer dielectric layer; etching entirely through the revealed portions of the interlayer dielectric layer to form via openings; and forming metal vias in the via openings.
10. The method of claim 8, further comprising: subsequent to removing the second hardmask layer, forming a plurality of photobuckets in the patterned first hardmask; exposing, developing and removing fewer than all of the plurality of photobuckets to reveal portions of the interlayer dielectric layer; etching only partially through the revealed portions of the interlayer dielectric layer to form trenches; and forming metal lines in the trenches.

11. The method of claim 8, wherein the plurality of second hardmask lines comprises a carbon-based material, and wherein removing the second hardmask layer comprises using an ashing process.

12. A method of fabricating an interconnect structure for an integrated circuit, the method comprising: forming a plurality of hardmask lines having a grating pattern above an interlayer dielectric layer disposed above a substrate; forming a first plurality of photobuckets interleaved with the plurality of hardmask lines, the first plurality of photobuckets corresponding to a first half of all possible via locations in a metallization layer of the interconnect structure; exposing, developing and removing fewer than all of the first plurality of photobuckets to reveal first portions of the interlayer dielectric layer; and etching entirely through the revealed first portions of the interlayer dielectric layer to form first via openings in the interlayer dielectric layer.

13. The method of claim 12, further comprising: removing all remaining of the first plurality of photobuckets; and, subsequently, forming a second plurality of photobuckets interleaved with the plurality of hardmask lines, the second plurality of photobuckets corresponding to a second half of all possible via locations in the metallization layer of the interconnect structure; exposing, developing and removing fewer than all of the second plurality of photobuckets to reveal second portions of the interlayer dielectric layer; and etching entirely through the revealed second portions of the interlayer dielectric layer to form second via openings in the interlayer dielectric layer.

14. The method of claim 13, further comprising: removing all remaining of the second plurality of photobuckets; and, subsequently, forming metal vias in the first and second via openings of the interlayer dielectric layer.

15. The method of claim 12, wherein forming the first plurality of photobuckets interleaved with the plurality of hardmask lines comprises forming each of the first plurality of photobuckets to have a nearest neighbor distance of a factor of the square root of two multiplied by a line width of the grating pattern of the plurality of hardmask lines.

16. The method of claim 12, wherein exposing, developing and removing fewer than all of the first plurality of photobuckets comprises exposing to extreme ultra-violet (EUV) irradiation.
17. A method of fabricating an interconnect structure for an integrated circuit, the method comprising: forming a plurality of hardmask lines having a grating pattern above an interlayer dielectric layer disposed above a substrate; forming a first plurality of photobuckets interleaved with the plurality of hardmask lines, the first plurality of photobuckets corresponding to a first half of all possible plug locations in a metallization layer of the interconnect structure; exposing, developing and removing fewer than all of the first plurality of photobuckets to reveal first portions of the interlayer dielectric layer; and etching only partially through the revealed first portions of the interlayer dielectric layer to form first trenches in the interlayer dielectric layer.

18. The method of claim 17, further comprising: removing all remaining of the first plurality of photobuckets; and, subsequently, forming a second plurality of photobuckets interleaved with the plurality of hardmask lines, the second plurality of photobuckets corresponding to a second half of all possible plug locations in the metallization layer of the interconnect structure; exposing, developing and removing fewer than all of the second plurality of photobuckets to reveal second portions of the interlayer dielectric layer; and etching only partially through the revealed second portions of the interlayer dielectric layer to form second trenches in the interlayer dielectric layer.

19. The method of claim 18, further comprising: removing all remaining of the second plurality of photobuckets; and, subsequently, forming metal lines in the first and second trenches of the interlayer dielectric layer.

20. The method of claim 17, wherein forming the first plurality of photobuckets interleaved with the plurality of hardmask lines comprises forming each of the first plurality of photobuckets to have a nearest neighbor distance of a factor of the square root of two multiplied by a line width of the grating pattern of the plurality of hardmask lines.

21. The method of claim 17, wherein exposing, developing and removing fewer than all of the first plurality of photobuckets comprises exposing to extreme ultra-violet (EUV) irradiation.
DIAGONAL HARDMASKS FOR IMPROVED OVERLAY IN FABRICATING BACK END OF LINE (BEOL) INTERCONNECTS

TECHNICAL FIELD

Embodiments of the invention are in the field of semiconductor structures and processing and, in particular, diagonal hardmasks for improved overlay in fabricating back end of line (BEOL) interconnects.

BACKGROUND

For the past several decades, the scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips. For example, shrinking transistor size allows for the incorporation of an increased number of memory or logic devices on a chip, lending to the fabrication of products with increased capacity. The drive for ever-more capacity, however, is not without issue. The necessity to optimize the performance of each device becomes increasingly significant.

Integrated circuits commonly include electrically conductive microelectronic structures, which are known in the arts as vias, to electrically connect metal lines or other interconnects above the vias to metal lines or other interconnects below the vias. Vias are typically formed by a lithographic process. Representatively, a photoresist layer may be spin coated over a dielectric layer, the photoresist layer may be exposed to patterned actinic radiation through a patterned mask, and then the exposed layer may be developed in order to form an opening in the photoresist layer. Next, an opening for the via may be etched in the dielectric layer by using the opening in the photoresist layer as an etch mask. This opening is referred to as a via opening. Finally, the via opening may be filled with one or more metals or other conductive materials to form the via.

In the past, the sizes and the spacing of vias has progressively decreased, and it is expected that in the future the sizes and the spacing of the vias will continue to progressively decrease, for at least some types of integrated circuits (e.g., advanced microprocessors, chipset components, graphics chips, etc.). One measure of the size of the vias is the critical dimension of the via opening. One measure of the spacing of the vias is the via pitch. Via pitch represents the center-to-center distance between the closest adjacent vias.

When patterning extremely small vias with extremely small pitches by such lithographic processes, several challenges present themselves, especially when the pitches are around 70 nanometers (nm) or less and/or when the critical dimensions of the via openings are around 35 nm or less. One such challenge is that the overlay between the vias and the overlying interconnects, and the overlay between the vias and the underlying landing interconnects, generally need to be controlled to high tolerances on the order of a quarter of the via pitch. As via pitches scale ever smaller over time, the overlay tolerances tend to scale with them at an even greater rate than lithographic equipment is able to keep up.

Another such challenge is that the critical dimensions of the via openings generally tend to scale faster than the resolution capabilities of the lithographic scanners. Shrink technologies exist to shrink the critical dimensions of the via openings.
However, the shrink amount tends to be limited by the minimum via pitch, as well as by the ability of the shrink process to be sufficiently optical proximity correction (OPC) neutral, and to not significantly compromise line width roughness (LWR) and/or critical dimension uniformity (CDU).

Yet another such challenge is that the LWR and/or CDU characteristics of photoresists generally need to improve as the critical dimensions of the via openings decrease in order to maintain the same overall fraction of the critical dimension budget. However, currently the LWR and/or CDU characteristics of most photoresists are not improving as rapidly as the critical dimensions of the via openings are decreasing.

A further such challenge is that the extremely small via pitches generally tend to be below the resolution capabilities of even extreme ultraviolet (EUV) lithographic scanners. As a result, commonly two, three, or more different lithographic masks may be used, which tend to increase the costs. At some point, if pitches continue to decrease, it may not be possible, even with multiple masks, to print via openings for these extremely small pitches using EUV scanners.

Thus, improvements are needed in the area of via manufacturing technologies.

BRIEF DESCRIPTION OF THE DRAWINGS

Figures 1A-1X illustrate portions of integrated circuit layers representing various operations in a method of self-aligned via and plug patterning using diagonal hardmasks, in accordance with an embodiment of the present invention, where:

Figure 1A illustrates a cross-sectional view of a starting structure following deposition, but prior to patterning, of a hardmask material layer formed on an interlayer dielectric (ILD) layer;

Figure 1B illustrates a cross-sectional view of the structure of Figure 1A following patterning of the hardmask layer by pitch doubling;

Figure 1C illustrates a cross-sectional view of the structure of Figure 1B following formation of a second patterned hardmask;

Figure 1D illustrates a cross-sectional view of the structure of Figure 1C following deposition of a hardmask cap layer;

Figure 1E illustrates an angled view of the structure of Figure 1D following patterning of the hardmask cap layer;

Figure 1F illustrates an angled view and corresponding plan view of the structure of Figure 1E following further patterning of the first patterned hardmask, in accordance with an embodiment of the present invention;

Figure 1G illustrates a plan view of the structure of Figure 1F following removal of the hardmask cap layer and formation of a fourth hardmask layer, in accordance with an embodiment of the present invention;

Figure 1H illustrates a plan view of the structure of Figure 1G following deposition and patterning of a first diagonal hardmask layer, in accordance with an embodiment of the present invention;

Figure 1I illustrates a plan view of the structure of Figure 1H following removal of revealed regions of the fourth hardmask layer, in accordance with an embodiment of the present invention;

Figure 1J illustrates a plan view of the structure of Figure 1I following removal of the first diagonal hardmask layer, in accordance with an embodiment of the present invention;

Figure 1K illustrates a plan view of the structure of Figure 1J following first plurality of photobucket formation, in accordance with an embodiment of the present invention;
Figure 1L illustrates a plan view and corresponding cross-sectional view (taken along the a-a' axis) of the structure of Figure 1K following photobucket exposure and development to form selected via locations, and subsequent via opening etch into the underlying ILD, in accordance with an embodiment of the present invention;

Figure 1M illustrates a plan view and corresponding cross-sectional view (taken along the b-b' axis) of the structure of Figure 1L following removal of the remaining photobuckets and subsequent formation of a fifth hardmask material, in accordance with an embodiment of the present invention;

Figure 1N illustrates a plan view and corresponding cross-sectional view (taken along the c-c' axis) of the structure of Figure 1M following removal of the remaining regions of the fourth hardmask layer, in accordance with an embodiment of the present invention;

Figure 1O illustrates a plan view and corresponding cross-sectional view (taken along the d-d' axis) of the structure of Figure 1N following second plurality of photobucket formation, in accordance with an embodiment of the present invention;

Figure 1P illustrates a plan view and corresponding cross-sectional view (taken along the e-e' axis) of the structure of Figure 1O following photobucket exposure and development to form selected via locations, and subsequent via opening etch into the underlying ILD, in accordance with an embodiment of the present invention;

Figure 1Q illustrates a plan view and corresponding cross-sectional view (taken along the f-f' axis) of the structure of Figure 1P following removal of the fifth hardmask material, trench etching, and subsequent sacrificial layer formation, in accordance with an embodiment of the present invention;

Figure 1R illustrates a plan view of the structure of Figure 1Q following deposition and patterning of a second diagonal hardmask layer, in accordance with an embodiment of the present invention;

Figure 1S illustrates a plan view and corresponding cross-sectional view (taken along the g-g' axis) of the structure of Figure 1R following removal of revealed regions of the first patterned hardmask layer, removal of the second diagonal hardmask layer, and following third plurality of photobucket formation, in accordance with an embodiment of the present invention;

Figure 1T illustrates a plan view and corresponding cross-sectional view (taken along the h-h' axis) of the structure of Figure 1S following plug location selection and trench etching;

Figure 1U illustrates a plan view and corresponding cross-sectional view (taken along the i-i' axis) of the structure of Figure 1T following removal of remaining third photobuckets and subsequent hardmask formation;

Figure 1V illustrates a plan view and corresponding cross-sectional view (taken along the j-j' axis) of the structure of Figure 1U following first patterned hardmask removal and fourth plurality of photobucket formation;

Figure 1W illustrates a plan view and corresponding cross-sectional view (taken along the k-k' axis) of the structure of Figure 1V following plug location selection and trench etching; and

Figure 1X illustrates a plan view and corresponding first cross-sectional view (taken along the l-l' axis) and second cross-sectional view (taken along the m-m' axis) of the structure of Figure 1W following removal of remaining fourth photobuckets, hardmask material layer and sacrificial material, and subsequent metal fill.

Figure 2 illustrates a computing device in accordance with one implementation of the invention.

DESCRIPTION OF THE EMBODIMENTS

Self-aligned via and plug patterning using diagonal hardmasks for improved overlay in fabricating back end of line (BEOL) interconnects is described.
In the following description, numerous specific details are set forth, such as specific integration and material regimes, in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known features, such as integrated circuit design layouts, are not described in detail in order to not unnecessarily obscure embodiments of the present invention. Furthermore, it is to be understood that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale.

One or more embodiments described herein are directed to diagonal hardmask patterning for overlay improvements, particularly in the fabrication of back end of line (BEOL) features for semiconductor integrated circuits. Applications of patterning based on diagonal hardmasks may include, but need not be limited to, implementation in 193 nm immersion lithography, extreme ultraviolet (EUV) lithography, interconnect fabrication, overlay improvements, overlay budget, plug patterning, and via patterning. Embodiments may be particularly useful for the self-aligned fabrication of BEOL structures at the 7 nm node or smaller.

In an embodiment, approaches described herein involve an integration scheme that tolerates increased via and plug overlay margin relative to existing approaches. In one such embodiment, all potential vias and plugs are pre-patterned and filled with resist. Subsequently, in a specific embodiment, EUV or 193 nm lithography is used to select certain of the via and plug locations for actual, ultimate, via and plug fabrication. In an embodiment, diagonal line patterning is used to increase nearest-neighbor distances, resulting in an increase by a factor of the square root of two in overlay budget.

More generally, one or more embodiments described herein are directed to subtractive approaches for self-aligned via and plug patterning, and structures resulting therefrom. In an embodiment, processes described herein enable realization of self-aligned metallization for back end of line feature fabrication. Overlay problems anticipated for next generation via and plug patterning may be addressed by one or more approaches described herein.

To provide context, current fabrication techniques for vias involve a "blind" process in which a via opening is patterned in a stack far above an ILD trench. The via opening pattern is then etched deep down into the trench. Overlay errors accumulate and can cause various problems, e.g., shorts to neighboring metal lines. In an example, patterning and aligning of features at less than approximately 50 nanometer pitch requires many reticles and critical alignment strategies that are otherwise extremely expensive for a semiconductor manufacturing process. In an embodiment, by contrast, approaches described herein enable fabrication of self-aligned plugs and/or vias, greatly simplifying the web of overlay errors, and leaving only one critical overlay step (Mx+1 grating). In an embodiment, then, offset due to conventional lithography/dual damascene patterning that must otherwise be tolerated, is not a factor for the resulting structures described herein.

In general, one or more embodiments are directed to an approach that employs a subtractive technique to form conductive vias and non-conductive spaces or interruptions between metals (referred to as "plugs").
Vias, by definition, are used to land on a previous layer metal pattern. In this vein, embodiments described herein enable a more robust interconnect fabrication scheme since alignment by lithography equipment is no longer relied on. Such an interconnect fabrication scheme can be used to save numerous alignment/exposures, can be used to improve electrical contact (e.g., by reducing via resistance), and can be used to reduce total process operations and processing time otherwise required for patterning such features using conventional approaches.

More specifically, one or more embodiments described herein involve the use of a subtractive method to pre-form every via and plug using the trenches already etched. An additional operation is then used to select which of the vias and plugs to retain. Such operations can be illustrated using "photobuckets," although the selection process may also be performed using a more conventional resist expose and ILD backfill approach.

In an aspect, a diagonal hardmask approach may be implemented. As an example, Figures 1A-1X illustrate portions of integrated circuit layers representing various operations in a method of self-aligned via and plug patterning using diagonal hardmasks, in accordance with an embodiment of the present invention. In each illustration at each described operation, cross-sectional and/or plan and/or angled views are shown. These views will be referred to herein as corresponding cross-sectional views, plan views and angled views.

Figure 1A illustrates a cross-sectional view of a starting structure 100 following deposition, but prior to patterning, of a first hardmask material layer 104 formed on an interlayer dielectric (ILD) layer 102, in accordance with an embodiment of the present invention. Referring to Figure 1A, a patterned mask 106 has spacers 108 formed along sidewalls thereof, on or above the first hardmask material layer 104.

Figure 1B illustrates a cross-sectional view of the structure of Figure 1A following patterning of the first hardmask layer by pitch doubling, in accordance with an embodiment of the present invention. Referring to Figure 1B, the patterned mask 106 is removed and the resulting pattern of the spacers 108 is transferred, e.g., by an etch process, to the first hardmask material layer 104 to form a first patterned hardmask 110. In one such embodiment, the first patterned hardmask 110 is formed with a grating pattern, as is depicted in Figure 1B. In an embodiment, the grating structure of the first patterned hardmask 110 is a tight pitch grating structure. In a specific such embodiment, the tight pitch is not achievable directly through conventional lithography. For example, a pattern based on conventional lithography may first be formed (mask 106), but the pitch may be halved by the use of spacer mask patterning, as is depicted in Figures 1A and 1B. Even further, although not shown, the original pitch may be quartered by a second round of spacer mask patterning. Accordingly, the grating-like pattern of the first patterned hardmask 110 of Figure 1B may have hardmask lines spaced at a constant pitch and having a constant width.

Figure 1C illustrates a cross-sectional view of the structure of Figure 1B following formation of a second patterned hardmask, in accordance with an embodiment of the present invention. Referring to Figure 1C, a second patterned hardmask 112 is formed interleaved with the first patterned hardmask 110.
In one such embodiment, the second patterned hardmask 112 is formed by deposition of a second hardmask material layer (e.g., having a composition different from the first hardmask material layer 104). The second hardmask material layer is then planarized, e.g., by chemical mechanical polishing (CMP), to provide the second patterned hardmask 112.

Figure 1D illustrates a cross-sectional view of the structure of Figure 1C following deposition of a hardmask cap layer (third hardmask layer), in accordance with an embodiment of the present invention. Referring to Figure 1D, a hardmask cap layer 114 is formed on the first patterned hardmask 110 and the second patterned hardmask 112. In one such embodiment, the material composition and etch selectivity of the hardmask cap layer 114 is different as compared to the first patterned hardmask 110 and the second patterned hardmask 112.

Figure 1E illustrates an angled view of the structure of Figure 1D following patterning of the hardmask cap layer, in accordance with an embodiment of the present invention. Referring to Figure 1E, a patterned hardmask cap layer 114 is formed on the first patterned hardmask 110 and the second patterned hardmask 112. In one such embodiment, the patterned hardmask cap layer 114 is formed with a grating pattern orthogonal to the grating pattern of the first patterned hardmask 110 and the second patterned hardmask 112, as is depicted in Figure 1E. In an embodiment, the grating structure formed by the patterned hardmask cap layer 114 is a tight pitch grating structure. In one such embodiment, the tight pitch is not achievable directly through conventional lithography. For example, a pattern based on conventional lithography may first be formed, but the pitch may be halved by the use of spacer mask patterning. Even further, the original pitch may be quartered by a second round of spacer mask patterning. Accordingly, the grating-like pattern of the patterned hardmask cap layer 114 of Figure 1E may have hardmask lines spaced at a constant pitch and having a constant width. It is to be appreciated that description herein concerning forming and patterning a hardmask layer (or hardmask cap layer, such as hardmask cap layer 114) involves, in an embodiment, mask formation above a blanket hardmask or hardmask cap layer. The mask formation may involve use of one or more layers suitable for lithographic processing. Upon patterning the one or more lithographic layers, the pattern is transferred to the hardmask or hardmask cap layer by an etch process to provide a patterned hardmask or hardmask cap layer.

Figure 1F illustrates an angled view and corresponding plan view of the structure of Figure 1E following further patterning of the first patterned hardmask, in accordance with an embodiment of the present invention. Referring to Figure 1F, using the patterned hardmask cap layer 114 as a mask, the first patterned hardmask 110 is further patterned to form first patterned hardmask 116. The second patterned hardmask 112 is not further patterned in this process. In an embodiment, the first patterned hardmask 110 is patterned to a depth sufficient to expose regions of ILD layer 102, as is depicted in Figure 1F.

Figure 1G illustrates a plan view of the structure of Figure 1F following removal of the hardmask cap layer and formation of a fourth hardmask layer, in accordance with an embodiment of the present invention.
Referring to Figure 1G, the hardmask cap layer (third hardmask layer) 114 is removed, e.g., by a wet etch process, dry etch process, or CMP process. A fourth hardmask layer 118 is formed on the resulting structure by, in one embodiment, a deposition and CMP process. In one such embodiment, the fourth hardmask layer 118 is formed by deposition of a material layer different from the material of the second patterned hardmask layer 112 and the first patterned hardmask layer 116.

Figure 1H illustrates a plan view of the structure of Figure 1G following deposition and patterning of a first diagonal hardmask layer, in accordance with an embodiment of the present invention. Referring to Figure 1H, a first diagonal hardmask layer 120 is formed on the fourth hardmask layer 118, the second patterned hardmask layer 112, and the first patterned hardmask layer 116 arrangement of Figure 1G. In an embodiment, the first diagonal hardmask layer 120 has a pattern essentially or perfectly symmetrically diagonal, e.g., at 45 degrees relative to the grating structure of the second patterned hardmask layer 112, to cover alternate lines of the fourth hardmask layer 118. In an embodiment, the diagonal pattern of the first diagonal hardmask layer 120 is printed at minimum critical dimension (CD), i.e., without the use of pitch halving or pitch quartering. It is to be appreciated that the individual lines may be printed even larger than minimum CD so long as some area of adjacent rows of the fourth hardmask layer 118 remains revealed. Regardless, the grating-like pattern of the first diagonal hardmask layer 120 of Figure 1H may have hardmask lines spaced at a constant pitch and having a constant width. It is to be appreciated that description herein concerning forming and patterning a diagonal hardmask layer (such as the first diagonal hardmask layer 120) involves, in an embodiment, mask formation above a blanket hardmask layer. The mask formation may involve use of one or more layers suitable for lithographic processing. Upon patterning the one or more lithographic layers, the pattern is transferred to the hardmask layer by an etch process to provide a diagonally patterned hardmask layer. In a particular embodiment, the first diagonal hardmask layer is a carbon-based hardmask layer.

Figure 1I illustrates a plan view of the structure of Figure 1H following removal of revealed regions of the fourth hardmask layer, in accordance with an embodiment of the present invention. Referring to Figure 1I, using the first diagonal hardmask layer 120 as a mask, revealed regions of the fourth hardmask layer 118 are removed. In one such embodiment, the revealed regions of the fourth hardmask layer 118 are removed by an isotropic etch process (e.g., a wet etch process or non-anisotropic plasma etch process) such that any partial exposure leads to full removal of the partially revealed block of fourth hardmask material. In one embodiment, regions where the fourth hardmask layer 118 has been removed reveal portions of the ILD layer 102, as is depicted in Figure 1I.

Figure 1J illustrates a plan view of the structure of Figure 1I following removal of the first diagonal hardmask layer, in accordance with an embodiment of the present invention. Referring to Figure 1J, the first diagonal hardmask layer 120 is removed to reveal the first patterned hardmask layer 116 and the second patterned hardmask layer 112.
Also revealed are the portions of the fourth hardmask layer 118 that were protected from isotropic etching by the first diagonal hardmask layer 120. Accordingly, along each alternate row or down each alternate column of the resulting grid-like pattern of Figure 1J, a region of the fourth hardmask layer 118 is alternated with a revealed region of the underlying ILD layer 102. That is, the result is a checkerboard pattern of ILD layer 102 regions and fourth hardmask layer 118 regions. As such, an increase by a factor of the square root of two is achieved in the nearest neighbor distance 122 (shown as distance in direction b). In a particular embodiment, the first diagonal hardmask layer 120 is a carbon-based hardmask material and is removed with a plasma ashing process.
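The factor of the square root of two can be made explicit with a short calculation. This is a hedged reading of the geometry (the identification of the grid spacing with the line width of the grating follows the claims rather than an explicit derivation in the text): if the full set of candidate locations sits on a square grid with center-to-center spacing s, thinning to a checkerboard leaves nearest neighbors only along the diagonal, so that

```latex
d_{\text{checkerboard}} = \sqrt{s^{2} + s^{2}} = \sqrt{2}\, s
```

which corresponds to an overlay budget increased by the same factor of sqrt(2) relative to the un-thinned spacing s.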
Figure 1K illustrates a plan view of the structure of Figure 1J following first plurality of photobucket formation, in accordance with an embodiment of the present invention. Referring to Figure 1K, a first plurality of photobuckets 124 is formed in openings above the ILD layer 102 such that no portion of the ILD layer 102 remains revealed. The photobuckets 124, at this stage, represent a first half of all possible via locations in a resulting metallization layer.

Figure 1L illustrates a plan view and corresponding cross-sectional view (taken along the a-a' axis) of the structure of Figure 1K following photobucket exposure and development to form selected via locations, and subsequent via opening etch into the underlying ILD, in accordance with an embodiment of the present invention. Referring to Figure 1L, select photobuckets 124 are exposed and removed to provide selected via locations 126. The via locations 126 are subjected to a selective etch process, such as a selective plasma etch process, to extend via openings into the underlying ILD layer 102, forming patterned ILD layer 102'. The etching is selective to remaining, unexposed, photobuckets 124, selective to the first patterned hardmask layer 116, selective to the second patterned hardmask layer 112, and selective to the fourth hardmask layer 118.

Figure 1M illustrates a plan view and corresponding cross-sectional view (taken along the b-b' axis) of the structure of Figure 1L following removal of the remaining photobuckets and subsequent formation of a fifth hardmask material, in accordance with an embodiment of the present invention. Referring to Figure 1M, the remaining of the first plurality of photobuckets 124 are removed, e.g., by a selective etch or ash process. All openings revealed (e.g., openings formed upon removal of photobuckets 124 along with the via locations 126) are then filled with a hardmask material 128, such as a carbon-based hardmask material.

Figure 1N illustrates a plan view and corresponding cross-sectional view (taken along the c-c' axis) of the structure of Figure 1M following removal of the remaining regions of the fourth hardmask layer, in accordance with an embodiment of the present invention. Referring to Figure 1N, all remaining regions of the fourth hardmask layer 118 are removed, e.g., by a selective etch or ash process. In one embodiment, regions where the remaining fourth hardmask layer 118 has been removed reveal portions of the patterned ILD layer 102', as is depicted in Figure 1N.

Figure 1O illustrates a plan view and corresponding cross-sectional view (taken along the d-d' axis) of the structure of Figure 1N following second plurality of photobucket formation, in accordance with an embodiment of the present invention. Referring to Figure 1O, a second plurality of photobuckets 130 is formed in openings above the patterned ILD layer 102' such that no portion of the patterned ILD layer 102' remains revealed. The photobuckets 130, at this stage, represent a second half of all possible via locations in a resulting metallization layer.

Figure 1P illustrates a plan view and corresponding cross-sectional view (taken along the e-e' axis) of the structure of Figure 1O following photobucket exposure and development to form selected via locations, and subsequent via opening etch into the underlying ILD, in accordance with an embodiment of the present invention. Referring to Figure 1P, select photobuckets 130 are exposed and removed to provide selected via locations 132. The via locations 132 are subjected to a selective etch process, such as a selective plasma etch process, to extend via openings into the underlying patterned ILD layer 102', forming further patterned ILD layer 102''. The etching is selective to remaining, unexposed, photobuckets 130, selective to the first patterned hardmask layer 116, selective to the second patterned hardmask layer 112, and selective to the hardmask material 128.

Figure 1Q illustrates a plan view and corresponding cross-sectional view (taken along the f-f' axis) of the structure of Figure 1P following removal of the fifth hardmask material, trench etching, and subsequent sacrificial layer formation, in accordance with an embodiment of the present invention. Referring to Figure 1Q, hardmask material layer 128 is removed, revealing all of the original first and second halves of the potential via locations. The patterned ILD layer 102'' is then patterned to form ILD layer 102''', which includes the via openings 132 and 126, along with trenches 136 where via openings were not formed. The trenches 136 will ultimately be used for metal line fabrication, as is described below. Upon completion of the trench etch, all openings (including via openings 126 and 132 and trenches 136) are filled with a sacrificial material 134. In one embodiment, the hardmask material layer 128 is a carbon-based hardmask material and is removed with a plasma ashing process. In one embodiment, the sacrificial material 134 is flowable organic or inorganic material such as a sacrificial light absorbing material (SLAM), as is known in the art. The sacrificial material 134 is either formed to, or planarized to, the level of the first patterned hardmask 116 and the second patterned hardmask 112, as is depicted in Figure 1Q.

Figure 1R illustrates a plan view of the structure of Figure 1Q following deposition and patterning of a second diagonal hardmask layer, in accordance with an embodiment of the present invention. Referring to Figure 1R, a second diagonal hardmask layer 138 is formed on the sacrificial material 134, the second patterned hardmask layer 112, and the first patterned hardmask layer 116 arrangement of Figure 1Q. In an embodiment, the second diagonal hardmask layer 138 has a pattern essentially or perfectly symmetrically diagonal, e.g., at 45 degrees relative to the grating structure of the second patterned hardmask layer 112, to cover alternate lines of the first patterned hardmask layer 116. In an embodiment, the diagonal pattern of the second diagonal hardmask layer 138 is printed at minimum critical dimension (CD), i.e., without the use of pitch halving or pitch quartering.
It is to be appreciated that the individual lines may be printed even larger than minimum CD so long as some area of adjacent rows of the first patterned hardmask layer 116 remains revealed. Regardless, the grating-like pattern of the second diagonal hardmask layer 138 of Figure 1R may have hardmask lines spaced at a constant pitch and having a constant width. It is to be appreciated that description herein concerning forming and patterning a diagonal hardmask layer (such as the second diagonal hardmask layer 138) involves, in an embodiment, mask formation above a blanket hardmask layer. The mask formation may involve use of one or more layers suitable for lithographic processing. Upon patterning the one or more lithographic layers, the pattern is transferred to the hardmask layer by an etch process to provide a diagonally patterned hardmask layer. In a particular embodiment, the second diagonal hardmask layer 138 is a carbon-based hardmask layer.

Figure 1S illustrates a plan view and corresponding cross-sectional view (taken along the g-g' axis) of the structure of Figure 1R following removal of revealed regions of the first patterned hardmask layer, removal of the second diagonal hardmask layer, and following third plurality of photobucket formation, in accordance with an embodiment of the present invention. Referring to Figure 1S, using the second diagonal hardmask layer 138 as a mask, revealed regions of the first patterned hardmask layer 116 are removed. In one such embodiment, the revealed regions of the first patterned hardmask layer 116 are removed by an isotropic etch process (e.g., a wet etch process or non-anisotropic plasma etch process) such that any partial revealing leads to full removal of the partially revealed block of the first patterned hardmask layer 116. Referring again to Figure 1S, the second diagonal hardmask layer 138 is removed to reveal the sacrificial material 134 and the second patterned hardmask layer 112. Also revealed are the portions of the first patterned hardmask layer 116 that were protected from isotropic etching by the second diagonal hardmask layer 138. In a particular embodiment, the second diagonal hardmask layer 138 is a carbon-based hardmask material and is removed with a plasma ashing process. Referring again to Figure 1S, a third plurality of photobuckets 140 is formed in the resulting openings above the patterned ILD layer 102''' such that no portion of the patterned ILD layer 102''' remains revealed. The photobuckets 140, at this stage, represent a first half of all possible plug locations in a resulting metallization layer. Accordingly, along each alternate row or down each alternate column of the resulting grid-like pattern of Figure 1S, a region of the first patterned hardmask layer 116 is alternated with a photobucket 140. That is, the result is a checkerboard pattern of photobucket 140 regions and first patterned hardmask layer 116 regions. As such, an increase by a factor of the square root of two is achieved in the nearest neighbor distance 142 (shown as distance in direction b).

Figure 1T illustrates a plan view and corresponding cross-sectional view (taken along the h-h' axis) of the structure of Figure 1S following plug location selection and trench etching, in accordance with an embodiment of the present invention. Referring to Figure 1T, the photobuckets 140 from Figure 1S are removed from locations 142 where plugs will not be formed. In locations where plugs are selected to be formed, the photobuckets 140 are retained.
In one embodiment, in order to form locations 142 where plugs will not be formed, lithography is used to expose the corresponding photobuckets 140. The exposed photobuckets may then be removed by a developer. The patterned ILD layer 102''' is then patterned to form ILD layer 102'''', which includes trenches 144 formed at locations 142. The trenches 144 will ultimately be used for metal line fabrication, as is described below.

Figure 1U illustrates a plan view and corresponding cross-sectional view (taken along the i-i' axis) of the structure of Figure 1T following removal of remaining third photobuckets and subsequent hardmask formation, in accordance with an embodiment of the present invention. Referring to Figure 1U, all remaining photobuckets 140 are removed, e.g., by an ashing process. Upon removal of all remaining photobuckets 140, all openings (including trenches 144) are filled with a hardmask material layer 146. In one embodiment, the hardmask material layer 146 is a carbon-based hardmask material.

Figure 1V illustrates a plan view and corresponding cross-sectional view (taken along the j-j' axis) of the structure of Figure 1U following first patterned hardmask removal and fourth plurality of photobucket formation, in accordance with an embodiment of the present invention. Referring to Figure 1V, the first patterned hardmask layer 116 is removed (e.g., by a selective dry or wet etch process), and a fourth plurality of photobuckets 148 is formed in the resulting openings above the patterned ILD layer 102'''' such that no portion of the patterned ILD layer 102'''' remains revealed. The photobuckets 148, at this stage, represent a second half of all possible plug locations in a resulting metallization layer.

Figure 1W illustrates a plan view and corresponding cross-sectional view (taken along the k-k' axis) of the structure of Figure 1V following plug location selection and trench etching, in accordance with an embodiment of the present invention. Referring to Figure 1W, the photobuckets 148 from Figure 1V are removed from locations 150 where plugs will not be formed. In locations where plugs are selected to be formed, the photobuckets 148 are retained. In one embodiment, in order to form locations 150 where plugs will not be formed, lithography is used to expose the corresponding photobuckets 148. The exposed photobuckets may then be removed by a developer. The patterned ILD layer 102'''' is then patterned to form ILD layer 102''''', which includes trenches 152 formed at locations 150. The trenches 152 will ultimately be used for metal line fabrication, as is described below.

Figure 1X illustrates a plan view and corresponding first cross-sectional view (taken along the l-l' axis) and second cross-sectional view (taken along the m-m' axis) of the structure of Figure 1W following removal of remaining fourth photobuckets, hardmask material layer and sacrificial material, and subsequent metal fill, in accordance with an embodiment of the present invention. Referring to Figure 1X, remaining fourth photobuckets 148, hardmask material layer 146 and sacrificial material 134 are removed. In one such embodiment, the hardmask material layer 146 is a carbon-based hardmask material, and both the hardmask material layer 146 and the remaining fourth photobuckets 148 are removed with a plasma ashing process. In one embodiment, the sacrificial material 134 is removed in a different etch process.
Referring to the plan view of Figure 1X, metallization 154 is formed interleaved and co-planar with the second patterned hardmask layer 112. Referring to the first cross-sectional view taken along the l-l' axis of the plan view of Figure 1X, the metallization 154 fills trenches 144 and 152 (i.e., as corresponding to the cross-sectional view taken along the k-k' axis of Figure 1W) formed in patterned interlayer dielectric layer 102'''''. Referring to the second cross-sectional view taken along the m-m' axis of the plan view of Figure 1X, the metallization 154 also fills trenches 136 and via openings 132 and 126 (i.e., as corresponding to the cross-sectional view taken along the f-f' axis of Figure 1Q) formed in patterned interlayer dielectric layer 102'''''. Thus, the metallization 154 is used to form a plurality of conductive lines and conductive vias in an interlayer dielectric layer for a metallization structure, such as a BEOL metallization structure.

In an embodiment, the metallization 154 is formed by a metal fill and polish back process. In one such embodiment, the second patterned hardmask layer 112 is reduced in thickness during the polish back process. In a particular such embodiment, although reduced in thickness, a portion of the second patterned hardmask 112 is retained, as is depicted in Figure 1X. Accordingly, metal features 156 that are neither conductive lines nor conductive vias formed in the patterned interlayer dielectric layer 102''''' remain interleaved with the second patterned hardmask layer and on or above (but not in) the patterned interlayer dielectric layer 102''''', as is also depicted in Figure 1X. In an alternative particular embodiment (not shown), the second patterned hardmask 112 is entirely removed during the polish back. Accordingly, metal features 156 that are neither conductive lines nor conductive vias are not retained in the final structure. In either case, the described structures for Figure 1X may subsequently be used as a foundation for forming subsequent metal line/via and ILD layers. Alternatively, the structure of Figure 1X may represent the final metal interconnect layer in an integrated circuit.

It is to be appreciated that the above process operations may be practiced in alternative sequences, not every operation need be performed and/or additional process operations may be performed. Referring again to Figure 1X, metallization layer fabrication by using a diagonal hardmask may be complete at this stage. A next layer fabricated in a like manner likely requires initiation of the entire process once again. Alternatively, other approaches may be used at this stage to provide additional interconnect layers, such as conventional dual or single damascene approaches.

In an embodiment, the term "photobucket" as used herein involves use of an ultrafast photoresist or ebeam resist or other photosensitive material as formed in etched openings. In one such embodiment, a thermal reflow of a polymer into the openings is used following a spin coat application. In one embodiment, the fast photoresist is fabricated by removing a quencher from an existing photoresist material. In another embodiment, the photobuckets are formed by an etch-back process and/or a lithography/shrink/etch process. It is to be understood that the photobuckets need not be filled with actual photoresist, so long as the material acts as a photosensitive switch. In one embodiment, lithography is used to expose the corresponding photobuckets that are selected for removal.
However, the lithographic constraints may be relaxed and misalignment tolerance may be high since the photobuckets are surrounded by non-photolyzable materials. Furthermore, in an embodiment, instead of exposing at, e.g., 30 mJ/cm2, such photobuckets might be exposed at, e.g., 3 mJ/cm2. Normally this would result in very poor critical dimension (CD) control and roughness. But in this case, the CD and roughness control will be defined by the photobuckets, which can be very well controlled and defined. Thus, the photobucket approach may be used to circumvent imaging/dose tradeoff which limits the throughput of next generation lithographic processes. In one embodiment, the photobuckets are subject to exposure of extreme ultraviolet (EUV) light in order to expose the photobuckets, where in a particular embodiment, EUV exposure is in the range of 5-15 nanometers.

In an embodiment, the term "grating structure" for metal lines, ILD lines or hardmask lines is used to refer to a tight pitch grating structure. In one such embodiment, the tight pitch is not achievable directly through conventional lithography. For example, a pattern based on conventional lithography may first be formed, but the pitch may be halved by the use of spacer mask patterning, as is known in the art. Even further, the original pitch may be quartered by a second round of spacer mask patterning. Accordingly, the grating-like patterns described above may have metal lines, ILD lines or hardmask lines spaced at a constant pitch and having a constant width. The pattern may be fabricated by a pitch halving or pitch quartering approach.

In an embodiment, as used throughout the present description, interlayer dielectric (ILD) material is composed of or includes a layer of a dielectric or insulating material. Examples of suitable dielectric materials include, but are not limited to, oxides of silicon (e.g., silicon dioxide (SiO2)), doped oxides of silicon, fluorinated oxides of silicon, carbon doped oxides of silicon, various low-k dielectric materials known in the arts, and combinations thereof. The interlayer dielectric material may be formed by conventional techniques, such as, for example, chemical vapor deposition (CVD), physical vapor deposition (PVD), or by other deposition methods.

In an embodiment, as is also used throughout the present description, interconnect material (e.g., metal lines and/or vias) is composed of one or more metal or other conductive structures. A common example is the use of copper lines and structures that may or may not include barrier layers between the copper and surrounding ILD material. As used herein, the term metal includes alloys, stacks, and other combinations of multiple metals. For example, the metal interconnect lines may include barrier layers, stacks of different metals or alloys, etc. The interconnect lines are also sometimes referred to in the arts as traces, wires, lines, metal, or simply interconnect.

In an embodiment, as is also used throughout the present description, plug and/or cap and/or hardmask materials are composed of dielectric materials different from the interlayer dielectric material. In one embodiment, these materials are sacrificial, while interlayer dielectric materials are preserved at least somewhat in a final structure. In some embodiments, a plug and/or cap and/or hardmask material includes a layer of a nitride of silicon (e.g., silicon nitride) or a layer of an oxide of silicon, or both, or a combination thereof.
Other suitable materials may include carbon-based materials. In another embodiment, a plug and/or cap and/or hardmask material includes a metal species. For example, a hardmask or other overlying material may include a layer of a nitride of titanium or another metal (e.g., titanium nitride). Potentially lesser amounts of other materials, such as oxygen, may be included in one or more of these layers. Alternatively, other plug and/or cap and/or hardmask material layers known in the arts may be used depending upon the particular implementation. The plug and/or cap and/or hardmask material layers maybe formed by CVD, PVD, or by other deposition methods.It is to be appreciated that the layers and materials described above are typically formed on or above an underlying semiconductor substrate or structure, such as underlying device layer(s) of an integrated circuit. In an embodiment, an underlying semiconductor substrate represents a general workpiece object used to manufacture integrated circuits. Thesemiconductor substrate often includes a wafer or other piece of silicon or another semiconductor material. Suitable semiconductor substrates include, but are not limited to, single crystal silicon, polycrystalline silicon and silicon on insulator (SOI), as well as similar substrates formed of other semiconductor materials. The semiconductor substrate, depending on the stage of manufacture, often includes transistors, integrated circuitry, and the like. The substrate may also include semiconductor materials, metals, dielectrics, dopants, and other materials commonly found in semiconductor substrates. Furthermore, the structures depicted above may be fabricated on underlying lower level back end of line (BEOL) interconnect layers.Resulting structures may enable fabrication of vias that are directly centered on underlying metal lines. That is, the vias may be wider than, narrower than, or the same thickness as the underlying metal lines, e.g., due to non-perfect selective etch processing. Nonetheless, in an embodiment, the centers of the vias are directly aligned (match up) with the centers of the metal lines. Furthermore, the ILD used to select certain of the plugs and vias will likely be very different from the primary ILD and will be perfectly self-aligned in both directions. As such, in an embodiment, offset due to conventional lithograph/dual damascene patterning that must otherwise be tolerated, is not a factor for the resulting structures described herein.Embodiments disclosed herein may be used to manufacture a wide variety of different types of integrated circuits and/or microelectronic devices. Examples of such integrated circuits include, but are not limited to, processors, chipset components, graphics processors, digital signal processors, micro-controllers, and the like. In other embodiments, semiconductor memory may be manufactured. Moreover, the integrated circuits or other microelectronic devices may be used in a wide variety of electronic devices known in the arts. For example, in computer systems (e.g., desktop, laptop, server), cellular phones, personal electronics, etc. The integrated circuits may be coupled with a bus and other components in the systems. For example, a processor may be coupled by one or more buses to a memory, a chipset, etc. Each of the processor, the memory, and the chipset, may potentially be manufactured using the approaches disclosed herein.Figure 2 illustrates a computing device 200 in accordance with one implementation of the invention. 
The computing device 200 houses a board 202. The board 202 may include a number of components, including but not limited to a processor 204 and at least one communication chip 206. The processor 204 is physically and electrically coupled to the board 202. In some implementations the at least one communication chip 206 is also physically and electrically coupled to the board 202. In further implementations, the communication chip 206 is part of the processor 204.Depending on its applications, computing device 200 may include other components that may or may not be physically and electrically coupled to the board 202. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).The communication chip 206 enables wireless communications for the transfer of data to and from the computing device 200. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non- solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 206 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev- DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 200 may include a plurality of communication chips 206. For instance, a first communication chip 206 may be dedicated to shorter range wirelesscommunications such as Wi-Fi and Bluetooth and a second communication chip 206 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.The processor 204 of the computing device 200 includes an integrated circuit die packaged within the processor 204. In some implementations of the invention, the integrated circuit die of the processor includes one or more structures, such as self-aligned vias and plugs, built in accordance with implementations of the invention. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.The communication chip 206 also includes an integrated circuit die packaged within the communication chip 206. 
In accordance with another implementation of the invention, the integrated circuit die of the communication chip includes one or more structures, such as self- aligned vias and plugs, built in accordance with implementations of the invention.In further implementations, another component housed within the computing device 200 may contain an integrated circuit die that includes one or more structures, such as self-aligned vias and plugs, built in accordance with implementations of the invention.In various implementations, the computing device 200 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 200 may be any other electronic device that processes data.Thus, embodiments of the present invention include self- aligned via and plug patterning using diagonal hardmasks for improved overlay in fabricating back end of line (BEOL) interconnects.In an embodiment, an interconnect structure for an integrated circuit includes an interlayer dielectric layer disposed above a substrate. A grating structure is disposed above the interlayer dielectric layer and includes co-planar alternating dielectric hardmask lines and conductive lines. One or more of the conductive lines extends into the interlayer dielectric layer, and one or more of the conductive lines does not extend into the interlayer dielectric layer.In one embodiment, one of the one or more of the conductive lines that extends into the interlayer dielectric layer extends entirely through the interlayer dielectric layer to provide a conductive via to an underlying metallization layer disposed between the substrate and the interlayer dielectric layer. In one embodiment, one of the one or more of the conductive lines that extends into the interlayer dielectric layer extends only partially into the interlayer dielectric layer to provide a conductive metal line for a metallization layer including the interlayer dielectric layer.In one embodiment, the grating structure is disposed on the interlayer dielectric layer. In an embodiment, a method of fabricating an interconnect structure for an integrated circuit involves forming a first hardmask layer above an interlayer dielectric layer disposed above a substrate. The first hardmask layer includes a plurality of first hardmask lines having a first grating in a first direction and comprising one or more sacrificial materials interleaved with the first grating. The method also involves forming a second hardmask layer above the first hardmask layer. The second hardmask layer includes a plurality of second hardmask lines having a second grating in a second direction, diagonal to the first direction. The method also involves, using the second hardmask layer as a mask, etching the first hardmask layer to form a patterned first hardmask layer. 
The etching involves removing a portion of the one or more sacrificial materials.In one embodiment, forming the first hardmask layer involves forming the plurality of first hardmask lines using a pitch-halving or pitch-quartering patterning process relative to a minimum critical dimension (CD), and forming the second hardmask layer involves forming the plurality of second hardmask lines at the minimum CD.In one embodiment, forming the second hardmask layer involves forming the plurality of second hardmask lines having the second grating 45 degrees to the first direction.In one embodiment, the method further involves removing the second hardmask layer subsequent to etching the first hardmask layer.In one embodiment, the method further involves, subsequent to removing the second hardmask layer, forming a plurality of photobuckets in the patterned first hardmask, and exposing, developing and removing fewer than all of the plurality of photobuckets to reveal portions of the interlayer dielectric layer, and etching entirely through the revealed portions of the interlayer dielectric layer to form via openings, and forming metal vias in the via openings.In one embodiment, the method further involves, subsequent to removing the second hardmask layer, forming a plurality of photobuckets in the patterned first hardmask, and exposing, developing and removing fewer than all of the plurality of photobuckets to reveal portions of the interlayer dielectric layer, and etching only partially through the revealed portions of the interlayer dielectric layer to form trenches, and forming metal lines in the trenches.In one embodiment, the plurality of second hardmask lines is composed of a carbon- based material, and removing the second hardmask layer involves using an ashing process. In an embodiment, a method of fabricating an interconnect structure for an integrated circuit involves forming a plurality of hardmask lines having a grating pattern above an interlayer dielectric layer disposed above a substrate. The method also involves forming a first plurality of photobuckets interleaved with the plurality of hardmask lines, the first plurality of photobuckets corresponding to a first half of all possible via locations in a metallization layer of the interconnect structure. The method also involves exposing, developing and removing fewer than all of the first plurality of photobuckets to reveal first portions of the interlayer dielectric layer. 
The method also involves etching entirely through the revealed first portions of the interlayer dielectric layer to form first via openings in the interlayer dielectric layer.In one embodiment, the method further involves removing all remaining of the first plurality of photobuckets and, subsequently, forming a second plurality of photobuckets interleaved with the plurality of hardmask lines, the second plurality of photobuckets corresponding to a second half of all possible via locations in the metallization layer of the interconnect structure, and exposing, developing and removing fewer than all of the second plurality of photobuckets to reveal second portions of the interlayer dielectric layer, and etching entirely through the revealed second portions of the interlayer dielectric layer to form second via openings in the interlayer dielectric layer.In one embodiment, the method further involves removing all remaining of the second plurality of photobuckets and, subsequently, forming metal vias in the first and second via openings of the interlayer dielectric layer.In one embodiment, forming the first plurality of photobuckets interleaved with the plurality of hardmask lines involves forming each of the first plurality of photobuckets to have a nearest neighbor distance of a factor of the square root of two multiplied by a line width of the grating pattern of the plurality of hardmask lines.In one embodiment, exposing, developing and removing fewer than all of the first plurality of photobuckets involves exposing to extreme ultra-violet (EUV) irradiation.In an embodiment, a method of fabricating an interconnect structure for an integrated circuit involves forming a plurality of hardmask lines having a grating pattern above an interlayer dielectric layer disposed above a substrate. The method also involves forming a first plurality of photobuckets interleaved with the plurality of hardmask lines, the first plurality of photobuckets corresponding to a first half of all possible plug locations in a metallization layer of the interconnect structure. The method also involves exposing, developing and removing fewer than all of the first plurality of photobuckets to reveal first portions of the interlayer dielectric layer. The method also involves etching only partially through the revealed first portions of the interlayer dielectric layer to form first trenches in the interlayer dielectric layer. 
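As a quick illustration of the nearest-neighbor spacing just recited, the following sketch (not part of the original disclosure; the 32 nm value is an assumed example line width) selects every other site of a square grid of possible via locations, a checkerboard pattern, and confirms that the closest pair of selected photobuckets sits at the square root of two times the grating line width.

```python
# Illustrative check (assumed example values, not from the original disclosure):
# photobuckets occupying half of all possible via locations in a checkerboard
# pattern have a nearest-neighbor distance of sqrt(2) times the grating line width.
import math

width = 32.0  # assumed grating line width / grid pitch in nm (example value only)

# All possible via locations on a small square grid of grating intersections.
sites = [(x, y) for x in range(6) for y in range(6)]

# "First half" of all possible via locations: checkerboard selection.
first_half = [(x, y) for (x, y) in sites if (x + y) % 2 == 0]

def min_pair_distance(points, pitch):
    dists = [math.hypot((x1 - x2) * pitch, (y1 - y2) * pitch)
             for i, (x1, y1) in enumerate(points)
             for (x2, y2) in points[i + 1:]]
    return min(dists)

print(min_pair_distance(first_half, width))  # ~45.25 nm
print(math.sqrt(2) * width)                  # ~45.25 nm
```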
In one embodiment, the method further involves removing all remaining of the first plurality of photobuckets and, subsequently, forming a second plurality of photobuckets interleaved with the plurality of hardmask lines, the second plurality of photobuckets corresponding to a second half of all possible plug locations in the metallization layer of the interconnect structure, and exposing, developing and removing fewer than all of the second plurality of photobuckets to reveal second portions of the interlayer dielectric layer, and etching only partially through the revealed second portions of the interlayer dielectric layer to form second trenches in the interlayer dielectric layer.In one embodiment, the method further involves removing all remaining of the second plurality of photobuckets and, subsequently, forming metal lines in the first and second trenches of the interlayer dielectric layer.In one embodiment, forming the first plurality of photobuckets interleaved with the plurality of hardmask lines involves forming each of the first plurality of photobuckets to have a nearest neighbor distance of a factor of the square root of two multiplied by a line width of the grating pattern of the plurality of hardmask lines.In one embodiment, exposing, developing and removing fewer than all of the first plurality of photobuckets involves exposing to extreme ultra-violet (EUV) irradiation. |
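The two-pass photobucket flow recited above can also be summarized schematically. The sketch below is only an illustration of the bookkeeping (the grid size and the desired via pattern are made-up examples, not from the original disclosure): the possible via locations are split into two interleaved halves, and in each pass only the buckets at the desired locations within that half are exposed and removed.

```python
# Schematic illustration of the two-pass photobucket selection (made-up example
# grid and via pattern): each pass exposes fewer than all of the photobuckets in
# one interleaved half of the possible via locations.
desired_vias = {(0, 0), (1, 2), (2, 1), (3, 3)}  # hypothetical target locations

all_sites = {(x, y) for x in range(4) for y in range(4)}
first_half = {s for s in all_sites if sum(s) % 2 == 0}   # pass-1 photobuckets
second_half = all_sites - first_half                      # pass-2 photobuckets

expose_pass_1 = desired_vias & first_half    # buckets exposed and removed first
expose_pass_2 = desired_vias & second_half   # buckets exposed and removed second

assert expose_pass_1 | expose_pass_2 == desired_vias
print(sorted(expose_pass_1), sorted(expose_pass_2))
```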
A transistor (105) is formed on a semiconductor substrate (100). The transistor includes: a transistor gate (108) and an extended drain (107) between the transistor gate and a transistor drain contact; a transistor source contact (120) coupled to a source contact probe pad (128); a first dielectric layer (110) covering the semiconductor substrate and the transistor gate; and a source field plate (122) on the first dielectric layer and coupled to a source field plate probe pad spaced from and electrically isolated from the source contact probe pad, the source field plate being capacitively coupled through the first dielectric layer to a first portion (109) of the extended drain. |
1. An apparatus comprising: a transistor formed on a semiconductor substrate, the transistor comprising: a transistor gate and an extended drain between the transistor gate and a transistor drain contact; a transistor source contact coupled to a source contact probe pad; a first dielectric layer covering the semiconductor substrate and the transistor gate; a source field plate on the first dielectric layer and coupled to a source field plate probe pad spaced from and electrically isolated from the source contact probe pad; and the source field plate capacitively coupled to a first portion of the extended drain through the first dielectric layer. 2. The apparatus of claim 1, wherein the first dielectric layer comprises a first intermetal dielectric layer/pre-metal dielectric layer dielectric stack. 3. The apparatus of claim 2, the transistor further comprising: a gate field plate over the pre-metal dielectric layer; and the gate field plate capacitively coupled to a second portion of the extended drain through the pre-metal dielectric layer, the second portion being located between the transistor gate and the first portion of the extended drain. 4. The apparatus of claim 3, wherein the gate field plate is coupled to the transistor gate in an opening through the pre-metal dielectric layer. 5. The apparatus of claim 3, wherein the gate field plate is coupled to a gate field plate probe pad and the transistor gate is coupled to a gate probe pad. 6. The apparatus of claim 1, wherein the transistor comprises a gallium nitride (GaN) transistor. 7. The apparatus of claim 1, wherein the transistor comprises a gallium oxide (Ga2O3) transistor. 8. The apparatus of claim 1, wherein the transistor comprises a silicon drain extended metal oxide semiconductor (DEMOS) transistor. 9. The apparatus of claim 1, wherein the source field plate of the transistor further comprises: a first source field plate capacitively coupled to the first portion of the extended drain adjacent to the transistor gate through a first intermetal dielectric layer/pre-metal dielectric layer dielectric stack; the first source field plate coupled to a first source field plate probe pad; a second source field plate capacitively coupled, through a second intermetal dielectric layer/first intermetal dielectric layer/pre-metal dielectric layer dielectric stack, to a third portion of the extended drain between the first portion of the extended drain and the transistor drain contact; and the second source field plate coupled to a second source field plate probe pad. 10. The apparatus of claim 9, the transistor further comprising a gate field plate over the pre-metal dielectric layer, the gate field plate capacitively coupled to a second portion of the extended drain through the pre-metal dielectric layer, the second portion of the extended drain positioned between the gate and the first portion of the extended drain. 11. The apparatus of claim 10, wherein the gate field plate is coupled to the transistor gate in an opening through the pre-metal dielectric layer. 12. The apparatus of claim 10, wherein the gate field plate is coupled to a gate field plate probe pad and the transistor gate is coupled to a gate probe pad spaced from and electrically isolated from the gate field plate probe pad. 13. An apparatus comprising: a transistor formed on a semiconductor substrate and having a transistor gate and an extended drain between the transistor gate and a transistor drain contact, the transistor further comprising: a transistor source contact; a pre-metal dielectric layer covering the semiconductor substrate and a portion of the transistor gate; a gate field plate overlying the pre-metal dielectric layer, covering a portion of the transistor gate, and capacitively coupled through the pre-metal dielectric layer to a first portion of the extended drain adjacent the transistor gate; a first intermetal dielectric layer covering the pre-metal dielectric layer and the gate field plate; a source field plate over the first intermetal dielectric layer, the source field plate covering an end of the gate field plate and covering a second portion of the extended drain between the first portion of the extended drain and the transistor drain contact, the source field plate capacitively coupled to the second portion of the extended drain through the first intermetal dielectric layer/pre-metal dielectric layer dielectric stack; the transistor source contact coupled to a transistor source probe pad; and the source field plate coupled to a source field plate probe pad. 14. The apparatus of claim 13, wherein the source field plate of the transistor is a first source field plate and the apparatus further comprises: a second intermetal dielectric layer overlying the first intermetal dielectric layer and the first source field plate; and a second source field plate over the second intermetal dielectric layer and capacitively coupled, through the second intermetal dielectric layer/first intermetal dielectric layer/pre-metal dielectric layer dielectric stack, to a third portion of the extended drain between the second portion of the extended drain and the drain contact. 15. A method comprising: applying a first voltage stress to a dielectric between a source field plate and an extended drain of a transistor while applying a second voltage, different from the first voltage, to a source of the transistor; measuring a first leakage current between the source field plate and a drain of the transistor; applying a second voltage stress to a dielectric between a gate field plate and the extended drain of the transistor; measuring a second leakage current between the gate field plate and the drain; coupling the source field plate to the source; mounting a die containing the transistor on a substrate; coupling the transistor source to a first substrate lead; coupling the transistor drain to a second substrate lead; coupling the gate of the transistor to a third substrate lead; and forming a packaged extended drain transistor by covering a portion of the transistor including the extended drain, and the substrate, with a molding compound. 16. The method of claim 15, wherein the source field plate is coupled to the source by coupling the source field plate and the source to the same substrate lead. 17. The method of claim 16, wherein the source field plate is coupled to the source by a stitch bond. 18. The method of claim 16, wherein the source field plate is coupled to the source by a strip of conductive material. 19. The method of claim 18, wherein the strip of conductive material is deposited using ink jet deposition. 20. The method of claim 16, wherein the source field plate is coupled to the source with a conductive portion of a redistribution layer. |
High Voltage Transistors with Field Plates

TECHNICAL FIELD

The present disclosure relates generally to high voltage transistors, and more particularly to high voltage transistors having field plates.

SUMMARY OF THE INVENTION

In the described example, an apparatus includes a transistor formed on a semiconductor substrate, the transistor including: a transistor gate and an extended drain between the transistor gate and a transistor drain contact; a transistor source contact coupled to a source contact probe pad; a first dielectric layer covering the substrate and the transistor gate; a source field plate on the first dielectric layer and coupled to a source field plate probe pad spaced from and electrically isolated from the source contact probe pad; and the source field plate capacitively coupled to a first portion of the extended drain through the first dielectric layer.

DESCRIPTION OF THE DRAWINGS

FIG. 1 is a cross-sectional view of a high voltage, high electron mobility transistor (hv-HEMT) having a gate field plate, a source field plate, and separate probe pads for the transistor source and the source field plate. FIG. 2 is a cross-sectional view of an hv-HEMT with a gate field plate, a first source field plate and a second source field plate, with individual probe pads for the transistor source and the first and second source field plates. FIGS. 3A and 3B are cross-sectional views of an hv-HEMT with a gate field plate isolated from the transistor gate. FIG. 4 is a cross-sectional view of a high-voltage, drain-extended MOS transistor (hvDEMOS) with a gate field plate, a first source field plate, and a second source field plate, with separate probe pads for the transistor source, the first source field plate and the second source field plate. FIG. 5 is a plan view of a corner of a die having a high voltage extended drain transistor, with the transistor source probe pad and the first and second source field plate probe pads wire bonded to a first lead of a lead frame, and with a transistor drain probe pad wire bonded to a second lead. FIGS. 6A and 6B are, respectively, a plan view and a cross-sectional view of a die with a high voltage extended drain transistor. FIGS. 7A and 7B are, respectively, a plan view and a cross-sectional view of a die with a high voltage extended drain transistor whose source probe pad and first and second source field plate probe pads are coupled together. FIGS. 8A and 8B are, respectively, a plan view and a cross-sectional view of a die with a high voltage extended drain transistor, with the source probe pad and the first and second source field plate probe pads of the transistor coupled by a redistribution layer. FIGS. 9A, 9B, 9C and 9D are cross-sectional views illustrating an arrangement in which the transistor source probe pad, the first source field plate probe pad and the second source field plate probe pad are coupled to the same lead of a substrate using flip-chip ball bonding. FIGS. 10A, 10B, 10C and 10D are cross-sectional views illustrating the main steps in testing and packaging a high voltage extended drain transistor with electrically independent source probe pads and source field plate probe pads. FIG. 11 is a flow chart listing the steps for testing and packaging a high voltage extended drain transistor with electrically separate source probe pads and source field plate probe pads.

DETAILED DESCRIPTION

Corresponding numerals and symbols in the different figures generally refer to corresponding parts unless otherwise indicated. 
The figures are not necessarily drawn to scale.In this description, layers are described as being formed "on" an underlying layer. However, an interposer can be used. For example, the conductor metal may be formed on a dielectric layer known as an "intermetal dielectric" or "IMD". The term "on" includes the alternative of depositing the metal directly on the intermetal dielectric (IMD) layer, as well as depositing the metal on, for example, antireflective coating (ARC) layers, backside antireflective coating (BARC) layers, An alternative on interposers for adhesion or diffusion barrier layers; these interposers improve results, including improved lithography results, reduced delamination, and reduced diffusion of atoms into surrounding materials. Whether or not such an interposer is present, a conductor layer is referred to herein as being "on" or "over" a dielectric layer.Several of the layers depicted in the arrangement are dielectric layers. Examples include pre-metal dielectric (PMD) layers and inter-metal dielectric (IMD) layers, sometimes referred to as "interlayer dielectric" layers (ILD). Although the layers are described in the examples as a single layer, the arrangement also includes multiple dielectric layers. Materials for the dielectric layers of the arrangement include silicon dioxide or simply "oxide", silicon nitride or simply "nitride", silicon oxynitride, silicon carbide, and other dielectrics used in semiconductor devices. Various dielectric film deposition processes can be used for the arrangement, including chemical vapor deposition (CVD), plasma enhanced chemical vapor deposition (PECVD), molecular beam epitaxy (MBE), and the like. Several layers are described herein as "metal layers" or "conductor layers." These metal or conductor layers may be, for example, aluminum or aluminum alloys, copper or copper alloys, and may contain additional plates such as nickel, palladium, gold, silver, platinum, tungsten, titanium, and combinations of these. Sputtering and damascene processes can be used with patterning and etching to form metal layers. The metal layer can be formed using electroplating and electroless plating. The metal layer may be formed using chemical mechanical polishing (CMP).In this description, the term "via" is used. As used herein, vias are connections formed between metal layers separated by dielectric layers. The vias include openings in the dielectric layer and conductive material (eg, conductive plugs or electroplating material) in the openings that fill the openings or form a conductive liner in the openings to electrically connect metals through the dielectric layer Floor.In this description, the term "contact" is used. As used herein, a contact is an area of a conductive material that electrically and physically contacts an area in a semiconductor substrate. Contacts, for example, form electrical connections between the conductor layers and the source, body, or drain regions.In this description, the term "high voltage transistor" is used. As used herein, the term high voltage transistor refers to a transistor that operates to provide voltages greater than 20 volts. The arrangement is useful for transistors using one source field plate or more than one source field plate. High voltage transistors often use source field plates.In this description, the term "wide band gap semiconductor substrate" is used. As used herein, a wide bandgap semiconductor substrate is one of those materials that has a bandgap voltage in the range of 2 to 4 electron volts (eV). 
Example materials include III-V and II-VI compounds. Gallium nitride (GaN), aluminum gallium nitride (AlGaN), aluminum nitride (AlN), and boron nitride are example materials. In the example, a GaN layer is used as a semiconductor substrate. The GaN layer may be an epitaxial layer on an insulator or on another semiconductor substrate. In another arrangement, Ga2O3 (gallium trioxide) may be used. In an example arrangement, multiple metal layers are shown. The number of metal layers used is process dependent and can be larger or smaller than in the examples shown herein. A semiconductor process may contain eight or more layers of metal, although fewer metal layers are typically used. In an extended drain transistor, a transistor with a low voltage gate dielectric is used to switch a high voltage. For example, transistors with gate dielectrics that break down at 5 volts or less can be used to switch up to several hundred volts in high voltage extended drain transistors. The voltage drop across the extended drain region between the drain contact and the low voltage transistor gate is sufficient to protect the low voltage gate dielectric from the high voltage applied to the drain contact. By forming a field plate of conductive material over the drain region, the length of the extended drain region required for this transistor can be reduced. For example, one of the overlying interconnect layers may be used. When the high voltage transistor is turned off, the field plate is coupled to the source contact and remains at ground. The grounded source field plate is capacitively coupled to the extended drain region through the dielectric layer, reducing the surface potential of the extended drain. In an arrangement, the problem of testing a device having a source and a source field plate is solved by providing one probe pad for the source field plate of the device and another probe pad for the source of the device, so that overvoltage stress testing (OVST) of the device can be implemented in wafer probe testing. The arrangement enables dielectric layers to be tested during wafer probe testing and prior to packaging the die, saving cost and time that would otherwise be spent packaging bad die. When the device is packaged, the source and source field plates are electrically coupled together for use as a single terminal in normal operation. FIG. 1 illustrates an enhancement mode GaN high voltage (hv) high electron mobility transistor (hv-HEMT) with gate field plate 112 and source field plate 122. Source field plate 122 is coupled to source field plate probe pad 130, which is separate from source probe pad 128, to improve testability. A GaN hv-HEMT 105 is used in the illustrated example. Hv-HEMTs employing alternative high mobility substrates such as gallium trioxide (Ga2O3) can also be used. Drain extended MOS transistors (DEMOS) with field plates can be used with the arrangements described further below. Either enhancement mode or depletion mode transistors can be used with the arrangement. The ability to separately control the voltage on the source and source field plates of a high voltage extended drain transistor enables testing of the dielectric in the transistor in wafer probe testing, so that defective dielectrics can be detected and these devices can be scrapped during testing. 
This ability to identify faulty devices in wafer probe testing avoids the expense of packaging both good and bad dies, and reduces or eliminates the need to perform burn-in testing on packaged dies to identify which of them have defective dielectrics and should be scrapped. Due to the high cost of the packaging step, scrapping a packaged device is much more expensive and wasteful than identifying and scrapping defective dies at wafer probe testing. The substrate of the example hv-HEMT 105 in FIG. 1 is gallium nitride (GaN) 104 formed on an aluminum nitride insulator (AlN) 102 on a silicon substrate 100. An aluminum gallium nitride (AlGaN) electron generating layer 106 covers portions of the gallium nitride layer 104 and generates a two-dimensional electron gas (shown as dashed line 103 in FIG. 1) in the GaN 104 layer. Other wide band gap semiconductor substrates can be used with the arrangement. Drain extensions formed on a silicon substrate can also be used with the arrangement. The gate 108 over the channel between the source contact 120 and the drain contact 124 forms an enhancement mode hv-HEMT. The length of the extended drain region 107 between the gate 108 and the high voltage drain contact 124 is sufficient to drop enough voltage to protect the transistor gate 108. A pre-metal dielectric (PMD) layer 110 covers the GaN substrate 104 and the gate 108. A gate field plate 112 on or over the PMD layer 110 covers the gate 108 and covers a first portion 109 of the extended drain region adjacent to the gate 108. The gate field plate may be formed from the first interconnect layer. The gate field plate 112 is coupled to the gate 108 through contacts through the PMD layer 110. A first intermetal dielectric (IMD1) layer 116 covers the PMD layer 110 and the gate field plate 112. The source field plate 122 on the first intermetal dielectric (IMD1) layer 116 partially covers a second portion 111 of the extended drain region between the end of the gate field plate 112 and the end of the source field plate 122. The source field plate 122 may be formed from the second interconnect layer. The second IMD layer (IMD2) covers the IMD1 layer and covers the source field plate 122. The source field plate probe pad 130 on the IMD2 layer 126 is connected to the source field plate 122 through vias through the IMD2 layer. Source probe pad 128 on IMD2 layer 126 is connected to source contact 120 by a plurality of vias extending through IMD2 layer 126, IMD1 layer 116 and PMD layer 110. Drain probe pad 134 on IMD2 layer 126 is connected to drain 124 by a plurality of vias extending through IMD2 layer 126, IMD1 layer 116 and PMD layer 110. A contact liner metal 114, such as TiN or TiW, at the bottom of the source 120 and drain 124 contacts forms an ohmic connection to the underlying GaN substrate 104. In operation, when a sufficiently positive potential is placed on the gate 108 relative to the potential of the source contact 120, a conductive channel region is formed under the gate 108 and conduction can occur between the drain and the source. When the gate potential is removed, the electron gas layer is dispersed from below the gate 108 and conduction between the drain and source is blocked. When the hv-HEMT is off, the source contact 120 and the source field plate 122 may be grounded. The surface potential of the extended drain region is then lowered by capacitive coupling through the dielectric between the grounded source field plate 122 and the extended drain region 107. 
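For a rough sense of the coupling strengths involved, a parallel-plate estimate can be used: the per-area capacitance between a field plate and the extended drain scales inversely with the total dielectric thickness between them. The sketch below uses assumed thicknesses and an SiO2-like permittivity purely for illustration; none of these values come from the description above.

```python
# Rough parallel-plate estimate of field-plate-to-extended-drain coupling.
# All numeric values are assumed examples, not taken from the description.
EPS0 = 8.854e-12   # F/m, vacuum permittivity
K_DIEL = 3.9       # assumed relative permittivity (SiO2-like dielectric)

def coupling_per_area(thicknesses_nm, k=K_DIEL):
    """Capacitance per unit area (F/m^2) through a stack of dielectric layers."""
    total_m = sum(thicknesses_nm) * 1e-9
    return EPS0 * k / total_m

# Hypothetical layer thicknesses in nm.
pmd, imd1 = 200.0, 400.0

c_gate_fp = coupling_per_area([pmd])          # gate field plate couples through PMD only
c_source_fp = coupling_per_area([pmd, imd1])  # source field plate couples through IMD1/PMD

# The thicker stack under the source field plate gives weaker coupling, so it
# pulls the surface potential of the extended drain down less strongly than a
# plate separated by the PMD layer alone.
print(c_gate_fp, c_source_fp)
```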
For a given device size (compared to a device without a source field plate), the use of field plates enables higher voltage operation without losses due to current collapse. The coupling length between the gate field plate 112 and the first extended drain region 109 and the coupling length between the source field plate 122 and the second extended drain region 111 depends on the magnitude of the high voltage switched by the hv-HEMT 105 and the breakdown voltage of the gate dielectric under the transistor gate 108 .The separate control of the voltage on the source field plate 122 and the voltage on the source contact 120 can detect defective PMD layer 110 and defective IMD1 layer 116 dielectrics in wafer probe testing. Defect detection at wafer probe testing eliminates the need to package defective cells with good cells and perform burn-in testing later to identify which packaged cells are bad. By not encapsulating bad molds, substantial savings in material, cost, and time are achieved by using the arrangement.In the off state of hv-HEMT 105, source contact 120, source field plate 122, gate 108, and gate field plate 112 may be grounded. When a high voltage is applied to the hv-HEMT drain contact 124, the highest electric field passes through the dielectric stack between the grounded source field plate 122 and the second extended drain region 111 of the underlying layer of the grounded source field plate 122 (IMD1 layer 116/PMD layer 110). If there are defects in this dielectric stack, the leakage current from the source field plate 122 to the drain contact 124 increases, or the dielectric stack is damaged. These changes in device operation can be observed during wafer probe testing, indicating defective devices. Defective dies can be identified during wafer probe testing and can be scrapped.In the arrangement described, providing separate probe pads 130 for the source field plate 122 allows the voltage on the source field plate 122 to be increased while maintaining the voltage on the source contact 120 to ground. An hv-HEMT 105 with a defective PMD layer 110 cannot be identified in wafer probe testing without individually controlling the voltages on the source field plate 122 and the source contact 120 . Raising the voltage on the source field plate 122 reduces the field in the underlying dielectric stack (IMD1 layer 116/PMD layer 110), allowing the potential on the first extended drain region 109 under the gate field plate 112 to rise. This increases the voltage between the extended drain region and the gate field plate 112, thereby stressing the PMD layer 110 layer. In wafer probe testing, devices having defects in the PMD layer 110 under the gate field plate 112 can be identified in this manner and can be scrapped.FIG. 2 illustrates another hv-HEMT 205 in which the source field plate is segmented into a first source field plate 222 and a second source field plate 232 . The addition of the second source field plate 232 enables the hv-HEMT 205 to switch higher voltages (when compared to devices without the second source field plate) with little increase in the length of the extended drain. In FIG. 2 , for the sake of clarity, similar reference numerals are used for similar elements as shown in FIG. 1 . (For example, gate field plate 212 in FIG. 2 corresponds to gate field plate 112 in FIG. 1.) The hv-HEMT 205 in FIG. 2 is similar to the hv-HEMT 105 in FIG. 1, but after forming the first source The length of the source field plate is reduced (compared to 122 in FIG. 
1 ) when the pole field plate 222 is present (compared to the arrangement of FIG. 1 ), and a second source field plate 232 is now added above the IMD2 layer 226 . The second source field plate 232 covers the end of the first source field plate 222 and covers the third portion of the extended drain region 213 between the end of the first source field plate 222 and the high voltage drain contact 224 . The second source field plate 232 is capacitively coupled to the third extended drain region 213 through the dielectric stack including the IMD2 layer 226/IMD1 layer 216/PMD layer 210.When the hv-HEMT 205 is turned off, the source contact 220, the first source field plate 222, and the second source field plate 232 may remain grounded. Since the dielectric stack under the second source field plate 232 (IMD2 layer 226/IMD1 layer 216/PMD layer 210) is thicker than the dielectric stack under the first source field plate (IMD1 layer 216/PMD layer 210), capacitive coupling smaller, resulting in a higher surface potential of the third extended drain region 213 under the second source field plate 232 than that of the second extended drain region 211 under the first source field plate 222 . When a high voltage is applied to the hv-HEMT drain contact 224, the maximum electric field passes through the dielectric stack between the grounded second source field plate 232 and the third extended drain region 213 (IMD2 layer 226/IMD1 layer 216/PMD layer 210). If there are defects in this dielectric stack that increase leakage current through the dielectric stack or cause breakdown of the dielectric stack, the defects can be detected in wafer probe testing and the defective chip can be scrapped.In this arrangement, source probe pads 228, first source field plate probe pads 230, and second source probe pads 228, first source field plate probe pads 230, and second source Separate probe pads for source field plate 232 allow the voltages on source 230, first source field plate 222 and second source field plate 232 to be individually controlled. Without the separate control of the voltages on the first source field plate 222 and the second source field plate 232 as provided using the described arrangement, the gate field plate 212 and the second source field plate 232 cannot be determined in wafer probe testing. Is the dielectric PMD 210 between the underlying first extended drain region 209 defective, or the dielectric stack (IMD1 216/PMD 210) between the first source field plate 222 and the underlying second extended drain region 211 defective. During wafer probe testing, after the dielectric stack (IMD2 layer 226/IMD1 layer 216/PMD layer 210) under the second field plate 232 is determined to pass parametric testing, the voltage may first be boosted on the second source field plate 232, High voltages are allowed to reach and stress the dielectric stack (IMD1 layer 216/PMD layer 210) below the first source field plate 222; and may be second on both the second source field plate 232 and the first source field plate 222 The voltage is raised, allowing the high voltage to reach and stress the PMD layer 210 below the gate field plate 212 . This enables high voltage transistors with dielectric defects in the PMD layer 210 under the gate field plate 212 and/or in the dielectric stack (IMD1 layer 216/PMD layer 210) under the first source field plate 222 to Probes are detected and scrapped.3A and 3B illustrate alternative arrangements. In FIGS. 
3A and 3B, similar reference numerals are used for similar elements as shown in FIG. 2 for clarity of explanation. For example, gate field plate 312 in FIG. 3 corresponds to gate field plate 212 in FIG. 2 . The hv-HEMT 305 in FIG. 3A is similar to the hv-HEMT 205 in FIG. 2 , but the gate field plate 312 in FIG. 3A is electrically isolated from the gate 308 by the PMD layer 310 . This enables the dielectric under the gate 308 to be stressed independently of the PMD layer 310 under the gate field plate 312 . As shown in FIG. 3B, gate field plate 312 is coupled to gate field plate probe pads 340 on IMD2 layer 326, allowing the voltage on gate field plate 312 to be independently controlled. Source contact 320 is shown corresponding to source contact 220 in FIG. 2 . Drain contact 324 is shown corresponding to drain contact 224 in FIG. 2 . In Figure 3A, the extended drain region 307 is between the gate 308 and the drain contact 324, the first portion 309 of the extended drain region is below the gate field plate 312, and the second portion 311 of the extended drain region is located on the first A source field plate 322 is below, and the third portion 313 of the extended drain region is below the second source field plate 332 .Figure 3B is a cross-section through the length of gate electrode 308 along dashed line 3B-3B' in Figure 3A. The gate field plate 312 is electrically isolated from the gate 308 by the PMD layer 310 . A dielectric stack IMD1 layer 316 /IMD2 layer 326 covers the gate field plate 312 . Vias 321 through IMD1 layer 316 connect gate field plate 312 to interconnect leads 322 on IMD1 layer 316 . Vias 323 through IMD2 layer 326 connect interconnect leads 322 to probe pads 340 on IMD2 layer 326 . A separate stack of contacts and vias connects gate 308 to separate probe pads 344 over IMD2 layer 326 . Separate probe pads 340 and 344 for gate field plate 312 and gate 308, respectively, allow for independent control of the voltages on gate 308 and gate field plate 312, respectively. This enables the gate dielectric under the gate 308 and the PMD dielectric 310 under the gate field plate 312 to be independently stressed during wafer probe testing. Before or during packaging of the hv-HEMT, gate field probe pads 340 for gate field plate 312 and gate probe pads 344 for gate 308 are coupled together, as described further below.FIG. 4 shows a high voltage (hv) drain extended MOS transistor (DEMOS) 405 having a first source field plate 422 and a second source field plate 432 . In FIG. 4, similar reference numerals are used for similar elements as shown in FIG. 2 for clarity. For example, the first source field plate 422 in FIG. 4 corresponds to the source field plate 222 in FIG. 2 . An enhancement mode n-type DEMOS (nDEMOS) transistor is used for illustration, but depletion mode nDEMOS and enhancement or depletion mode p-type DEMOS (pDEMOS) transistors may also be used with the arrangement. The extended drain region 407 in DEMOS is lightly doped so that when a high voltage is applied to the drain, it will deplete carriers. A voltage drop occurs across the extended drain depletion region between the drain and gate.The substrate 400 of this example nDEMOS device is p-type doped single crystal silicon. Gate 450 over the channel between source contact 420 and drain contact 424 forms an enhancement mode nDEMOS transistor. 
The length of the extended drain region 407 including the extended drain diffusion 456 between the gate 450 and the drain diffusion 460 is sufficient to drop sufficient voltage between the application to the drain contact 424 and the gate 450 to enable the use of a low voltage transistor gate dielectric. For example, a voltage of several hundred volts can be applied to the drain contact, and the extended drain diffusion 456 can be designed to drop enough voltage to enable transistors with gate dielectrics with gate voltages of 5 volts or less used.A pre-metal dielectric (PMD) layer 410 covers a portion of the substrate 400 and the DEMOS gate 450 . A gate field plate 442 is formed over the PMD layer 410 . The gate field plate 442 may be formed using the first interconnect layer. Vias extending through PMD layer 410 couple gate field plate 442 to gate 450 . The gate field plate 442 covers the first extended drain portion 409 of the extended drain diffusion 456 adjacent to the gate 450 . The IMD1 layer 416 covers the PMD layer 410 and covers the gate field plate 442 . The first source field plate 422 formed on the IMD1 layer 416 covers the second extended drain portion 411 of the extended drain 456 adjacent to the end of the gate field plate 442 . The first source field plate 422 may be formed using the second interconnect layer. The IMD2 layer 426 covers the IMD1 layer 416 and covers the first source field plate 422 . The second source field plate 432 over the IMD2 layer 426 partially covers the third portion of the extended drain region 413 of the extended drain 456 between the end of the first source field plate 422 and the high voltage drain contact 424 . The second source field plate 432 may be formed using a third conductive interconnect layer. Source probe pads 428 on IMD2 layer 426 are connected to transistor source diffusions 458 with a stack of vias extending through the dielectric stack formed by IMD2 layer 426/IMD1 layer 416/PMD layer 410. The drain probe pads 434 on the IMD2 layer 426 are connected to the high voltage drain diffusion 460 with a stack of vias through the dielectric stack IMD2 layer 426/IMD1 layer 416/PMD layer 410. The first source field plate probe pads 430 on the IMD2 layer 426 are connected to the first source field plate 422 with vias extending through the IMD2 layer 426 . The second field plate 432 on the IMD2 layer 426 can be directly probed and used as a second field plate probe pad. Source 420 , first source field plate 422 and second source provided by applying voltages to respective probe pads of source probe pad 428 , first source field plate 430 and second source field plate 432 The individual control of the voltage on the pole field plate 432 allows the PMD layer 410, the dielectric stack (IMD1 layer 416/PMD layer 410) and the dielectric stack (IMD2 layer 426/IMD1 under the gate field plate 442 to be individually controlled in wafer probe testing layer 416/PMD layer 410) stress. Individual control of the voltages on these source field plates and source contacts enables detection of defects in each of these dielectric stacks in wafer probe testing that would not have been possible without these arrangements .During normal operation, source probe pad 428, source field plate probe pad 430 and second source field plate 432 are coupled. 
After testing for defects in various dielectric stacks at the probes, source field plate probe pads 430, source field plates 432, and source probe pads 428 can be coupled together while the die is still in wafer form , or may be coupled together after dicing and during packaging, as described further below.5 illustrates, in partial plan view, source probe pads 528, first source field plate probe pads 530, and second source field pads 536 of an arranged high voltage transistor semiconductor die 536 coupled to the same leadframe lead 570 with wire bonds 576 board 532. In FIG. 5 , similar reference numerals are used for similar elements as shown in FIG. 1 . For example, source probe pads 528 in FIG. 5 correspond to source probe pads 128 in FIG. 1 . The high voltage drain probe pads 534 are coupled to individual leadframe leads 572 with wire bonds 576 .FIGS. 6A and 6B are plan and cross-sectional views, respectively, of laying out a high voltage semiconductor device 636 having source probe pads 628 , first source field plate probe pads 630 , and second source probe pads 630 coupled together with stitched bonds 638 Source field plate probe pads 632 . The suture joint 638 may be formed before or after cutting. In Figures 6A and 6B, similar reference numerals are used for similar elements as shown in Figure 1 . For example, source probe pads 628 in FIGS. 6A and 6B correspond to source probe pads 128 in FIG. 1 . After high voltage transistor device 636 is mounted on lead frame 675 (in FIG. 6B ), source probe pad 628 and high voltage drain probe pad 634 are coupled to lead frame leads 670 and 672 with wire bonds 676 . A protective cap layer 636 of polyimide formed over portions of device 605 is shown.7A and 7B are a partial plan view and a cross-sectional view, respectively, of another arrangement of high voltage transistor devices 705 . Device 705 is shown with source probe pad 728 , first source field plate probe pad 730 and second source field plate probe pad 732 coupled together by shorting bar 788 . A shorting bar coupling gate field plate probe pad 740 and gate probe pad 744 is shown (see Figure 3B). In Figures 7A and 7B, similar reference numerals are used for similar elements as shown in Figures 6A and 6B. For example, source probe pads 728 in Figures 7A and 7B correspond to source probe pads 628 in Figures 6A and 6B. Shorting bars 788 may be added after final testing in wafer form using standard interconnect lithography deposition, patterning and etching processes. In the alternative, the shorting bars may be added in wafer form or in the form of post-cutting ink jet deposition using conductive ink. In the described arrangement, source probe pad 728 , drain probe pad 734 and gate probe pad 744 are coupled to leadframe leads 770 , 772 and 778 of leadframe 775 using wire bonds 776 , respectively. The shorting bar 788 is shown overlying the protective cover layer 736 that covers the device 705 between the probe pads.Figures 8A and 8B are plan and cross-sectional views, respectively, of source probe pad 828, first source field plate probe pad 830, and second source field plate probe pad 832 coupled together with conductive redistribution layer 891 . In Figures 8A and 8B, similar reference numerals are used for similar elements as shown in Figures 6A and 6B. For example, source probe pads 828 in Figures 8A and 8B correspond to source probe pads 628 in Figures 6A and 6B. 
After final probe testing, a dielectric layer 890, such as polyimide, is deposited overlying the probe pads 828, 830, 832, and 834 and the IMD2 layer 836. A redistributed layer of conductive material 891 is deposited on dielectric layer 890 and patterned to form source bond pads 892 and drain bond pads 894 . Vias extending through dielectric layer 890 couple source probe pads 828 , first source field plate probe pads 830 , and second source field plate probe pads 832 to source bond pads 892 . Drain probe pads 834 are coupled to drain bond pads 894 with vias through dielectric layer 890 . Source bond pads 892 and drain bond pads 894 are connected to leadframe leads 870 and 872 on leadframe 875 with wire bonds 876 .Figures 9A-9D show another arrangement in cross-sectional view in which source probe pads 928, first source field plate probe pads 930, and second source field plate probes when the chip is mounted on the substrate The pads 932 are coupled together. In Figures 9A-9D, similar reference numerals are used for similar elements as shown in Figures 6A and 6B. For example, hv-HEMT 905 in Figures 9A to 9D corresponds to hv-HEMT 605 in Figures 6A and 6B. In FIG. 9A , ball bonds 980 are formed on source probe pads 928 , first source field plate probe pads 930 , second source field plate probe pads 932 , and high voltage drain probe pads 934 .Substrate 984 with leads 970 and 972 is illustrated in Figure 9B. The substrate can be a printed circuit board, a lead frame, or any non-conductive substrate with conductive leads. Pre-molded lead frame (PMLF) and molded interconnect substrate (MIS) substrates can be used. Partially etched leadframes can be used with the arrangement.9C shows hv-HEMT 905 flip-chipped on substrate 984 on leads 970 and 972. FIG. 9D shows hv-HEMT 905 flip-chipped on lead frame 975 . Ball bonds 980 on source probe pads 928, first source field plate probe pads 930, and second source field plate probe pads 932 are all coupled together by bonding to the same substrate or leadframe leads 970 . Individual ball bonds 980 are formed between the high voltage drain probe pads 934 and individual substrate or leadframe leads 972 .10A-10D depict, in a series of cross-sectional views, the main steps for forming a packaged high voltage transistor with separate source field plate probe pads coupled to source probe pads. The main steps are also described in the flowchart of FIG. 11 . In Figures 10A-10D, similar reference numerals are used for similar elements as shown in Figure 1 . For example, source probe pads 1028 in FIG. 10A correspond to source probe pads 128 in FIG. 1 .In FIG. 10A (steps 1101, 1103, 1105, 1107, 1109, and 1111 in FIG. 11), voltage stress is applied sequentially to the second source field plate, the first source field plate, and the gate field plate under the Separate dielectric stacks to detect defects in the dielectric stacks. First, in step 1101, ground source probe pad 1028 (see Vsource 1095), ground first source field plate probe pad 1030 (see Vsfp1 1096), ground second source field plate probe pad 1033 Ground (see Vsfp21097) and apply high voltage to drain probe pad 1034 (see Vdrain 1098). This applies high voltage stress to the dielectric stack (IMD2 layer 1026/IMD layer 1016/PMD layer 1010) between the second source field plate 1032 and the third portion below the extended drain region (1013). 
If the leakage between the second source field plate probe pad 1033 and the drain probe pad 1034 exceeds specification, then the dielectric stack is defective and the hv-HEMT can be scrapped (step 1103). Next, in step 1105, the voltage on the source probe pad 1028 is kept at ground (see Vsource 1095) and the voltage on the second source field plate probe pad 1033 is raised (see Vsfp2 1097), causing the potential of the portion of the extended drain region under the second source field plate 1032 to rise and applying a voltage stress to the dielectric stack (IMD1 layer 1016/PMD layer 1010) between the first source field plate 1022 and the second portion (1011) of the extended drain region below it. The first source field plate probe pad 1030 is kept grounded (see Vsfp1 1096). If the leakage current between the first source field plate probe pad 1030 and the drain probe pad 1034 exceeds specification, the dielectric stack (IMD1 layer 1016/PMD layer 1010) is defective and the hv-HEMT can be scrapped (step 1107). Third, in step 1109, the voltages on the first and second source field plate probe pads 1030 and 1033 are raised (see Vsfp1 1096 and Vsfp2 1097), causing the potentials of the second and third portions of the extended drain region below the first and second source field plates 1022 and 1032 to rise, thereby applying a voltage stress to the PMD layer 1010 below the gate field plate 1012. If the leakage current between the gate field plate 1012 and the drain probe pad 1034 exceeds specification, then the PMD layer 1010 is defective and the hv-HEMT can be scrapped (step 1111). In this way, using the described arrangement, the dielectric stack under each field plate, including the second source field plate 1032, the first source field plate 1022, and the gate field plate 1012, can be advantageously stressed individually, and defective dielectric stacks can be detected. The stress test is illustrated with two source field plates 1022 and 1032; one source field plate or more than two source field plates can also be used. When using the arrangement, dielectric defects can be detected in wafer probe testing, whereas in previous methods, dielectric defects were only detected after packaging was completed. In FIG. 10B (step 1113, FIG. 11), the high voltage transistor die 1005 is mounted on a substrate. A lead frame 1075 substrate is used for illustration. In FIG. 10C (step 1115, FIG. 11), wire bonding is used to form stitch bonds 1038 that couple together the first source field plate probe pad 1030 and the second source field plate probe pad 1033 and couple them to the source probe pad 1028. Wire bonds are also formed between the source probe pads 1028 and the lead frame leads 1072 of the lead frame 1075 and between the drain probe pads 1034 and the lead frame leads 1074 (step 1117, FIG. 11). Alternatively, other methods of connecting the source field plate probe pads to the source probe pads, such as those described in FIGS. 5, 6, 7, 8, and 9, may be used. FIG. 10D (step 1119, FIG. 11) shows a packaged high voltage transistor with the first source field plate probe pad 1030 and the second source field plate probe pad 1033 coupled to the source probe pad 1028. The high voltage transistor 1005, the wire bonds 1076, and a portion of the lead frame 1075 are partially enclosed in molding compound 1099 to form a packaged high voltage transistor 1095. Modifications are possible in the arrangements described, and other alternative arrangements are possible within the scope of the claims. |
A simple instruction set processor preferably utilizes six primary components: a fetch unit, an instruction and address register, a controller/decoder, an arithmetic logic unit, an address multiplexer, and a storage multiplexer. The processor utilizes a data stream containing within it the address for a subsequent instruction to be executed by the processor, thereby avoiding the need for registers of the type utilized in prior art processors. As a result, the processor utilizes a minimal number of registers to perform its operations. The processor utilizes an instruction set in which every instruction contains a JUMP to the next instruction. By utilizing JUMPs in every instruction and providing the address to which the processor is to JUMP, there is no need for address counters and register pointers. Also, extremely fast state changes are facilitated because the contents of only one register identifying a next address must be saved or restored. By eliminating data registers, data streams of any width may be supported by suitably utilizing a plurality of processors connected in parallel. The elimination of multiple registers also enables the processor to more easily be embedded within memory arrays themselves. |
What is claimed is: 1. A system for processing data, comprising:a fetching unit operable to fetch a data stream directly from a first location within a memory device designated by a first address, the data stream including an instruction, a next address and a destination address, each of the addresses designating a location within the memory device; a storage device in communication with the fetching unit, the storage device being operable to temporarily store the instruction, the next address and the destination address; and a control unit in communication with the storage device, the control unit being operable to receive the instruction and control an implementation of the instruction using the addresses. 2. The system of claim 1 wherein the storage device further comprises an instruction register and an address register.3. The system of claim 1 wherein the control unit is structured to implement the instruction by directing the fetching unit to retrieve a second data stream from a location designated by the next address, returning a first result, and directing the fetching unit to retrieve a third data stream from a location designated by the destination address responsive to a second result being returned.4. The system of claim 1 wherein the first address, the next address and the destination address designate the same location within a memory device, and wherein the system further comprises a comparator that is structured to compare the next address and the destination address and to generate a halt signal when both addresses designate the same location within a memory device as the first address.5. The system of claim 1 wherein the data stream further comprises a first source address that identifies a location within the memory device.6. The system of claim 5 wherein the first source address provides a first operand.7. The system of claim 6 wherein the control unit responds to the first source address by directing the fetching unit to fetch a first operand from a location in the memory device identified by the first source address.8. The system of claim 7 wherein the control unit is operable to cause the first operand to be saved at the destination address responsive to the fetching of the first operand.9. The system of claim 5 wherein each of the next address, the destination address and the first source address reference a respective location within the memory device.10. The system of claim 5 wherein at least one of the next address, the first source address, and the destination address references a location within a second memory device.11. The system of claim 5 wherein the system further comprises an instruction implementation unit in communication with the control unit and the fetching unit, the implementation unit being operable to implement the instruction by receiving a first operand retrieved by the fetching unit from a location within the memory device referenced by the first source address, utilizing the operand under the direction of the control unit, and outputting a result of the implementation.12. The system of claim 11 wherein the instruction implementation unit further comprises an arithmetic logic unit.13. The system of claim 12 wherein the instruction further comprises a single operand ALU instruction.14. 
The system of claim 11 wherein the data further comprises a second source address, and the control unit is operable to direct the fetching unit to retrieve a second operand from a location within a memory device identified by the second source address, the fetching unit being operable to provide the second operand to the instruction implementation unit, the instruction implementation unit being operable to utilize at least one of the first operand and the second operand while implementing the instruction under the direction of the control unit and outputting a result of the implementation.15. The system of claim 14 wherein the instruction further comprises a multiple operand ALU instruction.16. The system of claim 11 wherein the system further comprises an address selection unit in communication with the storage device and the control unit, the address selection unit being operable, as directed by the control unit, to select a fetch address from at least one input address.17. The system of claim 16 wherein the address selection unit receives as an input address at least one address selected from the group consisting of: a next address, a first source address, a second source address, a destination address, and a conditional address.18. The system of claim 17 wherein the system further comprises:a first designating unit; and a second designating unit, the second designating unit being operable to receive a status indicator from the instruction implementation unit and communicate the status indicator to the control unit. 19. The system of claim 18 wherein the control unit is structured to determine which of the input addresses is used to designate the fetch address from the address selection unit based upon a status signal provided by each of the designating units.20. The system of claim 18 wherein the first designating unit is operable to indicate whether a zero status has occurred in the instruction implementation unit.21. The system of claim 18 wherein the second designating unit is operable to indicate whether a carry status has occurred in the instruction implementation unit.22. The system of claim 1 wherein the memory device comprises at least one device selected from the group consisting of: a random access memory, a read only memory, a hard magnetic disc, a cdrom, a digital versatile disc, a cache memory, a magnetic storage device, and an optical storage device.23. The system of claim 1 wherein the system further comprises a storage selection unit in communication with the control unit and the storage device, the storage selection unit being operable, as directed by the control unit, to select a storage address from at least one destination address.24. The system of claim 23 wherein the storage selection unit receives as a destination address at least one address selected from the group consisting of: a second source address, a destination address, and a conditional address.25. The system of claim 24 wherein the control unit is operable to direct the storage selection unit to select a destination address based upon a type of instruction received by the control unit.26. The system of claim 23 wherein the storage address designates a location within a memory device in which the result is to be stored.27. The system of claim 1 wherein the system is embedded within a memory device.28. The system of claim 1 wherein the data stream is fetched directly from a memory device without being stored in any registers between the fetching unit and the memory device.29. 
The system of claim 1 wherein the system is utilized in conjunction with a central processing unit.30. A system for processing data obtained directly from a storage device, the system comprising:a fetching unit structured to fetch a data stream directly from a first location within a memory device designated by a first address, the data stream comprising an instruction, a next address, a first source address, a second source address and a destination address, each of the addresses designating a location within a memory device; a storage device in communication with the fetching unit, the storage device being structured to temporarily store the instruction, the next address and the destination address; a control unit in communication with the storage device, the control unit being structured to receive the instruction, control implementation of the instruction, and control a result of the implementation of the instruction; an instruction implementation unit in communication with the control unit and the fetching unit, the instruction implementation unit being structured to implement the instruction by receiving a first operand retrieved by the fetching unit from a location within the memory storage device referenced by the first source address, and by receiving from the fetching unit a second operand from a location within a memory device identified by the second source address; the instruction implementation unit being structured to utilize the first operand and the second operand while implementing the instruction under the direction of the control unit and to output a result of the implementation; an address selection unit in communication with the storage device and the control unit, the address selection unit being structured to select a fetch address from at least one input address as directed by the control unit; first and second designating units, each of which is structured to receive a status indicator from the instruction implementation unit and communicate the status indicator to the control unit, the control unit being operable to determine which of the input addresses to designate as the fetch address based upon a status signal provided by each of the designating units; and a storage selection unit in communication with the control unit and the storage unit, the storage selection unit being structured to select from at least one destination address a storage address for a location in a memory device in which the result is to be stored as directed by the control unit. 31. The system of claim 30 wherein the system is embedded within a memory device.32. 
A microprocessor comprising:a controller/decoder; an arithmetic logic unit; an address multiplexer; a storage multiplexer; and a fetch unit structured to provide an instruction responsive to receiving a data stream directly from a memory device, the data stream comprising an instruction, a next address, a source address and a destination address; an instruction and address register receiving the instruction from the fetch unit, the instruction and address register being structured to provide the instruction to the controller/decoder to allow the controller/decoder to decode the instruction, the instruction and address register being further structured to direct the fetch unit to obtain at least one operand specified by the source address, to direct the arithmetic logic unit to perform an operation upon the operand, and to direct the storage multiplexer to save a result of the operation at a location in a memory device designated by the destination address, the controller/decoder being structured to direct the fetch unit to obtain a second data stream from a location in a memory device designated by the next address. 33. The microprocessor of claim 32 wherein the instruction further comprises a MOVE instruction.34. The microprocessor of claim 32 wherein the instruction further comprises a JUMP instruction.35. The microprocessor of claim 32 wherein the instruction further comprises a HALT instruction.36. The microprocessor of claim 32 wherein the instruction further comprises a special dual operand ALU instruction.37. The microprocessor of claim 32 wherein the instruction further comprises a single operand ALU instruction.38. The microprocessor of claim 32 wherein the instruction further comprises a dual operand ALU instruction.39. The microprocessor of claim 32 wherein the microprocessor is utilized to control an Input/Output bus for a computer system.40. The microprocessor of claim 32 wherein the microprocessor is utilized in conjunction with a central processing unit.41. The microprocessor of claim 32 wherein the fetch unit receives the data stream from a memory device selected from the group consisting of: random access memory, read only memory, hard magnetic disc, cdrom, digital versatile disc, cache memory, floppy magnetic disc, a magnetic storage device, and an optical storage device.42. A method for processing data, the method comprising:obtaining a data stream from a location within a memory device designated by a first address, the data stream including an instruction, a next address, and a destination address; decoding the instruction; determining whether the decoded instruction contains a JUMP; jumping to the next address when the decoded instruction is a JUMP and a status indicator designates the next address as a fetch address; jumping to the destination address when the decoded instruction is a JUMP and the status indicator designates the destination address as the fetch address; and comparing the next address and the destination address against the first address and halting the processing of data by the simple instruction set processor when the next address, the destination address, and the first address designate the same location within a memory device. 43. 
A method for processing data, the method comprising:obtaining a data stream from a location within a memory device designated by a first address, the data stream including an instruction, a next address, a destination address, and a first source address, the first source address identifying a location within a memory device at which a first operand is stored; fetching the first operand from the first source address; decoding the instruction; determining whether the decoded instruction contains a JUMP; jumping to the next address when the decoded instruction is a JUMP and a status indicator designates the next address as a fetch address; and jumping to the destination address when the decoded instruction is a JUMP and the status indicator designates the destination address as the fetch address. 44. The method of claim 43 wherein the method further comprises:determining if the data stream contains a second source address; fetching a second operand stored at a location within a memory device designated by the second source address when the second source address is present; executing the instruction upon the first operand and the second operand; and storing a result of the execution of the instruction in the destination address. 45. The method of claim 43 wherein the method further comprises:determining that the data stream does not contain a second source address; executing the instruction upon the first operand; and storing a result of the execution of the instruction in the destination address. |
FIELD OF THE INVENTIONThe present invention relates to processors for computer systems and, more specifically, to processors utilized in conjunction with and/or embedded within memory devices.BACKGROUND OF THE INVENTIONAutomated systems commonly utilize Central Processing Units (CPUs) connected to various peripheral devices including caches, memory storage devices, and numerous peripherals over various busses and other interconnections. Generally, designers of automated systems have strived to improve system performance by increasing CPU processing speeds, bus speeds, memory utilization rates, and various other parameters. Additionally, significant efforts have been undertaken to simultaneously reduce the size and power requirements of such systems. While significant reductions in size and power requirements have occurred, software programs used by many of today's systems have tremendously increased in size and complexity. As a result, today's designers are often faced with the daunting challenge of having to squeeze ever more data, including video data and audio data, through CPUs at ever increasing rates while decreasing the size and power requirements of such systems.For many applications, the ability of CPUs to process large quantities of data is often dictated by how fast, how much, and how quickly the CPU can obtain information from and/or write to memory or other data storage devices. As is well known in the art, today's systems often include multiple data storage devices, such as Random Access Memory (RAM), Read Only Memory (ROM), and various other peripheral storage devices such as hard disc drives, and write/rewritable magnetic and optical storage devices. Additionally, CPUs often obtain data from various non-localized data storage devices via communications networks such as the Internet. Since each storage device often contains data which is specified in variable word lengths and since today's CPUs generally utilize registers of fixed widths, the CPU commonly has to repeatedly request segments of the data until an entire data word is processed.In most computer applications, the process of retrieving data from a memory location often takes longer than the time necessary to actually process the given quantity of data because the ability of the CPU to process information is significantly greater than its ability to retrieve information from memory storage devices. In order to speed up the processing capabilities of CPUs, many system designers utilize cache memory, which may be built onto the same chip as the processor itself. While caching certain segments of code is helpful in processing routine instructions, for many applications, such as data mining, speech recognition and video image processing, caching such information is generally not practical. As a result, for many applications, CPUs generally have to recall vast quantities of information from memory storage devices in byte sizes set by the size of registers.Additionally, since registers are commonly provided in pre-set widths (i.e., 64 bits or 32 bits), multiple registers are often needed to download/retrieve large quantities of data from a storage device within a reasonable time period. These registers are often directed to download data and then hold it until the CPU is ready to perform a specific task. When configured in this manner, many systems result in CPUs with large numbers of registers, each of which increases power requirements and inhibits system miniaturization. 
For example, the popular Pentium III(R) processor utilizes over 100 registers to support its various features and functions.As is commonly known in the art, CPUs often begin the processing of large quantities of data by first determining a location for the data (i.e., the address), then fetching the data provided at the address, processing the fetched data, determining a location (i.e., a second address) where the result of the data processing is to be sent, sending the result to the second location, and then determining an instruction pointer, which preferably contains the address for the next instruction. Generally, the first address, the data, the second address, the result location, and the instruction pointer are provided in a memory array in sequential order. The memory is generally configured in sequential order during compiling so that the number of JUMPs is limited and the processing needed to determine which instruction is to be processed next is reduced. While compiling a program to reduce the number of JUMPs is often desirable from a CPU processing viewpoint, compiling often results in memory arrays which are not utilized to their maximum capacity. Instead, many memories often have significant blocks in which data may be stored that are never used.Additionally, while compilers often attempt to create software instructions that flow from one sequence line to the next, in reality, much of today's software code contains JUMPs, conditional branches, loops, and other data flow techniques. Since these software programs often do not naturally flow from one line to the next, system designers generally must also keep track of code locations via address pointers, and various other devices, each of which requires additional registers and additional power.Additionally, currently available CPUs commonly require multiple instructions and processing steps to accomplish some of the simplest tasks, such as adding two operands. For example, currently available CPUs often execute an instruction requiring Operand 1 to be added to Operand 2 by performing the following steps:1. Fetch ADD instruction from location pointed to by Instruction Pointer ("IP"), and load the instruction into an instruction register;2. Decode the instruction and store in instruction register;3. Access a location in memory where a first operand is located, obtain the value for the first operand and store it in a temporary register;4. Access a second location in memory where a second operand is located, obtain the value for the second operand and store it in a temporary register;5. Perform the operation specified in the instruction register on the first and second operands by transferring the instruction and the first and second operands from their respective registers to the ALU;6. Determine where the result of the ALU process is to be stored;7. Store the result data to the determined location; and8. Determine the next address for the next instruction, which may require a JUMP to another memory location.While the above operation may be accomplished extremely quickly for a single mathematical calculation, today's CPUs often are required to process millions of transactions a second. 
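Purely for illustration, the eight-step register-bound sequence above can be modeled in a few lines of software; the function and the dictionary memory below are assumptions for the sketch and are not taken from any particular prior art architecture.

```python
# Hypothetical sketch of the conventional, register-based ADD sequence above.
# Names (ip, temp1, temp2) are illustrative only.

def conventional_add(memory, ip):
    instr = memory[ip]                 # 1-2: fetch and decode the ADD instruction
    op, src1, src2, dest = instr       #      held in an instruction register (here, locals)
    temp1 = memory[src1]               # 3: load the first operand into a temporary register
    temp2 = memory[src2]               # 4: load the second operand into a temporary register
    result = temp1 + temp2             # 5: the ALU operates on the register contents
    memory[dest] = result              # 6-7: determine the destination and store the result
    return ip + 1                      # 8: advance the instruction pointer (or JUMP)

memory = {0: ("ADD", 10, 11, 12), 10: 3, 11: 4, 12: 0}
next_ip = conventional_add(memory, 0)  # memory[12] now holds 7
```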
When operations are performed at this magnitude (millions of transactions a second), the constant reading, storing, addressing, and writing to and from memory via registers may significantly degrade a system's performance.Therefore, since today's CPUs often spend inordinate amounts of time determining from where data and instructions are to be obtained and/or stored, storing the data, processing data, determining where the result of the data processing is to be stored, and then actually storing the result, a system is needed that reduces the amount of time a CPU spends determining where to obtain data and actually fetching the data needed for processing.Additionally, many of today's systems control numerous input/output devices, all of which are constantly requesting processor time. Each time a processor determines that a different Input/Output (I/O) device or a different processing routine needs to be executed, the processor commonly performs a state change. In a Windows(R) multi-tasking environment, state changes occur often because the various devices connected to the I/O bus are continuously jostling for the attention of the processors.As shown in FIG. 3A, the process by which many currently available processors perform a state change often requires numerous steps. The state change operation begins at 302 when a processor receives a request to stop processing a first task and to begin, as soon as possible, processing a second task. When a state change request is received, the CPU sets a register pointer equal to zero at step 304 and begins transferring the contents of each register utilized by the CPU into memory at a location specified by a stack pointer. The data transfer continues through steps 306-310 until the contents of each register utilized by the CPU are copied to a block of memory, often in sequential order. As each register is transferred, the CPU also increments the stack pointer and a register pointer until the value of the register pointer equals the total number of registers whose contents need to be saved. At this point, the CPU is ready to implement the desired state change (i.e., the registers may now be loaded with new instructions, addresses, and operands). For advanced CPUs, such as Pentium IIIs, which utilize hundreds of registers, implementing a state change can often take many microseconds.FIG. 4A shows a process 400 by which many current systems recover from a state change (i.e., resume the processing interrupted by the state change). Generally, the process 400 of recovering to the first state requires as many processing steps as does the changing of states to process the second task. As shown, the recovery operation begins at 402 when the CPU receives a direction that indicates the second task has been completed and that the first task may be restored. Next, the processor sets a register pointer equal to or less than the number of registers available to the CPU at step 404, and begins transferring the contents of memory from the location specified by the stack pointer into the appropriate registers until the contents have been restored for all of the registers which changed states in steps 406-410. After all of the registers are restored, the CPU then resumes processing the steps needed for the first task.In many environments, such as the Microsoft(R) Windows(R) operating system, state changes occur frequently. These state changes often interrupt the performance of user interface devices, such as keyboards and audio and video display devices. 
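A minimal sketch of the prior-art save and restore loops of FIGS. 3A and 4A, assuming a simple list-based register file and stack (both names are hypothetical), is:

```python
# Hypothetical model of the prior-art state change of FIGS. 3A and 4A:
# every register is pushed to a stack on a save and popped on a restore.

def save_state(registers, stack):          # FIG. 3A, steps 302-310
    for value in registers:                # one memory write per register
        stack.append(value)

def restore_state(registers, stack):       # FIG. 4A, steps 402-410
    for i in reversed(range(len(registers))):
        registers[i] = stack.pop()         # one memory read per register

registers = [0] * 100                      # e.g., on the order of 100 registers
stack = []
save_state(registers, stack)               # 100 transfers to change state
restore_state(registers, stack)            # 100 more transfers to recover
```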
Therefore, a system is needed which enables a CPU to more efficiently perform state change operations.SUMMARY OF THE INVENTIONThe present invention provides a microprocessor which does not utilize registers to temporarily store data used by an arithmetic logic unit, a controller, or similar component. More specifically, the present invention provides a microprocessor which utilizes a data stream containing embedded addresses to process operations and read and write data directly from memory and other data storage devices.By providing an address embedded within a data stream, the present invention allows a microprocessor to be utilized which does not store data (i.e., instructions, addresses, and operands) in registers prior to and/or after execution of a processing step. Instead, the present invention preferably utilizes addresses embedded within the data stream to immediately determine from where operands are to be obtained, where a result of a processing step is to be stored, and where a next instruction is located. By preferably utilizing orthogonal data streams, the present invention enables a microprocessor to directly access data to/from storage devices. As such, the processor of the present invention is not limited by registers as to the size of words which may be processed and encourages the use of parallel microprocessors to simultaneously manipulate data streams of any width. Similarly, the present invention eliminates the need for address pointers, stack pointers, register pointers and various other flow and control registers and devices commonly utilized by today's CPUs to determine where data is to be obtained and/or stored.By providing within a data stream an address for the next instruction to be implemented by a microprocessor, the present invention is able to accomplish every transition from a first instruction to a second instruction via a JUMP. Utilizing JUMPs instead of address counters/pointers greatly simplifies the logic utilized when compiling software code sequences. Instead of compiling a software routine such that instructions follow each other in sequential order (and thus JUMPs are minimized), every transition between instructions is treated as a JUMP, thereby encouraging a compiler to maximize code usage, minimize memory needs, expand code sequences, and compile software code based upon considerations other than minimizing JUMPs. As such, the compiler is able to maximize the utilization of memory.The present invention also preferably simplifies state change operations. Instead of requiring a processor to record the values of numerous registers every time a state change is requested, only the address for the next instruction must be recorded, preferably in a single register, prior to performing the state change. Similarly, when recovering from a state change, only the address for the next instruction must be restored, and there is no need to restore registers with values of operands, instructions, destination addresses, or the like because such data is preferably obtained directly from memory and is not stored temporarily in registers.The foregoing and various other features and functions of the present invention are explained in detail with reference to the drawing figures and the following detailed description of the invention.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a block diagram of a processor that is substantially registerless according to one embodiment of the present invention.FIG. 
2 is a flow diagram of a process by which the embodiment of FIG. 1 retrieves data from memory and processes such data without using registers.FIG. 3A is a flow diagram representing the processing steps by which a prior art processor changes states.FIG. 3B is a flow diagram representing the processing steps by which a processor used in the embodiment of FIG. 1 changes states.FIG. 4A is a flow diagram representing the processing steps by which a prior art processor returns to an original state after a state change.FIG. 4B is a flow diagram representing the processing steps by which a processor used in the embodiment of FIG. 1 returns to an original state after a state change.DETAILED DESCRIPTION OF THE INVENTIONAs shown in FIG. 1, one embodiment of a central processing unit "CPU" 100 according to the present invention provides a Simple Instruction Set Computer or processor (SISC) that drastically reduces the number of registers needed to store and process data. Instead of providing numerous registers into which data (data herein includes instructions, addresses and operands) is temporarily stored, the CPU 100 utilizes only one instruction and address register to process CPU operations.The CPU 100 accomplishes the aforementioned reductions in registers (and the accompanying reductions in size, speed and power requirements for the CPU) by utilizing an instruction set that encodes addresses directly into the data stream. As shown in FIG. 1, the CPU 100 utilizes many of those components which are commonly available in prior art CPUs, including an Arithmetic Logic Unit (ALU) 102, an Instruction and Address Register (IAR) 104, a Controller/Decoder 106 (ConDec), a Fetch Unit (FU) 110, various multiplexers 108 and 112, various flip-flops for Carry 116 and Zero 118 bits, and reset 120 and start vector 122 inputs, which allow the CPU 100 to restart when necessary. However, unlike prior art CPUs, the CPU 100 does not utilize reads/writes from/to various registers and instead directly reads and stores information from/to a Storage Unit 114 (i.e., a memory device).The CPU 100 is preferably implemented with a reduced set of instructions that are highly orthogonal. As is commonly known in the art, an orthogonal instruction set is generally easier to decode than a corresponding non-orthogonal instruction set because each orthogonal instruction provides basically the same information in the same place and provides no preference for registers into which the data is to be temporarily stored. As such, the processor is not constrained by register requirements and may utilize any memory location as the destination or the source. In the CPU 100, an instruction preferably has the following format: [Operation] [NEXT ADDRESS] [SOURCE ADDRESS 1] [SOURCE ADDRESS 2] [DESTINATION ADDRESS] wherein Operation specifies the task to be performed by the ALU 102; SOURCE ADDRESS 1 and SOURCE ADDRESS 2 specify the location of the first and second operands, respectively, on which the ALU will perform the specified operation; NEXT ADDRESS specifies the location in memory from which the next instruction will be obtained; and DESTINATION ADDRESS specifies the location where the result of the ALU operation is to be stored. However, those skilled in the art appreciate that the CPU 100 may instead utilize non-orthogonal instructions, as desired, upon suitable modification of the data stream and processing elements. 
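Purely as an illustration of the orthogonal five-field format above, one hypothetical software representation is sketched below; the field names follow the text, while the container type and the example values are assumptions.

```python
# Hypothetical representation of the orthogonal instruction format described above.
# Nothing about bit widths or physical packing is implied by this sketch.
from dataclasses import dataclass

@dataclass
class Instruction:
    operation: str            # task to be performed by the ALU
    next_address: int         # location of the next instruction (every instruction JUMPs)
    source_address_1: int     # location of the first operand
    source_address_2: int     # location of the second operand (if any)
    destination_address: int  # location where the ALU result is stored

# Example data stream entry: ADD the words at 0x20 and 0x24, store the sum at
# 0x28, then continue execution at 0x40.
stream_entry = Instruction("ADD", 0x40, 0x20, 0x24, 0x28)
```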
When non-orthogonal data streams are desired, control bits, sync patterns, and other devices may be suitably utilized.By utilizing the above instruction format (or a derivation thereof), wherein the NEXT ADDRESS is embodied in the data stream, the CPU 100 provides those various data processing features commonly associated with CPUs without utilizing registers to temporarily store data. As those skilled in the art readily appreciate, the above instruction data structure and method of processing instructions are significantly different from those structures and methodologies commonly utilized in today's CPUs. The computer system preferably does not utilize an instruction counter to track instruction locations and instead embeds a NEXT ADDRESS within each instruction. In its most simple form, the above instruction format provides a JUMP between every instruction. Since a JUMP between each instruction is preferably utilized, software programs utilized by the computer system are not constrained during compiling by requirements that limit the number of JUMPs executed within a program.Additionally, since the CPU 100 does not constrain compiling by limiting the number of JUMPs, the CPU 100 provides a system that enables a compiler to compile a software program based upon other parameters, for example, the tasks to be completed by the CPU. Similarly, a compiler is able to maximize the utilization of memory. By not requiring instruction sequences to be stored in a specific order (i.e., by configuring each instruction as a JUMP to a subsequent instruction), the computer system allows a compiler to utilize the otherwise unused blocks of memory commonly present in most memory arrays. Additionally, those skilled in the art appreciate that, as memory utilization is maximized, the actual size of a memory array may be reduced. Lastly, those skilled in the art appreciate the various methods by which a data structure may be efficiently compiled in light of the addressing features provided in each instruction by the computer system.Additionally, in the preferred embodiment, address fields in each instruction also contain cachability information which is encoded in a single bit or in multiple bits (depending upon the types of cachability supported by the specific embodiment). During compiling, these cachability bits indicate whether specific program instructions are desired to be cached, thereby further increasing the processing speed of the CPU by allowing commonly executed data streams to be placed in cache instead of other memory storage devices. Just as the CPU 100 is able to operate without registers by reading and writing data directly from/to memory devices, the CPU 100 may also achieve significant increases in processing speed by directly reading and writing data from/to cache. Therefore, the CPU 100 provides efficient caching of data at the time of compiling and the efficient utilization of such cached information during processing.Additionally, since the CPU 100 does not utilize registers to store data and/or instructions, the CPU is not limited by a predetermined maximum instruction length. Unlike prior art systems wherein the amount of data which can be processed by an ALU on a given cycle is limited by a register size, the CPU 100 may be configured with multiple ALUs (for example, in parallel, if needed) to process large data streams. Similarly, extremely small instructions may be efficiently processed without wasting space and/or power on unnecessarily large data registers. 
Thus, the CPU 100 provides a system that can support instructions of varying lengths and thereby maximize the data processing capabilities of the CPU while reducing power and space requirements.As mentioned previously, the CPU 100 is not limited to any specific instruction set and may be configured with a limited instruction set designed to accomplish certain tasks. An illustrative example of an instruction set for the CPU 100 might include a MOVE instruction, a JUMP instruction, a Single Operand ALU Instruction (SOAI), and a Multiple Operand ALU Instruction (MOAI). Each of these exemplary instructions is described in greater detail below.A MOVE instruction provides that data located at the SOURCE ADDRESS is moved to a DESTINATION ADDRESS and then processing continues at the NEXT ADDRESS. An exemplary embodiment of a MOVE instruction preferably consists of the following format: [MOV] [NEXT ADDRESS] [SOURCE ADDRESS] [DESTINATION ADDRESS] where the locations in the data stream of the NEXT ADDRESS, SOURCE ADDRESS, and DESTINATION ADDRESS are orthogonal relative to other data streams. Similarly, for an instruction in which multiple data widths may need to be supported, a MOVE instruction is preferably implemented as a MOVn, where "n" encodes the different data widths supported. For example, "n" might be two bits long and support data widths varying from 8 bits to 64 bits, as follows: n = 00: 8 bits (i.e., one byte); n = 01: 16 bits; n = 10: 32 bits; n = 11: 64 bits.Another instruction the present invention preferably includes in an instruction set is a conditional or unconditional JUMP instruction. Such an instruction is preferably formatted as follows: [JC] [NEXT ADDRESS] [CONDITIONAL ADDRESS] wherein the OP field defines the JUMP condition. In the preferred embodiment, the JUMP condition is designated by a ZERO or CARRY bit based upon a result of the ALU's operations. However, those skilled in the art appreciate that a JUMP condition may be based upon any variable or parameter. As such, the present invention is not to be construed as being limited to any specific embodiment of a JUMP condition. When a complement to a given JUMP condition is desired, those skilled in the art appreciate that a separate instruction is not needed. Instead, the compiler creates a complement instruction by suitably swapping the address fields. Similarly, an unconditional JUMP may be created by merely setting both the "NEXT ADDRESS" and the "CONDITIONAL ADDRESS" fields to point to the same address (i.e., the desired destination).Additionally, as is commonly known in the art, a JUMP can be used to create a HALT instruction. The computer system 10 is designed to support this mode of operation by preferably setting both of the address fields to the same address as the JUMP instruction. When configured in this manner, the present invention suitably repeats the JUMP instruction by jumping back to the same instruction and thereby prohibiting the processor from performing any other operations. In such an embodiment, additional hardware elements, such as a comparator, may be utilized to detect the existence of a looping condition and power down the processor until an interrupt is received.The CPU 100 also supports logical and arithmetic operations. Preferably, the ALU 102 supports an instruction set which includes the following operations: NAND, NOR, AND, OR, NOT, ADD, SUB, SHIFT/RDT, RST, and CMP. 
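Returning to the JUMP-based control flow described above, a minimal sketch of how the conditional JUMP and the JUMP-based HALT might be evaluated is shown below; the flag and field names are assumptions for the sketch, not part of the disclosure.

```python
# Hypothetical evaluation of the conditional JUMP described above: the fetch
# address is either NEXT ADDRESS or CONDITIONAL ADDRESS, chosen by a status bit.

def select_fetch_address(current, next_addr, cond_addr, zero, carry, condition="ZERO"):
    taken = zero if condition == "ZERO" else carry
    fetch = cond_addr if taken else next_addr
    # HALT convention: both address fields point back at the JUMP itself.
    halted = (next_addr == current) and (cond_addr == current)
    return fetch, halted

fetch, halted = select_fetch_address(current=0x10, next_addr=0x10,
                                     cond_addr=0x10, zero=False, carry=False)
# fetch == 0x10 and halted is True: the processor loops on the same instruction
# until a reset or interrupt is received.
```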
Those skilled in the art will readily understand the various functions performed by these ALU operations, and a further explanation will therefore be omitted in the interest of brevity. Additionally, the CPU 100 supports single, double, and multiple operand instructions. For example, the CPU 100 is preferably configured to support a SOAI in the following format: [ALU] [NEXT ADDRESS] [SOURCE ADDRESS] [DESTINATION ADDRESS]. As is commonly known in the art, a SOAI commonly includes the shift (rotate) instruction and the invert (NOT) instruction. For a shift instruction, the number of bits to be shifted is preferably encoded into the [OP] field; however, those skilled in the art appreciate that such parameters may be specified in various other manners, including additional data fields, if necessary.Additionally, the computer system also supports special dual operand ALU instructions. These instructions generally use further coding of the [OP] bits to specify the special instruction. As may be appreciated by those skilled in the art, these instructions are unique in that they use the "SOURCE ADDRESS" and the "DESTINATION ADDRESS" data as the two operands. The DESTINATION ADDRESS is then over-written with the result of the ALU operations such that the original DESTINATION ADDRESS data is lost. Additionally, unlike currently available systems, the CPU 100 allows the result data to be placed anywhere in the system's address space and is not limited to any register or memory locations.As mentioned previously, another ALU instruction type the CPU 100 also preferably supports is the MOAI, which preferably is in the following format: [ALU] [NEXT ADDRESS] [SOURCE ADDRESS 1] [SOURCE ADDRESS 2] [DESTINATION ADDRESS].As for the previous ALU instruction formats, the OP field encodes the desired logical or arithmetic function. Additionally, the SOURCE ADDRESS 1 and SOURCE ADDRESS 2 fields preferably specify the locations within a memory or similar data storage device where the operands, upon which the ALU operation is to be performed, are located. Those skilled in the art will appreciate the various methods by which an OP field may encode an ALU operation or other operations and the methods by which locations for operands may be designated. Additionally, while the CPU 100 is herein described with reference to the aforementioned instruction types, it is to be appreciated that the CPU 100 is not limited to a specific instruction format, instruction length, or any other parameter and may be configured, as necessary, to process any instruction desired.The CPU 100 preferably controls various operations in larger systems, such as controlling the Input/Output bus, searching memory, processing video and audio data files, and various other functions. However, the CPU 100 is not limited to playing only a supportive role. The CPU 100 may be suitably configured to provide any processing function desired in any system, with those skilled in the art appreciating the various modifications, if any, which may be necessary to enable the CPU 100 to provide such data processing capabilities.The CPU 100 may also be implemented within a memory array itself. Due to the significant savings in size realized by the elimination of registers, the CPU 100 may be configured to reside within a "chip" containing a memory array (for example, RAM or ROM). 
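The special dual operand behavior described above, in which the data at the SOURCE ADDRESS and at the DESTINATION ADDRESS serve as the two operands and the destination is overwritten, might be modeled as follows; the dictionary memory and the operation subset are assumptions of the sketch.

```python
# Hypothetical model of a special dual operand ALU instruction: the data at
# SOURCE ADDRESS and at DESTINATION ADDRESS are the operands, and the result
# overwrites the original DESTINATION ADDRESS data.

def special_dual_operand(memory, op, source_address, destination_address):
    a = memory[source_address]
    b = memory[destination_address]
    result = {"ADD": a + b, "SUB": a - b, "AND": a & b, "OR": a | b}[op]
    memory[destination_address] = result   # original destination data is lost
    return result

memory = {0x20: 6, 0x28: 10}
special_dual_operand(memory, "ADD", 0x20, 0x28)   # memory[0x28] is now 16
```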
Additionally, since the CPU 100 need not include registers (which generally come in fixed word lengths), by combining multiple CPUs 100 together, the CPUs 100 may be suitably configured to process data streams of any length.A process for implementing an instruction utilizing the CPU 100 is shown in FIG. 2, with reference also to the hardware design shown in FIG. 1. As shown in FIG. 2, the process by which the CPU 100 provides operations without the use of registers preferably begins when a RESET signal is received. The RESET signal suitably instructs the CPU 100 to restart processing. As shown in FIG. 1, the RESET signal is preferably received at step 200 by the Con/Dec 106, the ALU 102, the IAR 104, the FU 110, and the Zero and Carry flip-flops 118 and 116, respectively. Those skilled in the art appreciate that a CPU may be interrupted in various manners in order to begin processing a new instruction. Similarly, those skilled in the art also appreciate that an interrupt or reset signal may be received by numerous components in a CPU or system to reset a system as necessary. The CPU 100 suitably supports resets/interrupts when necessary to initiate new processing.Upon receiving a RESET signal at step 202, the Address Multiplexer (AddMux) 108 determines whether a signal is present from the Start_Vector 122. The Start_Vector 122, when activated, provides an address for a location in a storage device where an instruction to be implemented resides. When an address is being provided by the Start_Vector 122, the AddMux 108 preferably utilizes the address provided by the Start_Vector 122 as the location from which the next instruction is to be fetched. When an address is not being provided by the Start_Vector 122, the AddMux 108 preferably uses the address provided in the previous instruction's NEXT ADDRESS field, which is provided to the AddMux 108 on the NEXT_ADDR line 124.Upon receiving the address designating the location of the next instruction, the FU 110 suitably contacts the memory storage device and retrieves the desired data stream. The FU 110 first breaks out the various addresses and instructions (opcodes) specified in the data stream and sends these addresses/instructions to the IAR 104. For example, for a MOAI instruction, the IAR 104 preferably receives from the FU 110 an opcode which designates the instruction to be performed. The opcode is provided by the IAR 104 to the Con/Dec 106 via an INSTRUCTION line 136. Additionally, for a MOAI, the IAR 104 receives the NEXT ADDRESS, SOURCE ADDRESS 1, SOURCE ADDRESS 2, and DESTINATION ADDRESS, which are suitably provided to the AddMux 108 on the NEXT_ADDR 124, the SRC1/COND_ADDR 126, and the SRC2/DEST1 128 lines, respectively.The SRC1/COND_ADDR line 126 and the SRC2/DEST1 line 128 (when a two operand operation is being performed) preferably provide the addresses for the locations where the first and second operands, respectively, are stored. When only a single operand is being utilized for a given instruction, the SRC2/DEST1 line 128 preferably provides a destination address for a result of the operation. However, the operation of the CPU 100 is not limited to single and/or dual operand instructions. Those skilled in the art appreciate that additional operands may be added to or deleted from a data stream (with additional data lines being added to or deleted from the system 100 shown in FIG. 
1).Additionally, some data streams may specify a constant (for example, the value of Pi) as an operand on a SRC1 or SRC2 address line instead of specifying an address where the constant is located. The CPU 100 suitably distinguishes between addresses and operands in the SOURCE ADDRESS 1 and 2 fields and provides addresses/instructions to the IAR 104 while providing operands to the ALU 102 via the OPR1 138 and OPR2 140 lines.As mentioned previously, the IAR 104 also receives instructions/opcodes from the FU 110 which are contained within the data stream. These instructions are suitably routed by the IAR 104 to the ConDec 106 on the INSTRUCTION line 136. When the ConDec 106 receives an opcode on the INSTRUCTION line 136 from the IAR 104, the ConDec 106 suitably decodes the instruction at step 204. The decoding of opcodes by controllers is well known in the art. The CPU 100 may utilize any known or later developed method for decoding an instruction and is not limited to decoding specific types of instructions or decoding such instructions using specific procedures.In addition to providing the instruction to the controller 106 for decoding, the IAR 104 also breaks out each field of the data stream and suitably provides this information to the AddMux 108. As shown in FIG. 1, the IAR 104 preferably provides three input lines to the AddMux 108, namely the SRC2/DEST1 line 128, the SRC1/COND_ADDR line 126, and the NEXT_ADDR line 124. However, the CPU 100 may be configured such that more or fewer input lines for addresses are utilized by the AddMux 108, as necessary. For example, when a JUMP instruction is retrieved by the FU 110, a NEXT ADDRESS and a SRC1/COND_ADDR (CONDITIONAL ADDRESS) are utilized, while the SRC2/DEST1 address is not utilized by the AddMux 108.After the instruction has been decoded and the addresses provided to the AddMux 108, the CPU 100 determines at step 206 whether the instruction is a JUMP. If the instruction is a JUMP, the CPU 100 suitably fetches the next instruction from the memory location specified on the NEXT_ADDR 124 line for the current data stream or from the memory location specified on the SRC1/COND_ADDR 126 line. The AddMux 108 determines which address line to process based upon the value provided by the ConDec 106 on the SEL2 134 line. Similarly, the ConDec 106 suitably determines which address to select based upon the instruction decoded, whether the Reset 120 has been triggered, and the values provided by the Carry 116 and Zero 118 flip-flops at step 208. Additionally, the CPU 100 may suitably utilize known or future developed multiplexer and controller/decoder operations to determine from which address in memory to retrieve instructions, as necessary.As described above, when the instruction to be executed is a JUMP, the CPU 100 suitably fetches the instruction located at the JUMP address, and resumes processing by decoding the new instruction and determining whether a subsequent JUMP instruction is present at steps 202, 204 and 206. In the instance of a HALT instruction, the CPU 100 may continue to loop indefinitely until a reset is received or additional hardware, such as a comparator, determines that a HALT has occurred and suitably interrupts the system's processing.When the decoded instruction is not a JUMP, the CPU 100 continues processing by configuring the AddMux 108 to select the SRC1/COND_ADDR line 126. 
When the operand is not a constant, the CPU 100 is preferably configured such that the SRC1/COND_ADDR line 126 (or the SRC2/DEST1 line 128) designates an address for a memory location where the first/second operand is stored. At this point, the FU 110 retrieves data from the SRC1 address in memory or a similar data storage device and provides this data to the ALU 102 over the OPR1 line 138.In the CPU 100, the FU 110 retrieves variables and data parameters from memory locations. The CPU 100, however, may also be suitably configured such that a data stream provides the variables and data parameters to be utilized in processing an instruction within the data stream itself and does not require the FU 110 to retrieve the data from additional memory locations. Those skilled in the art appreciate, for example, that a data stream of 32 bits could be designated such that the first eight bits specify an operation to be performed, the second four bits specify a NEXT ADDRESS, the third eight bits specify a first operand, the fourth eight bits specify a second operand or an address, and the last four bits specify a destination where the result of the ALU operation is to be stored. The FU 110 may be suitably designed to separate such bits into their respective categories and provide such data bits to the appropriate devices which utilize the data bits.After the data variables for the first operand have been retrieved, the CPU 100 preferably determines at step 212 whether a single operand or two operands are specified in the data stream. The CPU 100 may make this determination based upon various factors including, but not limited to, the length of the data stream and the operation to be performed by the ALU and/or the controller. When two operands are specified, the ConDec 106 preferably directs the Store Multiplexer (StoreMux) 112 to select the address provided on the DEST2 line 130 as the destination for the results of the ALU operation (Block 214). Also, the ConDec 106 directs the FU 110 to retrieve from memory the value for the second operand, which is then provided to the ALU 102 via the OPR2 line 140 (Block 218). Similarly, when a single operand instruction is being processed (Block 216), the ConDec 106 preferably directs the StoreMux 112 to select the SRC2/DEST1 address as the destination for the result of the ALU operation.After the operand(s) have been retrieved from the data stream, the CPU 100 continues at step 220 by performing the specified operation. The operation to be performed by the ALU 102 is provided by the ConDec 106 via the ALU OP line 146. However, the present invention may be suitably configured such that operations/instructions are provided from the IAR 104 and/or the FU 110 directly to the ALU 102 with the appropriate control signals being provided by the ConDec 106.After the ALU 102 has performed the specified operation, the result is then moved to the selected destination address at step 222. As shown in FIG. 1, the StoreMux 112 preferably includes two input address lines, the SRC2/DEST1 line 128 and the DEST2 line 130. Additionally, a control line, SEL1 142, provides control signals from the ConDec 106 that designate which address to utilize when storing a result. Also, the ADDRESS line 144 provides an output from the StoreMux 112, which designates where in a Storage Unit 114 a result is to be recorded. 
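Pulling the steps of FIG. 2 together, a compact, hypothetical software model of the registerless execute loop is sketched below; it uses a simple dictionary memory, a small subset of operations, and omits constants and cachability bits, all of which are simplifying assumptions rather than features of the disclosure.

```python
# Hypothetical end-to-end model of the FIG. 2 flow: fetch a data stream at an
# address, decode it, either JUMP or fetch operands, run the ALU, and store the
# result directly to memory; no general-purpose registers are modeled.

ALU_OPS = {"ADD": lambda a, b: a + b, "SUB": lambda a, b: a - b,
           "AND": lambda a, b: a & b, "NOT": lambda a, b: ~a}

def step(memory, address, zero=False):
    op, next_addr, src1, src2, dest = memory[address]      # fetch and break out fields
    if op == "JC":                                          # conditional JUMP
        return src1 if zero else next_addr                  # src1 field holds COND ADDR
    a = memory[src1]                                        # operand 1 read from memory
    b = memory[src2] if src2 is not None else 0             # operand 2, if specified
    memory[dest] = ALU_OPS[op](a, b)                        # ALU result written to memory
    return next_addr                                        # every instruction ends in a JUMP

memory = {0x00: ("ADD", 0x04, 0x20, 0x24, 0x28),
          0x04: ("JC", 0x04, 0x04, None, None),             # HALT: jump to itself
          0x20: 3, 0x24: 4, 0x28: 0}
addr = step(memory, 0x00)    # executes the ADD; memory[0x28] == 7; addr == 0x04
```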
While FIG. 1 depicts two address locations from which the ConDec 106 may select to record a result, it is to be appreciated that the StoreMux 112 may be configured to support more than two addresses. Additionally, the StoreMux 112 may also be connected to multiple storage devices, including Memory 114, all of which may be suitably designated via the StoreMux 112 using techniques known in the art.Additionally, the CPU 100 provides quicker state change processing since the CPU 100 does not store data in numerous registers and thus does not have to save the contents of such registers in memory before implementing the desired state change. As mentioned previously with respect to FIG. 3A, currently available systems commonly must perform multiple steps for each register utilized by the CPU when changing states. In contrast, FIG. 3B illustrates the processing steps the CPU 100 performs when changing states. More specifically, when implementing a state change using the present invention, the CPU 100 receives a request to save the state at step 322. The CPU 100 then retrieves the NEXT ADDRESS from the data stream for the currently implemented instruction and pushes this address location into a preselected memory location, location "X" (Block 324). The CPU 100 then increments the value of the address identified as location "X" by one and verifies the NEXT ADDRESS was loaded into the X location at step 326. The CPU 100 preferably increments the value of X by one to ensure that a subsequent state change (for example, from a second task to a third task) may also be accomplished, and the NEXT ADDRESS for the second task is suitably stored before the third task is accomplished. At this point, the CPU resumes processing with the instruction for which the state change was requested (Block 328). In short, the CPU 100 preferably requires only one parameter, the NEXT ADDRESS, to be stored before a state change may be implemented.When the processing for the second task has been completed, the CPU 100 resumes the first task. FIG. 4B illustrates the process by which the CPU 100 recovers from a state change. As shown, this process preferably begins when the second task is completed and a restore state signal is generated at step 422. At this point, the CPU 100 recalls the NEXT ADDRESS from memory location "X" for the interrupted task (Block 424) and decrements the value of X by one (1) at step 426. In this manner, the present invention coordinates state changes and returns to original states regardless of the number of nested state changes that have been requested. For example, when a first task is interrupted by a second task that is interrupted by a third task, upon completing the third task, X points to a memory location in which the NEXT ADDRESS for the second task is stored. The CPU 100 transfers the NEXT ADDRESS data and decrements X by one. Once the second task is completed, the restore state indicator is activated, telling the CPU 100 to retrieve from location X the NEXT ADDRESS, which now points to the NEXT ADDRESS for the first task. As such, the CPU 100 greatly simplifies state changes, thereby allowing the system to focus more of its processing capabilities upon solving problems instead of swapping and saving data.As described herein, the CPU 100 may be utilized in various embodiments either as a stand-alone processor or in parallel with various other processors. 
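A minimal sketch of the single-address state change scheme of FIGS. 3B and 4B described above is shown below; the save area addressed through location "X" is modeled as a simple list, which is an assumption of the sketch.

```python
# Hypothetical model of the FIG. 3B/4B state change: only the NEXT ADDRESS of
# the interrupted task is saved, and restoring a state pops it back.

saved_next_addresses = []                    # the region addressed through location "X"

def save_state(next_address):                # FIG. 3B, blocks 322-328
    saved_next_addresses.append(next_address)    # push NEXT ADDRESS and "increment X"

def restore_state():                         # FIG. 4B, blocks 422-426
    return saved_next_addresses.pop()            # recall NEXT ADDRESS and "decrement X"

save_state(0x40)                 # first task interrupted; its resume point is 0x40
save_state(0x80)                 # second task interrupted by a third; resume point is 0x80
assert restore_state() == 0x80   # third task done: resume the second task
assert restore_state() == 0x40   # second task done: resume the first task
```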
In another embodiment of the present invention, the CPU 100 is utilized in memory as a built-in self-test device. Instead of utilizing prior art processes of burning memory chips, testing the chip on a testing stand, fixing errors in the memory chip, packaging the chip, testing the chip again, and, if acceptable, shipping the chip, the present invention may be built onto the chip itself and used to test the memory device. The CPU 100 is aptly suited for verifying a memory device because it directly reads from and writes to memory without requiring extra processing steps, such as temporarily storing information in registers.

While the present invention has been described and illustrated with reference to a preferred embodiment, it will be appreciated by those skilled in the art that changes in the above descriptions or illustrations may be made with respect to form or detail without departing from the spirit or scope of the present invention as expressed in the following claims. |
Disclosed embodiments include silicon interconnect bridges that are in a molded frame, where the molded frame includes passive devices and the silicon interconnect bridge includes through-silicon vias that couple to a redistribution layer on both the silicon interconnect bridge and the molded frame. |
CLAIMS1. An integrated-circuit package substrate, comprising: a silicon interconnect bridge in a molding-mass frame, wherein the molding-mass frame has a die side and a package side, and wherein the silicon interconnect bridge shares the die side; a passive device in the molding-mass frame, wherein the silicon interconnect bridge and the passive device, occupy at least some of the same vertical space encompassed by the molding- mass frame; and a redistribution layer on the die side, wherein the redistribution layer is coupled to the passive device and to a through-silicon via in the silicon interconnect bridge, and wherein the through-silicon via communicates to the package side.2. The integrated-circuit package substrate of claim 1, further including: a package substrate including a bridge side and a land side, and a ground (VSS) plane in a dielectric material and a power (VCC) plane in the dielectric material; and where the bridge side of the package substrate is coupled to the package side through an electrical bump.3. The integrated-circuit package of claim 1, wherein the redistribution layer is coupled at the die side to a first integrated-circuit die by a first electrical bump and to a subsequent integrated-circuit die by a subsequent electrical bump, and wherein communication between the first integrated-circuit die and the subsequent integrated-circuit die is by a trace in the redistribution layer.4. The integrated-circuit package of claim 1 , wherein the redistribution layer is coupled at the die side to a first integrated-circuit die by a first electrical bump and to a subsequent integrated-circuit die by a subsequent electrical bump, and wherein communication between the first integrated-circuit die and the subsequent integrated-circuit die is by the through- silicon via in the silicon interconnect bridge.5. The integrated-circuit package of claim 1, wherein the redistribution layer is coupled at the die side to a first integrated-circuit die by a first electrical bump and to a subsequent integrated-circuit die by a subsequent electrical bump, wherein communication between the first integrated-circuit die and the subsequent integrated-circuit die is by the through-silicon via in the silicon interconnect bridge, and also by a trace in the redistribution layer.6. The integrated-circuit package of claim 1, wherein the passive device is both coupled to the die side and to the package side.7. The integrated-circuit package of claim 1, wherein the passive device is both coupled to the die side and to the package side, wherein the passive device is a first passive device, further including a subsequent passive device in the molding-mass frame, wherein the first passive device and the subsequent passive device are on opposite sides of the silicon interconnect bridge.8. The integrated-circuit package of claim 1, wherein the passive device is a first capacitor, further including a subsequent capacitor, and wherein the first and subsequent capacitor are contacted at respective power terminals, to form a power rail.9. 
The integrated-circuit package of claim 1, wherein the passive device is a first capacitor, further including a subsequent capacitor, and wherein the first and subsequent capacitor are contacted at respective power terminals, to form a power rail, further including: a package substrate including a bridge side and a land side, and a ground (VSS) plane in a dielectric material and a power (VCC) plane in the dielectric material; where the bridge side of the package substrate is coupled to the package side through an electrical bump; and wherein the VCCplane is coupled to the power rail.10. The integrated-circuit package of claim 1, wherein the passive device is a first capacitor, further including a subsequent capacitor, and wherein the first and subsequent capacitor are contacted at respective power terminals, to form a power rail, further including: a third capacitor, wherein the third capacitor contacts the subsequent capacitor to form a VSSrail, and wherein the subsequent capacitor is stacked on the first capacitor and the third capacitor.11. The integrated-circuit package of claim 1, wherein the passive device is a first capacitor, further including a subsequent capacitor, and wherein the first and subsequent capacitor are contacted at respective power terminals, to form a power rail; a third capacitor, wherein the third capacitor contacts the subsequent capacitor to form a VSSrail, and wherein the subsequent capacitor is stacked on the first capacitor and the third capacitor; a package substrate including a bridge side and a land side, and a ground (VSS) plane in a dielectric material and a power (VCC) plane in the dielectric material;
where the bridge side of the package substrate is coupled to the package side through an electrical bump; wherein the VCC plane is coupled to the power rail; and wherein the VSS plane is coupled to the VSS rail.12. An integrated-circuit package substrate, comprising: a first silicon interconnect bridge in a molding-mass frame, wherein the molding-mass frame has a die side and a package side, and wherein the first silicon interconnect bridge shares the die side; a subsequent silicon interconnect bridge in the molding-mass frame, wherein some molding-mass material of the molding-mass frame, spaces apart the first silicon interconnect bridge from the subsequent silicon interconnect bridge, and wherein the subsequent silicon interconnect bridge also shares the die side; an interstitial passive device in the molding-mass material between the first silicon interconnect bridge and the subsequent silicon interconnect bridge, wherein the first and subsequent silicon interconnect bridges and the interstitial passive device, occupy at least some of the same vertical space encompassed by the molding-mass frame; a redistribution layer on the die side, wherein the redistribution layer is coupled to the passive device and to a first through-silicon via in the first silicon interconnect bridge, and wherein the first through-silicon via communicates to the package side; and wherein the redistribution layer is coupled to the passive device and to a subsequent through-silicon via in the subsequent silicon interconnect bridge, and wherein the subsequent through-silicon via communicates to the package side.13. The integrated-circuit package substrate of claim 12, further including: a first capacitor in the molding-mass material and adjacent the first silicon interconnect bridge and opposite the interstitial passive device; a subsequent capacitor in the molding-mass material and adjacent the subsequent silicon interconnect bridge and opposite the interstitial passive device.14. The integrated-circuit package substrate of claim 12, further including: a package substrate including a bridge side and a land side, and a ground (VSS) plane in a dielectric material and a power (VCC) plane in the dielectric material; where the bridge side of the package substrate is coupled to the package side through an electrical bump;
wherein the redistribution layer is coupled at the die side to a first integrated-circuit die by a first electrical bump and to a subsequent integrated-circuit die by a subsequent electrical bump, and wherein communication between the first integrated-circuit die and the subsequent integrated-circuit die is by a trace in the redistribution layer.15. The integrated-circuit package of claim 12, wherein the redistribution layer is coupled at the die side to a first integrated-circuit die by a first electrical bump and to a subsequent integrated-circuit die by a subsequent electrical bump, and wherein communication between the first integrated-circuit die and the subsequent integrated-circuit die is by a trace in the redistribution layer.16. The integrated-circuit package of claim 12, wherein the passive device is a capacitor that is both coupled to the die side and to the package side.17. The integrated-circuit package substrate of claim 12, further including: a first-side first capacitor and a first-side subsequent capacitor in the molding-mass material and adjacent the first silicon interconnect bridge and opposite the interstitial passive device, wherein the first-side first capacitor and a first-side subsequent capacitor are contacted at respective power terminals, to form a first power rail; a subsequent-side first capacitor and a subsequent-side subsequent capacitor in the molding-mass material and adjacent the subsequent silicon interconnect bridge and opposite the interstitial passive device, wherein the subsequent-side first capacitor and a subsequent-side subsequent capacitor are contacted at respective power terminals, to form a subsequent power rail.18. The integrated-circuit package of claim 17, further including: a package substrate including a bridge side and a land side, and a ground (VSS) plane in a dielectric material and a power (VCC) plane in the dielectric material; where the bridge side of the package substrate is coupled to the package side through an electrical bump; and wherein the VCCplane is coupled to at least one of the first power rail and the subsequent power rail.19. The integrated-circuit package of claim 18, further including:
a first-side third capacitor, wherein the first-side third capacitor contacts the first-side subsequent capacitor to form a VSSrail, and wherein the first-side subsequent capacitor is stacked on the first-side first capacitor and the first-side third capacitor.20. A computing system comprising: a silicon interconnect bridge in a molding-mass frame, wherein the molding-mass frame has a die side and a package side, and wherein the silicon interconnect bridge shares the die side; a first integrated-circuit die on the die side, wherein the first IC die is a logic processor; a subsequent integrated-circuit die on the die side and adjacent the first integrated-circuit die, wherein the subsequent IC die is a graphics processor; a passive device in the molding-mass frame, wherein the silicon interconnect bridge and the passive device, occupy at least some of the same vertical space encompassed by the molding- mass frame; a redistribution layer on the die side, wherein the redistribution layer is coupled to the passive device and to a through-silicon via in the silicon interconnect bridge, and wherein the through-silicon via communicates to the package side; a package substrate including a bridge side and a land side, and a ground (VSS) plane in a dielectric material and a power (VCC) plane in the dielectric material; where the bridge side of the package substrate is coupled to the package side through an electrical bump; wherein the redistribution layer is coupled at the die side to the first IC die by a first electrical bump and to the subsequent IC die by a subsequent electrical bump, and wherein communication between the first integrated-circuit die and the subsequent integrated-circuit die is by a trace in the redistribution layer; and wherein the molded silicon-interconnect bridge is part of a chipset.21. The computing system of claim 20, further including a third IC die on the redistribution layer, wherein the third IC die is a memory die; and a board coupled to the package substrate at the land side, by an electrical-bump array.22. The computing system of claim 21, wherein the board includes an external shell that is a dielectric material, and wherein the external shell is at least part of the exterior of an apparatus selected from a mobile computing system and a drone. |
MOLDED SILICON INTERCONNECTS IN BRIDGES FOR INTEGRATED-CIRCUIT PACKAGES

PRIORITY APPLICATION

This application claims the benefit of priority to Malaysian Application Serial Number PI2019005034, filed August 30, 2019, which is incorporated herein by reference in its entirety.

FIELD

This disclosure relates to power delivery for integrated-circuit device packages.

BACKGROUND

Integration of multiple integrated-circuit chips within a package, for example a three-dimensional (3D) stacked integrated-circuit device, has power-delivery issues such as undesired inductance loops and impedance peak profiles.

BRIEF DESCRIPTION OF THE DRAWINGS

Disclosed embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings where like reference numerals may refer to similar elements, in which:

Figure 1A is a cross-section elevation of an integrated-circuit package apparatus with a molded silicon interposer bridge according to an embodiment;

Figure 1B is a top plan of portions of the integrated-circuit package apparatus depicted in Figure 1A according to an embodiment;

Figure 1C is a bottom view of the molded silicon-interconnect bridge depicted in Figure 1A according to an embodiment;

Figure 2A is a top plan of portions of an integrated-circuit package apparatus similar to that depicted in Figure 1B according to an embodiment;

Figure 2B is a perspective elevation of stacked passive devices according to an embodiment;

Figure 2C is a cross-section elevation of an array of stacked passive devices in a molding mass material according to several embodiments;

Figure 3 is a top plan of portions of a molded silicon-interconnect bridge according to several embodiments;
Figure 4 is a top plan of portions of a molded silicon-interconnect bridge according to several embodiments;

Figure 5A is a cross-section elevation of an integrated-circuit package apparatus with dual molded silicon interposer bridges and an interstitial array of passive devices according to an embodiment;

Figure 5B is a top plan of portions of the integrated-circuit package apparatus depicted in Figure 5A including interstitial passive devices according to an embodiment;

Figure 5C is a bottom view of the molded silicon-bridge interconnect depicted in Figure 5A according to an embodiment;

Figures 6A through 6F represent fabrication of molded silicon-bridge interconnects for assembly to at least two IC dice and to a package substrate according to several embodiments; and

Figure 7 is included to show an example of a higher-level device application for the disclosed embodiments.

DETAILED DESCRIPTION

Disclosed embodiments include molded silicon-interconnect bridges (MSiBs) that interface between integrated-circuit package substrates and integrated-circuit dice. Passive devices such as decoupling capacitors are embedded in the MSiBs such that responses to power-delivery demand changes are made faster by the proximate location of the passive devices. In an embodiment, the capacitor is a multi-layer ceramic capacitor. In an embodiment, the capacitor is a silicon capacitor.

Ball-grid array densities are improved for input-output (I/O) density changes where keep-out-zone issues are addressed. Location of the passive devices closer to the integrated-circuit dice relieves integrated-circuit package substrate real estate issues to increase interconnect densities.

Power integrity of electrical performance is achieved by reduced package inductance looping. Decoupling capacitors are directly coupled to power rails (VCC) and to ground (VSS), which lowers power delivery network impedance (ZPDN) and jitter behaviors. Location of the MSiBs on a die side of an integrated-circuit package substrate provides close CTE (coefficient of thermal expansion) mismatch tolerances.

The molded silicon-interconnect bridge embodiments use the term "silicon" as a genus for semiconductive material such as silicon or III-V semiconductive material, with useful doping variations according to several embodiments. In an embodiment, the molded silicon-interconnect bridge embodiments use the term "silicon" as a genus for inorganic glass materials
with useful doping variations to closely match coefficients of thermal expansions of integrated- circuit dice that use the MSiB embodiments, according to several embodiments.Figure 1A is a cross-section elevation of an integrated-circuit package apparatus 100 with a molded silicon interposer bridge according to an embodiment. A silicon interconnect bridge in a molding-mass frame 110 includes a die side 111 and a package side 109. A silicon interconnect bridge 112 is in a molding-mass frame 114 and at least part of the silicon interconnect bridge 112 and the molding-mass frame 114 share the die side 111. In an embodiment, the assembly may be referred to as a molded silicon-interconnect bridge (MSiB) 110.A passive device 116 is in the molding-mass frame 114 and the silicon interconnect bridge 112 and the passive device 116, occupy at least some of the same vertical space encompassed by the molding-mass frame 114. A redistribution layer (RDL)118 is on the die side 111 and the redistribution layer 118 is coupled to the passive device 116 and to a through- silicon via 120 in the silicon interconnect bridge 112. The through-silicon via 120 communicates to the package side 109.In an embodiment, the passive device 116 is both coupled to the die side 111 and to the package side 109. At the package side 109, the passive device 116 is coupled by an electrical interconnect 122 to an electrical bump in an array, one electrical bump of which is indicated by reference number 124.In an embodiment, the passive device 116 is a first passive device 116 and a subsequent passive device 126 is in the molding-mass frame 114, such that the two passive devices 116 and 126 are on opposite sides of the silicon interconnect bridge 112. As illustrated in an embodiment, the passive devices 116 and 126 are decoupling capacitors. in an embodiment, a first integrated-circuit die 10 is on the redistribution layer 118 and a subsequent integrated-circuit die 20 is also on the redistribution layer 118, where the two IC dice 10 and 20 are side-by-side.Interconnection between the two IC dice 10 and 20 is through an inter-die trace 128 in the RDL 118. in an embodiment, interconnection between the two IC dice 10 and 20 is by coupling through the through-silicon via 120. In an embodiment, coupling between the two IC dice 10 and 20 is through an inter-die trace 128 in the RDL 118. In an embodiment, interconnection between the two IC dice 10 and 20 is both by an inter-die trace 128 in the RDL 118 and by the TSV 120.An integrated-circuit (IC) package substrate 130 includes a bridge side 131 and a land side 129. In an embodiment, the IC package substrate 130 includes interconnections on either side of and passing through a package core 132. A bridge-side redistribution layer (RDL) 134
and a land-side RDL 136, as well as through-core interconnects 138 provide electrical communication between the bridge side 131 and the land side 129 according to an embodiment. Within the IC package substrate 130 is a ground (VSS) plane 140 in the dielectric material of the IC package substrate 130, as well as a power (VCC) plane 142 in the dielectric material.

In an embodiment, the land side 129 faces a board 144 such as a motherboard in a computing system, and an electrical bump array 146 is seen being brought toward the board 144. In an embodiment, the board 144 has an external shell 148 that provides at least one of physical and electrical insulative protection for components on the board 144. For example, the external shell 148 may be part of a hand-held computing system such as a communication device. In an embodiment, the external shell 148 is part of the exterior of a mobile computing platform such as a drone.

Figure 1B is a top plan of portions of the integrated-circuit package apparatus 101 depicted in Figure 1A according to an embodiment. The integrated-circuit package apparatus 101 shows the first and subsequent IC dice 10 and 20 on the RDL 118. The passive devices 116 and 126 are seen along a section line A - - A and the passive devices 116 and 126 are depicted in ghosted lines below the RDL 118. Further below the RDL 118, six columns of the electrical bump 124 (in ghosted lines) are also depicted in an array on the bridge side 131 of the IC package substrate 130. Two columns each of bumps 124 are below the passive devices 116 and 126 (see Figure 1A) and they are not depicted in Figure 1B.

Further in ghosted lines, the material of the molding-mass frame 114 occupies approximately the same perimeter as the RDL 118. Consequently, the silicon interconnect bridge 112 is framed by the material of the molding-mass frame 114.

Figure 1C is a bottom view of the molded silicon-interconnect bridge 110 depicted in Figure 1A according to an embodiment. The package side 109 of the mold material that makes up the molding-mass frame 114 exhibits the several capacitors 116 and 126, which are depicted in ghosted lines as they may be embedded beyond the package side 109 of the mold material that makes up the molding-mass frame 114. As illustrated in Figure 1A, 10 electrical bumps 124 are arrayed in columns that include seven rows. These electrical bumps 124 are on the package side 109 and are in solid lines as they are not behind the package side 109 of the MSiB 110.

Figure 2A is a top plan of portions of an integrated-circuit package apparatus 201 similar to that depicted in Figure 1B according to an embodiment. The integrated-circuit package apparatus 201 shows respective first and subsequent IC dice 10 and 20 on an RDL 218 (depicted as an upper surface 218 of the RDL 218). Passive devices 216 and 226 are seen along a section line A - - A’, that is a similar placeholder as section line A - - A’ seen in Figure 1B, with differences as herein discussed.
Passive devices 216, 217, 226 and 227 are arrayed in two rows on respective opposite sides of the embedded silicon-interconnect bridge 212, where passive devices 217 and 227 are respectively stacked on passive devices 216 and 226 within the mold material of the molding-mass frame 214, and the passive devices 216 and 217, and 226 and 227 are depicted in ghosted lines below the RDL 218. Further below the RDL 218, six columns of the electrical bump 224 (in ghosted lines) are also depicted in an array on the bridge side 231 of the IC package substrate 230. At least one column each of bumps 224 are below the passive devices 216 and 226 (see similar bump array 124 in Figure 1A) and they are not depicted in Figure 2A.

By viewing the passive devices 216 and 217 in seriatim repetition on the first side of the MSiB 212, one observes a first-side third capacitor, which is a capacitor 216 that contacts a first-side second capacitor 217, which in turn contacts a first-side first capacitor; a different occurrence of 216. Similarly, by viewing the passive devices 226 and 227 in seriatim repetition on the subsequent side of the MSiB 212, one observes a subsequent-side third capacitor, which is a capacitor 226 that contacts a subsequent-side second capacitor 227, which in turn contacts a subsequent-side first capacitor; a different occurrence of 226.

Further in ghosted lines, the material of the molding-mass frame 214 occupies approximately the same perimeter as the RDL 218. Consequently, the silicon interconnect bridge 212 is framed by the material of the molding-mass frame 214.

Figure 2B is a perspective elevation of stacked passive devices according to an embodiment. Two bottom capacitors 216 are configured with a top capacitor 217, where the darker-shaded electrodes represent power terminals, and the lighter-shaded electrodes represent ground or source terminals. Consequently, a power rail is depicted where respective power terminals of the bottom 216 and top capacitor 217 make contact. Similarly, a ground (VSS) rail is depicted where the respective ground terminals of the bottom 216 and top capacitor 217 make contact.

Figure 2C is a cross-section elevation of an array of stacked passive devices in a molding mass material according to several embodiments. In a multi-die embodiment, different potentials are used for various dice, such as a V1 for a first die, a VN for a subsequent die, and a V3 for a third die. In an embodiment, different voltages are used in different parts of a given die. In a non-limiting example embodiment, a stacked array of top 217 and bottom 216 capacitors are configured in a molding mass 214 where a first power rail 240 is associated with a voltage of 1.0 V, a subsequent power rail 242 is associated with a voltage of 1.5 V, and a third power rail 244 is associated with a voltage of 1.8 V. A metal build-up layer 246 is also configured, such that when assembled as an integral part of, e.g., the MSiB 210 of the integrated-circuit package apparatus 201 depicted in Figure 2A, several different voltages may be delivered to composite
power rails by use of stacked embedded capacitors. In an embodiment, the metal build-up layer 246 is part of the redistribution layer 218.

In an embodiment, a first power plane 241 is related to the first power rail 240 within the metal build-up layer 246. In an embodiment, a subsequent power plane 243 is related to the subsequent power rail 242 within the metal build-up layer 246. In an embodiment, a third power plane 245 is related to (but not connected in the drawing) the third power rail 244 within the metal build-up layer 246. In an embodiment, a ground plane 248 provides a ground voltage (VSS) reference access or current return path to several devices on the lee side of electrical-potential usage.

Figure 3 is a top plan of portions of a molded silicon-interconnect bridge 310 according to several embodiments. The MSiB 310 includes stacked capacitors 316 and 317, and 326 and 327, that are arrayed within molding-material 314 that makes up a molding-material frame for the MSiB 310.

As illustrated, and in similar stacking fashion depicted in Figures 2A, 2B and 2C, power rails are formed as well as ground rails by contacting appropriate power terminals to power terminals, and ground terminals to ground terminals. Whereas the stacking fashion is in semi-circular, three-capacitor arrangements, the several configurations have different voltages for power rails according to an embodiment. For example, in an embodiment, the two power rails (all capacitors 316 and 317) that are seen on the left of the silicon-interconnect bridge 312 have a voltage of 1 V. The power rail (the capacitors 326 and 327 at the upper right-hand corner) has a voltage of 1.5 V. And the power rail (the capacitors 326 and 327 at the lower right-hand corner) has a voltage of 1.8 V.

In an embodiment, a different occurrence of top capacitors 317’ and 327’ are disposed on the bottom capacitors 316 and 326 opposite the top capacitors 317 and 327 to form a four-capacitor stacked arrangement in a circular configuration. Improved real-estate utilization can be achieved through both semi-circular and circular (or closed loop) stacked capacitor arrangements.

The molded silicon-interconnect bridge 310 shows the first and subsequent IC dice 10 and 20 on an RDL 318 (where the RDL 318 has substantially the same perimeter as the molding-material 314). The reference number 318 is depicted as an upper surface of the RDL 318. The passive devices 316 and 317, and 326 and 327 are depicted in ghosted lines below the RDL 318. Further below the RDL 318, six columns of the electrical bump 324 (in ghosted lines) are also depicted in an array on the package side 309 of the MSiB 310. At least two columns each of electrical bumps 324 are below the passive devices 316 and 326 and they are not depicted in Figure 3.
Further in ghosted lines, the material of the molding-mass frame 314 occupies approximately the same perimeter as the RDL 318. Consequently, the silicon interconnect bridge 312 is framed by the material of the molding-mass frame 314.

Figure 4 is a top plan of portions of a molded silicon-interconnect bridge 410 according to several embodiments. The MSiB 410 includes stacked capacitors 416 and 417, and 426 and 427, that are arrayed within molding-material 414 that makes up a molding-material frame for the MSiB 410.

As illustrated, and in similar stacking fashion depicted in Figures 2A, 2B, 2C and 3, power rails are formed as well as ground rails by contacting appropriate power terminals to power terminals, and ground terminals to ground terminals. Whereas the stacking fashion is in semi-serpentine, seven-capacitor arrangements, the several configurations have different voltages for power rails according to an embodiment. For example, the four power rails (all capacitors 416 and 417) that are seen on the top part of the drawing of the silicon-interconnect bridge 412 have a voltage of 1 V. The power rail (the capacitors 426 and 427 at the bottom left to an approximate midline 480) has a voltage of 1.5 V. And the power rail (the capacitors 426 and 427 at the bottom right to the approximate midline 480) has a voltage of 1.8 V.

The molded silicon-interconnect bridge 410 shows the first and subsequent IC dice 10 and 20 on an RDL 418 (where the RDL 418 is depicted as a top surface, which has substantially the same perimeter as the molding-material 414). The passive devices 416 and 417, and 426 and 427 are depicted in ghosted lines below the RDL 418. Further below the RDL 418, 26 columns of the electrical bump 424 (in ghosted lines) are also depicted in an array on the package side (not pictured) of the MSiB 410. More electrical bumps 424 may be located below the passive devices 416 and 417, and 426 and 427, and they are not depicted in Figure 4.

Further in ghosted lines, the material of the molding-mass frame 414 occupies approximately the same perimeter as the RDL 418. Consequently, the silicon interconnect bridge 412 is framed by the material of the molding-mass frame 414.

Figure 5A is a cross-section elevation of an integrated-circuit package apparatus 501 with dual molded silicon interposer bridges and an interstitial array of passive devices according to an embodiment. At least two silicon interconnect bridges in a molding-mass frame 510 includes a die side 511 and a package side 509. At least two silicon interconnect bridges 512 and 512’ are in a molding-mass frame 514 and at least part of the silicon interconnect bridges 512 and 512’ and the molding-mass frame 514 share the die side 511. In an embodiment, the assembly may be referred to as a molded silicon-interconnect bridge (MSiB) 510.

A first passive device 516 is in the molding-mass frame 514 and the first silicon interconnect bridge 512 and the first passive device 516, occupy at least some of the same
vertical space encompassed by the molding-mass frame 514. A subsequent passive device 526 is in the molding-mass frame 514 and the subsequent silicon interconnect bridge 512’ and the subsequent passive device 526, occupy at least some of the same vertical space encompassed by the molding-mass frame 514.In an embodiment, an interstitial passive device 576 is located within the molding material of the molding-mass frame 514, between the first silicon interconnect bridge 512 and the subsequent silicon interconnect bridge 512’. Consequently, a multiple-bridge, molded silicon-interconnect bridge (MBMSiB) 510 is achieved with an interstitial passive device. The MBMSiB 510 creates useful decoupling-capacitor locations to support the first IC die 10 and the subsequent IC die 20.In an embodiment, any of the stacked passive device embodiments are located within the molding-mass frame 514. In an embodiment, the C- or circular-stacked capacitors 316 and 317 in Figure 3 are located at the Z-level of the X-Y location of the first passive device 516. In an embodiment, all capacitors 516 and 526 are C- or circular-stacked capacitors. In an embodiment, the stacked-capacitor string 216 and 217 embodiments in Figure 2A, are located at the Z-level of the X-Y location of the interstitial capacitor 576. In an embodiment, the semi- serpentine stacked capacitors 416 and 417 in Figure 4 are located at the Z-level of the X-Y location of the subsequent passive device 526. In an embodiment all capacitors 516 and 526 are semi-serpentine stacked capacitors. Similarly, at least three power rails such as a 1 V rail, a 1.5 V rail and a 1.8 V rail may be configured for each the capacitor locations 516, 576 and 526.A redistribution layer 518 is on the die side 511 and the redistribution layer 518 is coupled to the first passive device 516 and to a first through-silicon via 520 in the first silicon interconnect bridge 512. The first through-silicon via 520 communicates to the package side 509. Similarly, the redistribution layer 518 is also coupled to the subsequent passive device 526 and to a subsequent through-silicon via 521 in the subsequent silicon interconnect bridge 512.In an embodiment, the first passive device 516 is both coupled to the die side 511 and to the package side 509. Similarly in an embodiment, the subsequent passive device 526 is both coupled to the die side 511 and to the package side 509. At the package side 509, the first passive device 516 is coupled by an electrical interconnect 522 to an electrical bump in an array, one electrical bump of which is indicated by reference number 524.In an embodiment, the passive device 516 is a first passive device 516 and the subsequent passive device 526 is in the molding-mass frame 514, such that the two passive devices 516 and 526 are on opposite sides of the respective first and subsequent silicon interconnect bridges 512 and 512’. As illustrated in an embodiment, the passive devices 516 and 526 are decoupling capacitors.
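The decoupling role of these embedded capacitors can be illustrated with a first-order model that is not part of the disclosure: treating one capacitor and its connection to a die as a series R-L-C branch, with equivalent series resistance ESR, loop inductance L_loop, and capacitance C (generic symbols assumed for illustration only), the branch impedance presented to the power delivery network is approximately

\[
\lvert Z(f) \rvert \approx \sqrt{\mathrm{ESR}^{2} + \left(2\pi f\, L_{\mathrm{loop}} - \frac{1}{2\pi f\, C}\right)^{2}},
\qquad
f_{\mathrm{res}} = \frac{1}{2\pi \sqrt{L_{\mathrm{loop}}\, C}} .
\]

Under this sketch, embedding the capacitors in the molding-mass frame next to the dice shortens the current loop and reduces L_loop, which raises f_res and lowers the inductive rise of |Z(f)| above resonance; this is consistent with the reduced ZPDN and reduced impedance peaking described for the MSiB embodiments.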
In an embodiment, a first integrated-circuit die 10 is on the redistribution layer 518 and a subsequent integrated-circuit die 20 is also on the redistribution layer 518, where the two IC dice 10 and 20 are side-by-side.Interconnection between the two IC dice 10 and 20 is through an inter-die trace 528 in the RDL 518. In an embodiment, interconnection between the two IC dice 10 and 20 is by coupling through at least one of the first through-silicon via 520 and the subsequent TSV 521. In an embodiment, coupling between the two IC dice 10 and 20 is through the inter-die trace 528 in the RDL 518. In an embodiment, interconnection between the two IC dice 10 and 20 is both by an inter-die trace 528 in the RDL, 518 and by at least one of the TSVs 520 and 521.An integrated-circuit (IC) package substrate 530 includes a bridge side 531 and a land side 529. In an embodiment, the IC package substrate 530 includes interconnections on either side of and passing through a package core 532. A bridge-side redistribution layer (RDL) 534 and a land-side RDL. 536, as well as through-core interconnects 538 provide electrical communication between the bridge side 531 and the land side 529 according to an embodiment. Within the IC package substrate 530 is a ground (VSS) plane 540 in the dielectric material of the IC package substrate 530, as well as a plurality of power (VCC) planes 542 in the dielectric material.In an embodiment, the land side 529 faces a board 544 such as a motherboard in a computing system, and an electrical bump array 546 is seen being brought toward the board 544. In an embodiment, the board 544 has an external shell 548 that provides at least one of physical and electrical insulative protection for components on the board 544. For example, the external shell 548 may be part of a hand-held computing system such as a communication device. In an embodiment, the external shell 548 is part of the exterior of a mobile computing platform such as a drone.Figure 5B is a top plan of portions of the integrated-circuit package apparatus 501 depicted in Figure 5A including interstitial passive devices according to an embodiment. The integrated-circuit package apparatus 502 shows the first and subsequent IC dice 10 and 20 on the RDL 518. The passive devices 516, 576 and 526 are seen along a section line A - - A’, and the passive devices 516, 576, 577 and 526 are depicted in ghosted lines below the RDL 518. Further below the RDL 518, five columns of the electrical bumps 524 (in ghosted lines) are also depicted in an array on the bridge side 531 of the IC package substrate 530. One column of bumps 524 is below- the interstitial passive device 576 (see Figure 5B below). TWO columns each of bumps 524 are below the passive devices 516 and 526 (see Figure 5A) and they are not depicted in Figure 5B.
Further in ghosted lines, the material of the molding-mass frame 514 occupies approximately the same perimeter as the RDL 518 and it houses the several passive devices 516, 576, 577 and 526. Consequently, the respective first and subsequent silicon interconnect bridges 512 and 512’ are framed by the material of the molding-mass frame 514.

Figure 5C is a bottom view of the molded silicon-bridge interconnect depicted in Figure 5A according to an embodiment. The package side 509 of the mold material that makes up the molding-mass frame 514 exhibits the several capacitors 516, 576, 577 and 526, which are depicted in ghosted lines as they may be embedded beyond the package side 509 of the mold material that makes up the molding-mass frame 514. As illustrated in Figure 5A, 10 electrical bumps 524 are arrayed in columns that include seven rows. These electrical bumps 524 are on the package side 509 and are in solid lines as they are not behind the package side 509 of the MBMSiB 510.

Figures 6A through 6F represent fabrication of molded silicon-bridge interconnects for assembly to at least two IC dice and to a package substrate according to several embodiments. Although only two IC dice, e.g., 10 and 20, are shown, four dice in a chipset may also be seated above the die sides on the RDLs, e.g., 118, 218, 318, 418 and 518. For example, a chipset of a processor die 10 and a graphics die 20 is complemented with a third die such as a memory die that has a memory-controller hub or a platform-controller hub, and a fourth die such as a baseband processor. These four or more IC dice may be assembled 2.5-D style upon a given RDL according to several embodiments.

At Figure 6A, a cross-section elevation of a molded silicon-interconnect bridge during assembly 601 is depicted according to an embodiment. A first carrier 666 supports a first passive device 616 and a subsequent passive device 626, and a silicon interconnect bridge 612. The silicon interconnect bridge 612 includes at least one TSV 620. The assembly of passive devices 616 and 626 and the silicon interconnect bridge 612 are overmolded by a molding mass material 614 such that a die side 611 and a package side 609 are formed for later assembly to IC dice and to a package. In an embodiment, a temporary bonding layer is disposed on the first carrier 666 to secure the first and subsequent passive devices 616, 626, and the silicon interconnect bridge 612.

At Figure 6B, the assembly 602 has been inverted and the first carrier 666 has been stripped from the die side 611. A second carrier 668 is assembled to the package side 609. A redistribution layer 618 has been fabricated on the die side 611 such that the passive devices 616 and 626, and the silicon interconnect bridge 612 are directly coupled to the RDL 618.

At Figure 6C, the assembly 603 has been again inverted after removing the second carrier 668 from the package side 609. Further, a third carrier 670 has been assembled to the RDL 618
and the package side 609 has been processed by back-grinding to expose the silicon interconnect bridge 612 and the at least one TSV 620.At Figure 6D, the assembly 604 has been etched to open contact corridors to the several passive devices 616 and 626, and an electroless plating process has formed a seed layer 672 that contacts the several termini of the several passive devices 616 and 626, the silicon interconnect bridge 612 and the at least one TSV 620.At 6E, the assembly 605 has been litho-patterned and electroplated to form a final package side 609, and to leave filled vias 674 in the several contact corridors within the molding mass material 614, which has now become the molding-mass frame, e.g., the molding-mass frame 114 depicted in Figure 1A. In an embodiment, at least one TSV contact pad 676 is patterned and formed on the at least one TSV 620 at the package side 609.At 6F, the assembly 606 has been bumped with several electrical bumps 624 in an array, for coupling the MSiB 610 to an integrated-circuit package at the package side 609, and after separation of the third carrier 670, the MSiB 610 can be assembled at the die side 611 to at least two IC dice according to several disclosed embodiments.Figure 7 is included to show an example of a higher-level device application for the disclosed embodiments. The molded silicon-bridge interconnect embodiments may he found in several parts of a computing system. In an embodiment, the molded silicon-bridge interconnect embodiments can be part of a communications apparatus such as is affixed to a cellular communications tower. In an embodiment, a computing system 700 includes, but is not limited to, a desktop computer. In an embodiment, a computing system 700 includes, but is not limited to a laptop computer. In an embodiment, a computing system 700 includes, but is not limited to a tablet. In an embodiment, a computing system 700 includes, but is not limited to a notebook computer. In an embodiment, a computing system 700 includes, but is not limited to a personal digital assistant (PDA). In an embodiment, a computing system 700 includes, but is not limited to a server. In an embodiment, a computing system 700 includes, but is not limited to a workstation. In an embodiment, a computing system 700 includes, but is not limited to a cellular telephone. In an embodiment, a computing system 700 includes, but is not limited to a mobile computing device. In an embodiment, a computing system 700 includes, but is not limited to a smart phone. In an embodiment, a system 700 includes, but is not limited to an internet appliance. Other types of computing devices may be configured with the microelectronic device that includes molded silicon-bridge interconnect embodiments.In an embodiment, the processor 710 has one or more processing cores 712 and 712N, where 712N represents the Nth processor core inside processor 710 where N is a positive integer. In an embodiment, the electronic device system 700 using a molded silicon-bridge interconnect
embodiment includes multiple processors, including 710 and 705, where the processor 705 has logic similar or identical to the logic of the processor 710. In an embodiment, the processing core 712 includes, but is not limited to, pre-fetch logic to fetch instructions, decode logic to decode the instructions, execution logic to execute instructions and the like. In an embodiment, the processor 710 has a cache memory 716 to cache at least one of instructions and data for the molded silicon-bridge interconnect element on an integrated-circuit package substrate in the system 700. The cache memory 716 may be organized into a hierarchical structure including one or more levels of cache memory. In an embodiment, the processor 710 includes a memory controller 714, which is operable to perform functions that enable the processor 710 to access and communicate with memory 730 that includes at least one of a volatile memory 732 and a non-volatile memory 734. In an embodiment, the processor 710 is coupled with memory 730 and chipset 720. In an embodiment, the chipset 720 is part of a molded silicon-bridge interconnect embodiment depicted in Figure 1A. The processor 710 may also be coupled to a wireless antenna 778 to communicate with any device configured to at least one of transmit and receive wireless signals. In an embodiment, the wireless antenna interface 778 operates in accordance with, but is not limited to, the IEEE 802.11 standard and its related family, Home Plug AV (HPAV), Ultra Wide Band (UWB), Bluetooth, WiMax, or any form of wireless communication protocol.

In an embodiment, the volatile memory 732 includes, but is not limited to, Synchronous Dynamic Random-Access Memory (SDRAM), Dynamic Random-Access Memory (DRAM), RAMBUS Dynamic Random-Access Memory (RDRAM), and/or any other type of random access memory device. The non-volatile memory 734 includes, but is not limited to, flash memory, phase change memory (PCM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), or any other type of non-volatile memory device.

The memory 730 stores information and instructions to be executed by the processor 710. In an embodiment, the memory 730 may also store temporary variables or other intermediate information while the processor 710 is executing instructions. In the illustrated embodiment, the chipset 720 connects with processor 710 via Point-to-Point (PtP or P-P) interfaces 717 and 722. Either of these PtP embodiments may be achieved using a molded silicon-bridge interconnect embodiment as set forth in this disclosure. The chipset 720 enables the processor 710 to connect to other elements in a molded silicon-bridge interconnect embodiment in a system 700. In an embodiment, interfaces 717 and 722 operate in accordance with a PtP communication protocol such as the Intel® QuickPath Interconnect (QPI) or the like. In other embodiments, a different interconnect may be used.
In an embodiment, the chipset 720 is operable to communicate with the processor 710, 705N, the display device 740, and other devices 772, 776, 774, 760, 762, 764, 766, 777, etc. The chipset 720 may also be coupled to a wireless antenna 778 to communicate with any device configured to at least one of transmit and receive wireless signals.

The chipset 720 connects to the display device 740 via the interface 726. The display 740 may be, for example, a liquid crystal display (LCD), a plasma display, cathode ray tube (CRT) display, or any other form of visual display device. In an embodiment, the processor 710 and the chipset 720 are merged into a molded silicon-bridge interconnect embodiment in a system. Additionally, the chipset 720 connects to one or more buses 750 and 755 that interconnect various elements 774, 760, 762, 764, and 766. Buses 750 and 755 may be interconnected together via a bus bridge 772 such as at least one molded silicon-bridge interconnect embodiment. In an embodiment, the chipset 720, via interface 724, couples with a non-volatile memory 760, a mass storage device(s) 762, a keyboard/mouse 764, a network interface 766, smart TV 776, and the consumer electronics 777, etc.

In an embodiment, the mass storage device 762 includes, but is not limited to, a solid-state drive, a hard disk drive, a universal serial bus flash memory drive, or any other form of computer data storage medium. In one embodiment, the network interface 766 is implemented by any type of well-known network interface standard including, but not limited to, an Ethernet interface, a universal serial bus (USB) interface, a Peripheral Component Interconnect (PCI) Express interface, a wireless interface and/or any other suitable type of interface. In one embodiment, the wireless interface operates in accordance with, but is not limited to, the IEEE 802.11 standard and its related family, Home Plug AV (HPAV), Ultra Wide Band (UWB), Bluetooth, WiMax, or any form of wireless communication protocol.

While the modules shown in Figure 7 are depicted as separate blocks within the molded silicon-bridge interconnect embodiments in a computing system 700, the functions performed by some of these blocks may be integrated within a single semiconductor circuit or may be implemented using two or more separate integrated circuits. For example, although cache memory 716 is depicted as a separate block within processor 710, cache memory 716 (or selected aspects of 716) can be incorporated into the processor core 712.

To illustrate the molded silicon interconnect bridge package embodiments and methods disclosed herein, a non-limiting list of examples is provided herein: Example 1 is an integrated-circuit package substrate, comprising: a silicon interconnect bridge in a molding-mass frame, wherein the molding-mass frame has a die side and a package side, and wherein the silicon interconnect bridge shares the die side; a passive device in the
molding-mass frame, wherein the silicon interconnect bridge and the passive device, occupy at least some of the same vertical space encompassed by the molding-mass frame; and a redistribution layer on the die side, wherein the redistribution layer is coupled to the passive de vice and to a through-silicon via in the silicon interconnect bridge, and wherein the through- silicon via communicates to the package side.In Example 2, the subject matter of Example 1 optionally includes a package substrate including a bridge side and a land side, and a ground (VSS) plane in a dielectric material and a power (VCC) plane in the dielectric material; and where the bridge side of the package substrate is coupled to the package side through an electrical bump. in Example 3, the subject matter of any one or more of Examples 1-2 optionally include wherein the redistribution layer is coupled at the die side to a first integrated-circuit die by a first electrical bump and to a subsequent integrated-circuit die by a subsequent electrical bump, and wherein communication between the first integrated-circuit die and the subsequent integrated- circuit die is by a trace in the redistribution layer.In Example 4, the subject matter of any one or more of Examples 1-3 optionally include wherein the redistribution layer is coupled at the die side to a first integrated-circuit die by a first electrical bump and to a subsequent integrated-circuit die by a subsequent electrical bump, and wherein communication between the first integrated-circuit die and the subsequent integrated- circuit die is by the through-silicon via in the silicon interconnect bridge.In Example 5, the subject matter of any one or more of Examples 1-4 optionally include wherein the redistribution layer is coupled at. the die side to a first integrated-circuit die by a first electrical bump and to a subsequent integrated-circuit die by a subsequent electrical bump, wherein communication between the first integrated-circuit die and the subsequent integrated- circuit die is by the through-silicon via in the silicon interconnect bridge, and also by a trace in the redistribution layer.In Example 6, the subject matter of any one or more of Examples 1-5 optionally include wherein the passive device is both coupled to the die side and to the package side.In Example 7, the subject matter of any one or more of Examples 1-6 optionally include wherein the passive device is both coupled to the die side and to the package side, wherein the passive device is a first passive device, further including a subsequent passive device in the molding-mass frame, wherein the first passive device and the subsequent passive device are on opposite sides of the silicon interconnect bridge.In Example 8, the subject matter of any one or more of Examples 1-7 optionally include wherein the passive device is a first capacitor, further including a subsequent capacitor, and
wherein the first and subsequent capacitor are contacted at respective power terminals, to form a power rail.In Example 9, the subject matter of any one or more of Examples 1-8 optionally include wherein the passive device is a first capacitor, further including a subsequent capacitor, and wherein the first and subsequent capacitor are contacted at respective power terminals, to form a power rail, further including: a package substrate including a bridge side and a land side, and a ground (VSS) plane in a dielectric material and a power ( VCC) plane in the dielectric material; where the bridge side of the package substrate is coupled to the package side through an electrical bump; and wherein the VCCplane is coupled to the power rail.In Example 10, the subject matter of any one or more of Examples 1-9 optionally include wherein the passive device is a first capacitor, further including a subsequent capacitor, and wherein the first and subsequent capacitor are contacted at respective power terminals, to form a power rail, further including: a third capacitor, wherein the third capacitor contacts the subsequent capacitor to form a VSSrail, and wherein the subsequent capacitor is stacked on the first capacitor and the third capacitor.In Example 11, the subject matter of any one or more of Examples 1-10 optionally include wherein the passive device is a first capacitor, further including a subsequent capacitor, and wherein the first and subsequent capacitor are contacted at respective power terminals to form a power rail; a third capacitor, wherein the third capacitor contacts the subsequent capacitor to form a VSSrail, and wherein the subsequent capacitor is stacked on the first capacitor and the third capacitor; a package substrate including a bridge side and a land side, and a ground (VSS) plane in a dielectric material and a power ( VCC) plane in the dielectric material; where the bridge side of the package substrate is coupled to the package side through an electrical bump; wherein the VCCplane is coupled to the power rail; and wherein the VSSplane is coupled to the VSSrail.Example 12 is an integrated-circuit package substrate, comprising: a first silicon interconnect bridge in a molding-mass frame, wherein the molding-mass frame has a die side and a package side, and wherein the first silicon interconnect bridge shares the die side; a subsequent silicon interconnect bridge in the molding-mass frame, wherein some molding-mass material of the molding-mass frame, spaces apart the first silicon interconnect bridge from the subsequent silicon interconnect bridge, and wherein the subsequent silicon interconnect bridge also shares the die side; an interstitial passive device in the molding-mass material between the first silicon interconnect bridge and the subsequent silicon interconnect bridge, wherein the first and subsequent silicon interconnect bridges and the interstitial passive device, occupy at least some of the same vertical space encompassed by the molding-mass frame; a redistribution layer on the die side, wherein the redistribution layer is coupled to the passive device and to a first through-
silicon via in the first silicon interconnect bridge, and wherein the first through-silicon via communicates to the package side; and wherein the redistribution layer is coupled to the passive device and to a subsequent through- silicon via in the subsequent silicon interconnect bridge, and wherein the subsequent through-silicon via communicates to the package side.In Example 13, the subject matter of Example 12 optionally includes a first capacitor in the molding-mass material and adjacent the first silicon interconnect bridge and opposite the interstitial passive device; a subsequent capacitor in the molding-mass material and adjacent the subsequent silicon interconnect bridge and opposite the interstitial passive device. in Example 14, the subject matter of any one or more of Examples 12—13 optionally include a package substrate including a bridge side and a land side, and a ground (VSS) plane in a dielectric material and a power (VCC) plane in the dielectric material; where the bridge side of the package substrate is coupled to the package side through an electrical bump; wherein the redistribution layer is coupled at the die side to a first integrated-circuit die by a first electrical bump and to a subsequent integrated-circuit die by a subsequent electrical bump, and wherein communication between the first integrated-circuit die and the subsequent integrated-circuit die is by a trace in the redistribution layer. in Example 15, the subject matter of any one or more of Examples 12—14 optionally include wherein the redistribution layer is coupled at the die side to a first integrated-circuit die by a first electrical bump and to a subsequent integrated-circuit die by a subsequent electrical bump, and wherein communication between the first integrated-circuit die and the subsequent integrated-circuit die is by a trace in the redistribution layer.In Example 16, the subject matter of any one or more of Examples 12—15 optionally include wherein the passive device is a capacitor that is both coupled to the die side and to he package side.In Example 17, the subject matter of any one or more of Examples 12—16 optionally include a first-side first capacitor and a first- side subsequent capacitor in the molding-mass material and adjacent the first silicon interconnect bridge and opposite the interstitial passive device, wherein the first-side first capacitor and a first-side subsequent capacitor are contacted at respective power terminals, to form a first power rail; a subsequent-side first capacitor and a subsequent-side subsequent capacitor in the molding-mass material and adjacent the subsequent silicon interconnect bridge and opposite the interstitial passive device, wherein the subsequent-side first capacitor and a subsequent-side subsequent capacitor are contacted at respective power terminals, to form a subsequent power rail.In Example 18, he subject matter of Example 17 optionally includes a package substrate including a bridge side and a land side, and a ground (VSS) plane in a dielectric material and a
power (VCC) plane in the dielectric material; where the bridge side of the package substrate is coupled to the package side through an electrical bump; and wherein the VCC plane is coupled to at least one of the first power rail and the subsequent power rail. In Example 19, the subject matter of Example 18 optionally includes a first-side third capacitor, wherein the first-side third capacitor contacts the first-side subsequent capacitor to form a VSS rail, and wherein the first-side subsequent capacitor is stacked on the first-side first capacitor and the first-side third capacitor. Example 20 is a computing system comprising: a silicon interconnect bridge in a molding-mass frame, wherein the molding-mass frame has a die side and a package side, and wherein the silicon interconnect bridge shares the die side; a first integrated-circuit die on the die side, wherein the first IC die is a logic processor; a subsequent integrated-circuit die on the die side and adjacent the first integrated-circuit die, wherein the subsequent IC die is a graphics processor; a passive device in the molding-mass frame, wherein the silicon interconnect bridge and the passive device occupy at least some of the same vertical space encompassed by the molding-mass frame; a redistribution layer on the die side, wherein the redistribution layer is coupled to the passive device and to a through-silicon via in the silicon interconnect bridge, and wherein the through-silicon via communicates to the package side; a package substrate including a bridge side and a land side, and a voltage-reference (VSS) plane in a dielectric material and a power (VCC) plane in the dielectric material; where the bridge side of the package substrate is coupled to the package side through an electrical bump; wherein the redistribution layer is coupled at the die side to the first IC die by a first electrical bump and to the subsequent IC die by a subsequent electrical bump, and wherein communication between the first integrated-circuit die and the subsequent integrated-circuit die is by a trace in the redistribution layer; and wherein the molded silicon-interconnect bridge is part of a chipset. In Example 21, the subject matter of Example 20 optionally includes a third IC die on the redistribution layer, wherein the third IC die is a memory die; and a board coupled to the package substrate at the land side by an electrical-bump array. In Example 22, the subject matter of Example 21 optionally includes wherein the board includes an external shell that is a dielectric material, and wherein the external shell is at least part of the exterior of an apparatus selected from a mobile computing system and a drone. The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as "examples." Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those
elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein. In the event of inconsistent usages between this document and any documents so incorporated by reference, the usage in this document controls. In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In this document, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "including" and "comprising" are open-ended, that is, a system, device, article, composition, formulation, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electrical device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code can be tangibly stored on one or more volatile, non-transitory, or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like. The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to comply with 37 C.F.R. § 1.72(b), to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with
the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the disclosed embodiments should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. |
Methods and apparatus for a power management integrated circuit (PMIC) for receiving energy from multiple energy harvesting sources are disclosed. The PMIC comprises a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies, and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply. |
CLAIMS What is claimed is: 1. A power management integrated circuit (PMIC), comprising: a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies; and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply. 2. The PMIC of claim 1, wherein the switched capacitor charge pump is configured to operate in a step-up mode, and wherein in the step-up mode the charge pump can step up the intermediate voltage at a ratio of at least one of 1:2 and 1:3. 3. The PMIC of claim 1, further comprising a load to receive the second power supply, wherein the load includes a battery that operates as an input power supply of the boost converter if the plurality of first power supplies drops below a voltage threshold. 4. The PMIC of claim 1, wherein the boost converter includes a switching inductor coupled between a first node and a second node, the first node to receive the plurality of first power supplies, and the second node coupled between the intermediate voltage and a ground. 5. The PMIC of claim 1, further comprising a plurality of energy conversion devices configured to acquire energy from a plurality of energy harvesting sources and convert the acquired energy into the plurality of first power supplies. 6. The PMIC of claim 1, wherein the boost converter generates a plurality of intermediate voltages coupled to a plurality of output terminals and operates in a discontinuous conduction mode, and wherein the boost converter further comprises a pulse frequency modulation controller. 7. The PMIC of claim 5, wherein the plurality of energy conversion devices includes at least one of a photovoltaic (PV) cell, a thermoelectric generator (TEG), a radio frequency (RF) device, and a piezoelectric material. 8. The PMIC of claim 1, wherein the switched capacitor charge pump includes at least a plurality of charging circuits, a first capacitor to store charge, and a second capacitor to receive charge from the first capacitor, wherein the second capacitor is coupled to an output terminal of the charge pump. 9. The PMIC of claim 1, wherein the switched capacitor charge pump includes at least one of a charging mode and a discharging mode. 10. A system for energy harvesting, comprising: a load; a plurality of energy harvesting sources; and a power management integrated circuit (PMIC) having a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies, and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply. 11. The system of claim 10, wherein the switched capacitor charge pump is configured to operate in a step-up mode, and wherein in the step-up mode the charge pump can step up the intermediate voltage at a ratio of at least one of 1:2 and 1:3. 12. The system of claim 10, further comprising a load to receive the second power supply, wherein the load includes a battery that can operate as an input power supply of the boost converter if the plurality of first power supplies drops below a voltage threshold. 13. 
The system of claim 10, wherein the boost converter includes a switching inductor coupled between a first node and a second node, the first node to receive the plurality of first power supplies, and the second node coupled between the intermediate voltage and a ground. 14. The system of claim 10, further comprising a plurality of energy conversion devices configured to acquire energy from a plurality of energy harvesting sources and convert the acquired energy into the plurality of first power supplies. 15. The system of claim 10, wherein the boost converter generates a plurality of intermediate voltages coupled to a plurality of output terminals and operates in a discontinuous conduction mode, and wherein the boost converter further comprises a pulse frequency modulation controller. 16. The system of claim 14, wherein the plurality of energy conversion devices includes at least one of a photovoltaic (PV) cell, a thermoelectric generator (TEG), a radio frequency (RF) device, and a piezoelectric material. 17. The system of claim 10, wherein the switched capacitor charge pump includes at least a plurality of charging circuits, a first capacitor to store charge, and a second capacitor to receive charge from the first capacitor, wherein the second capacitor is coupled to an output terminal of the charge pump. 18. The system of claim 10, wherein the switched capacitor charge pump includes at least one of a charging mode and a discharging mode. 19. A method for energy harvesting, comprising: providing a power management integrated circuit (PMIC) including a boost converter and a switched capacitor charge pump; receiving a plurality of first power supplies at a plurality of input terminals of the boost converter; generating an intermediate voltage at an output of the boost converter; receiving the intermediate voltage at an input of the switched capacitor charge pump; and generating a second power supply at an output of the switched capacitor charge pump. 20. The method of claim 19, further comprising receiving the second power supply at a load, wherein the load includes a battery that can operate as an input power supply of the boost converter if the plurality of first power supplies drops below a voltage threshold. 21. The method of claim 19, further comprising: providing a switching inductor of the boost converter coupled between a first node and a second node of the boost converter, wherein the second node is coupled between the intermediate voltage and a ground; and receiving the plurality of first power supplies at the first node of the boost converter. 22. The method of claim 19, further comprising: acquiring energy from a plurality of energy harvesting sources using a plurality of energy conversion devices, wherein the plurality of energy conversion devices are configured to convert the acquired energy into the plurality of first power supplies. 23. The method of claim 19, further comprising: generating a plurality of intermediate voltages coupled to a plurality of output terminals; operating in a discontinuous conduction mode; and providing a pulse frequency modulation controller. 24. The method of claim 19, wherein the switched capacitor charge pump further comprises at least one of a charging mode and a discharging mode; and wherein the switched capacitor charge pump further comprises at least a plurality of charging circuits, a first capacitor to store charge, and a second capacitor to receive charge from the first capacitor, wherein the second capacitor is coupled to an output terminal of the charge pump. 25. 
The method of claim 19, wherein the switched capacitor charge pump is configured to operate in a step-up mode, and wherein in the step-up mode the charge pump can step up the intermediate voltage at a ratio of at least one of 1:2 and 1:3. |
POWER ARCHITECTURE & MANAGEMENT SCHEME FOR IOT APPLICATIONS
FIELD OF THE INVENTION
Embodiments of the present invention relate to the field of integrated circuits; and more particularly, embodiments of the present invention relate to a power management integrated circuit (PMIC) for receiving power from multiple energy harvesting sources.
BACKGROUND OF THE INVENTION
Advances in integrated circuits and microelectronics have enabled a new generation of scalable sensor networks. For example, a smart sensor node (also referred to as a smart sensor device) is becoming more and more popular and essential for Internet of Things (IOT) applications. As such, combining sensing, signal conditioning, digital processing, data logging, and wireless digital communications into smaller and smaller integrated circuits allows nodes of these networks to be placed in remote environmental locations and embedded more and more deeply into machines and structures. But powering such a wireless sensor node for the long term remains a challenge in many applications, and the more deeply this node is embedded, the more challenging it becomes to find ways to maintain a charge on its battery or energy storage element(s) (hereinafter, collectively referred to as a "battery"). Therefore, powering sensor nodes and extending their battery life are ongoing challenges. One technology for addressing this challenge is energy harvesting. Energy harvesting, or scavenging ambient energy from the operating environment, represents a promising way to automatically collect and store energy and eliminate battery maintenance. As such, energy harvesting or scavenging from an ambient source, such as a photovoltaic (PV) cell, a radio frequency (RF) device, a piezoelectric (PZT) material, and/or a thermoelectric generator (TEG), is an alternative solution rather than using a big stationary battery, which is inefficient due to the high cost of maintenance to periodically replace or recharge the battery in remote locations. However, in many applications, the source of ambient energy may be intermittent, the kinds of energy that can most easily be harvested may change with the environmental conditions, and the range of voltages available from different harvesters can vary widely. Furthermore, since each of these energy harvesters has its own unique power characteristics, the power management for an energy transducer is critical in order to harvest the maximum available power, supply a regulated voltage to a load, and charge a battery. Unfortunately, many of the conventional methods use single-source power management, and therefore do not simultaneously accumulate power/energy from multiple sources. Some conventional methods do use multiple energy transducers, but they typically only switch between the one or more energy harvesting sources. Thus, these conventional methods do not harvest energy simultaneously.
In addition, the power losses due to the conventional power management circuits are still significantly large, which causes a problem for a system on chip (SOC) integration or an application-specific integrated circuit (ASIC) integration that operates a smart sensor under size & weight constraints. Accordingly, there has been a lack of an efficient method and apparatus for receiving and managing multiple inputs from multiple energy harvesting sources and accumulating the energy from all the input sources substantially at the same time.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings:
Figure 1 is a conventional circuit diagram illustrating a single source power management according to the prior art.
Figure 2 is a block diagram illustrating an energy harvesting PMIC system according to one embodiment.
Figure 3 is a detailed circuit diagram illustrating an energy harvesting PMIC with a two-stage hybrid switching topology according to one embodiment.
Figure 4 is a graph illustrating current and time values when an energy harvesting PMIC is operated according to one embodiment.
Figure 5 is a detailed circuit diagram illustrating power conversion and control of a two-stage topology according to one embodiment.
Figures 6A-B are a block diagram and a detailed circuit diagram, respectively, illustrating a battery-operating mode according to some embodiments.
Figure 7 illustrates a computing system according to one embodiment.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
The following description describes methods and apparatus for a power management integrated circuit (PMIC) for receiving energy from multiple energy harvesting sources. Specifically, described are methods and apparatus for an energy harvesting PMIC with a two-stage hybrid switching topology. The first stage of the energy harvesting PMIC includes a boost converter that receives multiple input power supplies to generate an intermediate voltage, where the boost converter has multiple input terminals coupled to the multiple input power supplies. The second stage of the energy harvesting PMIC includes a switched capacitor charge pump that receives the intermediate voltage to generate a second power supply, where the second power supply is greater than the intermediate voltage and can power a load and charge a battery directly. The energy harvesting PMIC and these techniques described herein also advantageously address the issue of power management for multi-source energy harvesting and increase overall system power efficiency. In addition, the energy harvesting PMIC and these techniques described herein also provide improvements to the field of energy harvesting and integrated circuits. These improvements include providing a discontinuous conduction mode (DCM) that can operate with multiple inputs and outputs, eliminating the power losses inherent in a general stand-alone charge pump conversion, and allowing a bi-directional energy flow to/from the battery when the power received from the harvesting energy sources is not sufficient. Furthermore, the energy harvesting power management, as described herein, may be configured for a smart sensor node and IOT applications. As used herein, an "IOT" (also referred to as an IOT device and an IOT application) refers to an application and/or device that includes sensing and/or control functionality as well as a WiFi™
transceiver radio or interface, a Bluetooth™ transceiver radio or interface, a Zigbee™ transceiver radio or interface, an Ultra-Wideband (UWB) transceiver radio or interface, a Wi-Fi-Direct transceiver radio or interface, a Bluetooth™ Low Energy (BLE) transceiver radio or interface, and/or any other wireless network transceiver radio or interface that allows the IOT application/device to communicate with a wide area network and with one or more additional devices. As used herein, a "smart sensor node" (also referred to as a smart sensor device) refers to a device that receives an input from the physical environment and uses built-in compute resources to perform predefined functions upon detection of the specific input and then process data before forwarding it on. For example, these nodes are used for monitoring and control mechanisms in a wide variety of environments including smart grids, battlefield reconnaissance, exploration and many other sensing applications. Furthermore, the smart sensor node is also a crucial and integral element in the IOT, where the increasingly prevailing environment provides an array of devices that can be outfitted with a unique identifier (UID) to transmit data over the Internet or similar networks. In one embodiment, a smart sensor node may be a component of a wireless sensor and actuator network (WSAN), which includes multiple nodes, each of which is connected with one or more other sensors and sensor hubs as well as individual actuators. According to one embodiment, a smart sensor node includes, but is not limited to, a sensor, a microprocessor, and a communication device. The smart sensor node may also include transducers, amplifiers, excitation control, analog filters, and compensation. The smart sensor node also incorporates software-defined elements that provide functions such as data conversion, digital processing and communication to external devices. Therefore, a smart sensor node requires power management, such as an energy harvesting PMIC that receives input from multiple energy sources (e.g., multiple smart sensor nodes) and harvests power from the input sources simultaneously in order to supply a regulated voltage to a load or to charge a battery of the smart sensor node. In the following description, numerous details are set forth to provide a more thorough explanation of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. Various embodiments and aspects of the inventions will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative of the invention and are not to be construed as limiting the invention. Numerous specific details are described to provide a thorough understanding of various embodiments of the present invention. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments of the present inventions. Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment of the invention.
The appearances of the phrase "in one embodiment" in various places in the specification do not necessarily all refer to the same embodiment. Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention. In the following description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. "Coupled" is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. "Connected" is used to indicate the establishment of communication between two or more elements that are coupled with each other. The embodiments can be implemented in numerous ways, including as a process, an apparatus, a system, a composition of matter, a computer readable medium such as a computer readable storage medium or a computer network wherein program instructions are sent over optical or communication links. A component such as a processor or a memory described as being configured to perform a task includes a general component that is temporarily configured to perform the task at a given time and/or a specific component that is manufactured to perform the task. In general, the order of the steps of disclosed processes can be altered within the scope of the invention. Figure 1 is a conventional circuit diagram illustrating a single source power management according to the prior art. Specifically, Figure 1 illustrates an exemplary circuit diagram of a conventional buck-boost converter for a single harvesting energy source. The conventional buck-boost converter typically includes a single harvesting energy source, a large inductor, a large decoupling capacitor, one or more switches, and a large battery. This conventional buck-boost converter circuit is typically only configured to operate with a single type of energy source (e.g., a single PV cell, a PZT vibration transducer, a TEG, etc.), and therefore can only generate a portion of the energy demanded by a sensor load. Furthermore, the conventional boost converter generally operates with a large converter ratio that causes a decrease in the power conversion efficiency. For example, in order to charge a battery load (e.g., a Li-ion battery (4.2V/battery cell)), a boost DC/DC power converter (as shown in Figure 1) is applied to step up a low input voltage (e.g., ~0.5V from a PV cell or < 100mV from a TEG) from a single harvesting energy source. Consequently, this large voltage conversion ratio (e.g., > 8X for PV or ~40X for TEG) is obviously a drawback in terms of power efficiency (e.g., typically < 70%). In addition, it leads to a big power inductor (e.g., > 50uH-1mH, ~1cm x 1cm in footprint) that is required to reduce power consumption and mitigate current ripple. Figure 2 is a block diagram illustrating an energy harvesting PMIC system according to one embodiment. It is pointed out that the components of Figure 2 that have the same reference numbers (or names) as components of any other figure can operate or function in any manner similar to that described herein, but are not limited to such.
Further, the lines connecting the blocks represent communication between different components of a power management integrated circuit. Referring now to Figure 2. In one embodiment, the energy harvesting PMIC system 200 includes, but is not limited to, harvesting energy sources 201-202, energy harvesting PMIC 205, and battery 210. According to one embodiment, the energy harvesting PMIC system 200 provides a power management integrated circuit (e.g., energy harvesting PMIC 205) that can handle the input of multiple energy sources (e.g., energy sources 201-202) and harvest the multiple inputs simultaneously in order to charge a battery (e.g., battery 210) and supply a regulated voltage to a load (e.g., a smart sensor node). In one embodiment, energy harvesting PMIC 205 is coupled between harvesting energy sources 201-202 and battery 210 (which may also be referred to as a load). Harvesting energy sources 201-202 (also referred to as energy sources) are power sources for supplying power (i.e., energy) for multi-source energy harvesting. Furthermore, harvesting energy sources 201-202 are not limited to a particular number of energy sources. For example, as shown in Figure 2, harvesting energy source 202 refers to a total number "N" of harvesting energy sources that are available, which may be 2, 3, or any other number of total harvesting energy sources. In one embodiment, harvesting energy sources 201-202 are not limited to a particular energy source. As such, harvesting energy sources 201-202 may include a thermal energy source, a mechanical energy source, and/or an electromagnetic energy source, where each energy source may be a photovoltaic (PV) cell, a radio frequency (RF) device, a piezoelectric (PZT) material, a thermoelectric generator (TEG), and/or any combination of sources. For example, harvesting energy sources 201-202 may be identical or different energy sources. In one embodiment, harvesting energy sources 201-202 supply their respective powers either simultaneously or at different times. The outputs of the corresponding power sources are connected to the input terminals of energy harvesting PMIC 205, where harvesting energy source 201 is connected to a first input terminal of energy harvesting PMIC 205 and harvesting energy source 202 is connected to a second input terminal (or an "N" input terminal) of energy harvesting PMIC 205.
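For illustration only, the following minimal Python sketch (not part of the disclosed circuits; the voltages, source count, and function names are assumptions) models the two-stage idea introduced above: boost the harvested inputs to an intermediate voltage Vo, then apply a fixed 1:2 or 1:3 charge-pump ratio to reach the battery voltage, and compare the per-stage conversion ratio against the single-stage converter of Figure 1.

```python
# Illustrative sketch only; values and names are assumed, not taken from the disclosure.

def single_stage_ratio(v_source: float, v_bat: float) -> float:
    """Conversion ratio a conventional single-stage converter (Figure 1) would need."""
    return v_bat / v_source

def two_stage_options(v_sources, v_bat: float):
    """Model the two-stage chain: boost the inputs to an intermediate voltage Vo,
    then apply a fixed 1:2 or 1:3 charge-pump ratio to reach Vbat."""
    for pump_ratio in (2, 3):
        v_o = v_bat / pump_ratio              # intermediate voltage the pump needs
        boost_ratio = v_o / min(v_sources)    # worst-case boost ratio over all inputs
        yield pump_ratio, v_o, boost_ratio

if __name__ == "__main__":
    v_bat = 4.2                               # Li-ion charge voltage (example)
    v_sources = [0.5, 0.5, 0.5]               # e.g., three PV cells at ~0.5 V (assumed)
    print(f"single-stage ratio: {single_stage_ratio(min(v_sources), v_bat):.1f}x")
    for ratio, v_o, boost in two_stage_options(v_sources, v_bat):
        print(f"1:{ratio} pump -> Vo = {v_o:.2f} V, boost stage only needs {boost:.1f}x")
```

Under these assumed numbers, a 1:3 pump lets the boost stage work at roughly 2.8x instead of the 8.4x a single-stage design would need, which is the intuition behind the efficiency argument developed in the detailed description below.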
Energy harvesting PMIC 205 is also discussed in further detail below as shown in Figure 3.In one embodiment, energy harvesting PMIC 205 is configured to provide a regulated voltage (e.g., Vbat as shown in Figure 3) that can be efficiently stored in battery 210. Battery 210 can accumulate charge from any or all of the harvesting energy sources 201-202 via energy harvesting PMIC 205. In one embodiment, battery 210 may be a rechargeable battery (e.g., Li- ion 2.7V-4.2V), a thin film battery, and any other load/battery configuration. According to one embodiment, battery 210 may be used for an IOT smart sensor node for self-powering. In one embodiment, battery 210 may be a load that includes one or more of the following: a smart sensor node, a signal conditioning circuit, a processor, a memory, a timekeeper, a wireless communication device, a light, an actuator, and/or any combination of loads.According to one embodiment, energy harvesting PMIC 205 includes a boost converter that receives a plurality of first power supplies (e.g., energy sources 201-202) and generates an intermediate voltage from the multiple input power supplies. The boost converter includes a plurality of input terminals that are coupled to the plurality of first power supplies. Energy harvesting PMIC 205 also includes a switched capacitor charge pump that receives the intermediate voltage and generates a second power supply to charge a battery (e.g., battery 210) and power a load.In another embodiment, the energy harvesting PMIC system 200 may include: providing a PMIC (e.g., energy harvesting PMIC 205) that includes a boost converter and a switched capacitor charge pump; receiving a plurality of first power supplies (e.g., energy sources 201- 202) at a plurality of input terminals of the boost converter; generating an intermediate voltage at an output of the boost converter; receiving the intermediate voltage at an input of the switched capacitor charge pump; and generating a second power supply at an output of the switched capacitor charge pump. The second power supply of PMIC may be used to charge a battery and provide power to a load, such as an IOT smart sensor node.Note that some or all of the components as shown and described above (e.g., energy harvesting PMIC 205) may be implemented in software, hardware, and/or a combination thereof. For example, such components can be implemented as software installed and stored in a persistent storage device, which can be loaded and executed in a memory by a processor (not shown) to carry out the processes or operations described throughout this application.Alternatively, such components can be implemented as executable code programmed or embedded into dedicated hardware such as an integrated circuit (e.g., an application specific IC or ASIC), a digital signal processor (DSP), or a field programmable gate array (FPGA), which can be accessed via a corresponding driver and/or operating system from an application.Furthermore, such components can be implemented as specific hardware logic in a processor or processor core as part of an instruction set accessible by a software component via one or more specific instructions. Also note that the configuration shown in Figure 2 shall be referenced throughout the description.Figure 3 is a detailed circuit diagram illustrating an energy harvesting PMIC with a two- stage hybrid switching topology according to one embodiment. 
Specifically, a detailed energy harvesting PMIC system 300 illustrates an energy harvesting PMIC that includes a two-stage hybrid switching power management configuration. Figure 3 illustrates an example of interactions between different components of an energy harvesting PMIC. It is pointed out that the components of Figure 3 that have the same reference numbers (or names) as components of any other figure can operate or function in any manner similar to that described herein, but are not limited to such. Further, the lines connecting the components represent communication between different components of the detailed energy harvesting PMIC system 300. Referring now to Figure 3. In one embodiment, system 300 includes, but is not limited to, energy harvesting PMIC 205, harvesting energy sources 301-303, and battery 210. As discussed above, harvesting energy sources 301-303 are not limited to a particular number of energy sources and a particular energy source. Each harvesting energy source provides an input power supply to an input terminal of energy harvesting PMIC 205, where each input power supply may be identical to or different from the other energy sources and may be provided at the same or different time as the other energy sources. In one embodiment, multiple energy conversion devices, based on sunlight, heat, piezoelectricity (vibration), and any other energy source, are configured to acquire energy from multiple harvesting energy sources (e.g., energy sources 301-303) and convert the acquired energy into one or more input power supplies. According to one embodiment, energy harvesting PMIC 205 includes, but is not limited to, input terminals 311-313, boost converter 320, switching capacitor charge pump 330, and output terminal 350. Energy harvesting PMIC 205 provides a two-stage hybrid switching topology to charge a battery (and power a load) from multiple harvested energy sources. For example, energy harvesting PMIC 205 includes a high frequency boost converter in the front-end stage, and a switching capacitor charge pump converter that operates at a low frequency (e.g., 5x-10x slower) in the back-end stage. The two-stage hybrid switching topology provides a soft-charging charge pump with essentially no charge sharing losses and smaller capacitors operating at a lower frequency. The two-stage hybrid switching topology also provides a low voltage boost converter that provides a higher switching frequency, a smaller inductor, and a smaller decoupling capacitor. In one embodiment, boost converter 320 receives multiple input power supplies via input terminals 311-313 and generates an intermediate power supply (e.g., intermediate voltage 325 (Vo)), which is a boosted higher power supply compared to the input power supply. According to one embodiment, switching capacitor charge pump 330 receives the intermediate power supply and generates/bumps a higher second power supply (e.g., Vbat 333) using the intermediate power supply. For example, switching capacitor charge pump 330 may receive an intermediate voltage and pump that intermediate voltage using switch capacitors (e.g., capacitors 331-332) to generate an output power supply at a fixed ratio of 1:2 or 1:3 (i.e., 1:2/1:3 refers to the output power supply that is generally twice/three times higher than the intermediate voltage).
As such, output terminal 350 receives the higher second power supply (Vbat) and forwards the higher second power supply to charge battery 210 and power a load. In one embodiment, boost converter 320 includes, but is not limited to, node 321, inductor 322 (IL), node 323, and intermediate voltage 325 (Vo). According to one embodiment, boost converter 320 is configured to "boost" its output to an intermediate voltage level, which can generally increase the power conversion efficiency. In addition, since boost converter 320 operates with low voltage circuits/components (e.g., harvesting energy sources that are typically < ~2.5V), a more efficient low-voltage/high-frequency silicon process can be applied. As such, this provides two improvements to a PMIC: the front-end circuit can operate at a high frequency in order to meet a desired fast dynamical response, while also maintaining a good or comparably high power efficiency; and the value of the switching inductor can be greatly reduced due to the high frequency switching. In one embodiment, node 321 receives one or more input power supplies via input terminals 311-313, where the one or more input power supplies are received simultaneously (or at different times) and controlled by one or more field-effect transistors (FETs). Inductor 322 is coupled between nodes 321 and 323. Furthermore, inductor 322 receives the input power supplies via node 321 and generates an output voltage level that is forwarded to node 323, which is controlled by FETs and coupled between an intermediate voltage (Vo) and a ground. Inductor 322 may be a switching inductor but is not limited to a particular type of inductor. Note that the overall power delivery efficiency of an energy harvesting PMIC is primarily dominated by the front-end boost converter. For example, using a 4.7uH switching inductor, which is roughly 10x smaller than a conventional inductor (as shown in Figure 1), the energy harvesting PMIC generates an overall power efficiency that is, at a minimum, generally higher (e.g., 3%~5%) than a conventional boost converter as illustrated in Figure 1. In addition, the overall power efficiency is even greater (e.g., 7%~10% when using a 1uH switching inductor) when there is a demand for an even smaller footprint design. Note that the architecture is naturally "expandable" to multiple harvesting sources, since it utilizes the switching inductor (e.g., switching inductor 322) of the front-end boost converter (e.g., boost converter 320) in such a way that the total energy from all the input energy sources can be harvested and delivered effectively to charge a battery and power a load. Also, note that boost converter 320 is not limited to a particular type of boost converter and thus may include a high-efficiency buck-boost power converter, a step-up converter, a DC-to-DC power converter, and/or any boost (step-up) converter. Furthermore, according to some embodiments, boost converter 320 provides a DCM operation that includes receiving multiple input energy sources and generating multiple output power supplies. In one embodiment, boost converter 320 may include one or more outputs using inductor 322 and the multiple inputs from node 321. For example, there could be more than one high-side device connected to node 323, where each of the additional outputs may be a low voltage device (e.g., a processor).
Furthermore, in one embodiment, each additional output (or all the outputs) from boost converter 320 may be regulated and configured, for example, to only supply the excess energy from the energy sources to the battery. In another embodiment, energy harvesting PMIC 205 may include a battery-operating mode (described in further detail in Figures 6A-B). In the battery-operating mode, battery 210 operates as a power source (as shown by the bi-directional dotted line) and supplies power to the energy harvesting PMIC 205 when the input power supply from harvesting energy sources 301-303 is not sufficient (i.e., the input power supplies fall below a low voltage threshold). In one embodiment, switching capacitor charge pump 330 includes, but is not limited to, intermediate voltage 325, capacitors 331-332, and supply voltage 333 (e.g., Vbat). Switching capacitor charge pump 330 operates in a step-up mode with a fixed conversion ratio (1:2, 1:3, etc.), which is self-adapted to an input source. The back-end charge pump of energy harvesting PMIC 205 also provides a higher overall power efficiency (e.g., efficiency at 95%~98%). According to one embodiment, switching capacitor charge pump 330 receives intermediate voltage 325 and generates/bumps supply voltage 333 (Vbat) using the intermediate voltage, capacitors 331-332, and multiple FETs. Furthermore, intermediate voltage 325 (Vo) should operate within a threshold range (e.g., upper and lower voltage thresholds), which can dynamically change based on the supply voltage 333 (Vbat). For example, if supply voltage 333 (Vbat) rises above the threshold range, the conversion ratio of the switching capacitor charge pump 330 is increased (e.g., from 1:2 to 1:3). Therefore, when the conversion ratio is changed, the threshold range for intermediate voltage 325 (Vo) is also changed. For example, when the switching capacitor charge pump 330 is in a 1:2 mode, the threshold for Vo is (Vbat/2) plus a voltage window/range, and in a 1:3 mode the threshold for Vo is changed to (Vbat/3) plus a voltage window/range. As such, the voltage is contained within an operating voltage range to maintain Vo within the voltage rating of the boost converter. Note that the voltage window may be the same or different in different modes. Furthermore, to avoid switching back and forth in the presence of noise, switching capacitor charge pump 330 also includes a small hysteresis band, according to one embodiment. In one embodiment, switching capacitor charge pump 330 charges and discharges capacitors 331-332 when the FETs (or switches) are opened and closed. Note that switching capacitor charge pump 330 is not limited to a particular charge pump configuration. Switching capacitor charge pump 330 includes a charging phase, a discharging phase, and a transition state (e.g., the moment the pump is triggered from a charging phase to a discharging phase). During the charge phase according to one embodiment, capacitor 331 may operate as a flying capacitor (CFLY) and is charged to a proper voltage by configuring it to be in parallel with battery 210. Meanwhile, capacitor 332 may operate as a load capacitor (CL) and supplies a charge to a load. During the discharge phase according to one embodiment, capacitor 331 is placed in series with battery 210 and discharged into the load and capacitor 332, which effectively provides a fixed ratio of double/triple the supply voltage (Vbat) to the load.
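As an editorial illustration of the ratio selection and hysteresis behavior just described (this is a minimal sketch, not the disclosed controller; the rating, window, and hysteresis values are assumptions), the following Python snippet keeps the Vo target at Vbat/ratio plus a window, steps from 1:2 to 1:3 when that target would exceed the boost stage's assumed voltage rating, and falls back only after a small hysteresis band is cleared.

```python
class ChargePumpModeSelector:
    """Pick the 1:2 or 1:3 conversion ratio so the intermediate voltage Vo stays
    within the boost stage's voltage rating, with a small hysteresis band.
    All numeric values are illustrative assumptions."""

    def __init__(self, vo_rating: float = 2.0, window: float = 0.10, hysteresis: float = 0.05):
        self.vo_rating = vo_rating   # assumed voltage rating of the low-voltage boost stage (V)
        self.window = window         # Vo window above Vbat / ratio (assumed, V)
        self.hyst = hysteresis       # small band to avoid toggling on noise (assumed, V)
        self.ratio = 2               # start in 1:2 mode

    def vo_upper_threshold(self, v_bat: float) -> float:
        # In 1:2 mode the upper threshold for Vo is Vbat/2 plus the window;
        # in 1:3 mode it becomes Vbat/3 plus the window.
        return v_bat / self.ratio + self.window

    def update(self, v_bat: float) -> int:
        # Step up to 1:3 when the 1:2 threshold would exceed the rating, and
        # return to 1:2 only once it is back inside the rating by the hysteresis band.
        if self.ratio == 2 and self.vo_upper_threshold(v_bat) > self.vo_rating:
            self.ratio = 3
        elif self.ratio == 3 and (v_bat / 2 + self.window) < self.vo_rating - self.hyst:
            self.ratio = 2
        return self.ratio
```

With these assumed numbers the selector runs 1:2 while Vbat is below roughly 3.8 V, moves to 1:3 above that, and the 50 mV band keeps noise near the boundary from toggling the mode on every sample.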
Therefore, intermediate voltage 325 (Vo) controls a transition state in switching capacitor charge pump 330 (e.g., from a charging phase to a discharging phase). The transition state is triggered when intermediate voltage 325 (Vo) reaches an upper threshold (which may also change based on the chosen conversion ratio). Furthermore, the state transition is only triggered after a complete pulse from boost converter 320, not during a pulse. Accordingly, intermediate voltage 325 (Vo) is sampled after a pulse has completed and then switching capacitor charge pump 330 decides whether to trigger a transition or not based on Vo and Vbat. Note that some or all of the components as shown and described above (e.g., energy harvesting PMIC) may be implemented in software, hardware, or a combination thereof. For example, such components can be implemented as software installed and stored in a persistent storage device, which can be loaded and executed in a memory by a processor (not shown) to carry out the processes or operations described throughout this application. Alternatively, such components can be implemented as executable code programmed or embedded into dedicated hardware such as an integrated circuit (e.g., an application specific IC or ASIC), a digital signal processor (DSP), or a field programmable gate array (FPGA), which can be accessed via a corresponding driver and/or operating system from an application. Furthermore, such components can be implemented as specific hardware logic in a processor or processor core as part of an instruction set accessible by a software component via one or more specific instructions. Figure 4 is a graph illustrating current and time values when an energy harvesting PMIC is operated according to one embodiment. Specifically, graph 400 illustrates an operation window of a front-end boost conversion stage of an energy harvesting PMIC that inputs three harvesting energy sources (e.g., PV cells). As shown in Figure 4, the current of the switching inductor (IL) is "time-shared" among the three input sources (e.g., energy harvesting sources 301-303). Referring now to Figure 4. According to one embodiment, a scheduler controller (not shown), which can arbitrate on a first-come-first-served (FCFS) basis, and a pulse-frequency modulation (PFM) controller (not shown) are used to implement a PFM configuration that has a discontinuous conduction mode. For example, the multiple harvesting energy sources may be selected on an FCFS basis using the scheduler, which arbitrates among the multiple energy sources. Therefore, graph 400 illustrates a constant on-time, pulse-triggered inductor current (IL (mA)) versus time (μs) that shows the "time-shared" current among the three energy sources within a selected time interval. Figure 5 is a detailed circuit diagram illustrating power conversion and control of a two-stage topology according to one embodiment. Figure 5 illustrates an example of interactions between different components of energy harvesting PMIC 205. It is pointed out that the components of Figure 5 that have the same reference numbers (or names) as components of any other figure can operate or function in any manner similar to that described herein, but are not limited to such. Further, the lines connecting the components represent communication between different components of energy harvesting PMIC 205. Referring now to Figure 5. System 500 shows a two-stage hybrid switching topology for power conditioning. In one embodiment, system 500 includes mode 1 501 and mode 2 502.
System 500 is configured to transition between operation modes using multiple switches (or FETs). Mode 1 501 illustrates a charging phase in the energy harvesting PMIC. Mode 1 501 includes flying capacitor 505 and output current 504 (Io). Mode 2 502 illustrates a discharging phase (also referred to as a release phase) in the energy harvesting PMIC. Mode 2 502 includes flying capacitor 503 and output current 504 (Io). Note that flying capacitors 503 and 505 may be the same capacitor or different capacitors. According to one embodiment, system 500 illustrates a power conversion and control of a two-stage hybrid switching topology of an energy harvesting PMIC. Since the boost converter in the front stage operates at a higher switching frequency compared to the switching capacitor charge pump in the second stage, an output current of the boost converter or an input current at Vx (a voltage point) is close to a constant value for the switching capacitor operation/analysis. Therefore, the "soft charge" of CFLY is achieved and illustrated as output current 504 (Io), which is a constant current even during the transition of operation modes 501-502. In one embodiment, mode 1 501 illustrates a charging phase for the energy harvesting PMIC by providing output current 504 to charge flying capacitor 505 (CFLY). Meanwhile, mode 2 502 illustrates a discharging phase for the energy harvesting PMIC by putting output current 504 and flying capacitor 503 into series and supplying a load. As such, the two-stage hybrid switching topology provides two modes 501-502 that allow self-powering for an IOT smart sensor node and also improve the overall power efficiency due to the elimination of the "hard" power loss associated with a conventional switching capacitor converter. Figure 6A is a block diagram illustrating a battery-operating mode according to one embodiment. Figure 6B is a detailed circuit diagram illustrating a battery-operating mode according to one embodiment. Figures 6A-B illustrate an example of interactions between different components of energy harvesting PMIC 205. It is pointed out that the components of Figures 6A-B that have the same reference numbers (or names) as components of any other figure can operate or function in any manner similar to that described herein, but are not limited to such. Further, the lines connecting the components represent communication between different components of energy harvesting PMIC 205. Referring now to Figure 6A. System 600 illustrates harvesting energy sources 601-602, energy harvesting PMIC 205, boost converter 620, switched capacitor charge pump 330, battery 610, and loads 603-604. According to one embodiment, energy harvesting PMIC 205 implements battery 610 (or any other energy storage device) to operate as a power source or a load. In one embodiment, when battery 610 operates as a power source, switched capacitor charge pump 330 receives power from battery 610 and forwards the power to boost converter 620 via a "Discharge" path (as shown in Figure 6A). In one embodiment, when battery 610 operates as a load, switched capacitor charge pump 330 receives power from boost converter 620 and forwards the power to battery 610 via a "Charge" path (as shown in Figure 6A). Loads 603-604 are not limited to any particular type of load. For example, a load may include an IOT smart sensor node, a CPU, a mobile phone, etc.
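For illustration, the charge/discharge path selection described for Figures 6A-B (and elaborated in the next paragraph) can be sketched as a simple power-balance rule; this is an editorial Python sketch with assumed names and milliwatt-scale example values, not the disclosed control circuit.

```python
# Illustrative sketch of the bi-directional battery path selection for Figures 6A-B;
# the function name, units, and the simple power-balance rule are assumptions.

def select_battery_path(harvested_power_w, load_power_w):
    """Return ('charge' | 'discharge' | 'idle', power_delta_w) for the battery."""
    p_source = sum(harvested_power_w)   # total power from all harvesting sources
    p_load = sum(load_power_w)          # total power demanded by all loads
    delta = p_source - p_load
    if delta > 0:
        return "charge", delta          # excess energy charges the battery (charge path)
    if delta < 0:
        return "discharge", -delta      # battery makes up the shortfall (discharge path)
    return "idle", 0.0

# Example with assumed figures: 5 mW harvested versus a 4 mW load demand.
path, power = select_battery_path([0.003, 0.002], [0.004])
# -> ('charge', 0.001): the spare 1 mW flows to the battery through the charge path.
```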
In one embodiment, if the power provided by harvesting energy sources 601-602 is greater than the power required to supply loads 603-604 (i.e., all the loads of system 600), battery 610 operates as the load. Furthermore, the energy (e.g., excess energy) that is not required to supply loads 603-604 is used to charge battery 610 through the "charge" path, as shown in Figure 6A. Meanwhile, if the power required to supply loads 603-604 is greater than the power provided by harvesting energy sources 601-602, battery 610 operates as the power source and supplies power through the "discharge" path to boost converter 620, as shown in Figure 6A. Referring now to Figure 6B. Figure 6B illustrates an exemplary circuit diagram of Figure 6A. Specifically, system 650 illustrates energy harvesting PMIC 205 configured in a battery-operating mode. In the battery-operating mode, according to one embodiment, boost converter 620 receives energy from battery 610 through switched capacitor charge pump 330 at its input, and/or supplies power to battery 610 through switched capacitor charge pump 330 at its output. For example, energy/power may flow from all the energy sources and/or the battery to all the loads and/or the battery, while regulating all the energy sources and load voltages by sourcing or supplying the difference between the available source power and the required load power from/to the battery. Note that boost converter 620 may include multiple inputs of energy sources, including an input power supply from a battery, and multiple outputs. As such, boost converter 620 can operate or function in any manner similar to that described herein (i.e., boost converter 320), but is not limited to such. Figure 7 illustrates an exemplary computing system 700 such as a personal computing system (e.g., desktop or laptop) or a mobile or handheld computing system such as a tablet device or smartphone. As illustrated in Figure 7, the basic computing system may include a central processing unit 701 (which may include, e.g., a plurality of general purpose processing cores and a main memory controller disposed on an applications processor or multi-core processor), system memory 702, a display 703 (e.g., touchscreen, flat-panel), a local wired point-to-point link (e.g., USB) interface 704, various network I/O functions 705 (such as an Ethernet interface and/or cellular modem subsystem), a wireless local area network (e.g., Wi-Fi) interface 706, a wireless point-to-point link (e.g., Bluetooth) interface 707 and a Global Positioning System interface 708, various sensors 709_1 through 709_N (e.g., one or more of a gyroscope, an accelerometer, a magnetometer, a temperature sensor, a pressure sensor, a humidity sensor, etc.), a camera 710, a battery 711, a power management control unit 712, a speaker and microphone 713 and an audio coder/decoder 714. An applications processor or multi-core processor 750 may include one or more general purpose processing cores 715 within its CPU 701, one or more graphical processing units 716, a memory management function 717 (e.g., a memory controller) and an I/O control function 718. The general-purpose processing cores 715 typically execute the operating system and application software of the computing system. The graphics processing units 716 typically execute graphics intensive functions to, e.g., generate graphics information that is presented on the display 703. The memory control function 717 interfaces with the system memory 702.
During operation, data and/or instructions are typically transferred between deeper non-volatile (e.g., "disk") storage 720 and system memory 702. The power management control unit 712 generally controls the power consumption of the system 700. For example, a power management control unit may control and manage an energy harvesting PMIC in order to receive power from one or more energy harvesting sources. Each of the touchscreen display 703, the communication interfaces 704-707, the GPS interface 708, the sensors 709, the camera 710, and the speaker/microphone codec 713, 714 all can be viewed as various forms of I/O (input and/or output) relative to the overall computing system including, where appropriate, an integrated peripheral device as well (e.g., the camera 710). Depending on implementation, various ones of these I/O components may be integrated on the applications processor/multi-core processor 750 or may be located off the die or outside the package of the applications processor/multi-core processor 750. Embodiments of the invention may include various processes as set forth above. The processes may be embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor to perform certain processes. Alternatively, these processes may be performed by specific hardware components that contain hardwired logic for performing the processes, or by any combination of programmed computer components and custom hardware components. Elements of the present invention may also be provided as a machine-readable medium for storing the machine-executable instructions. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, and magneto-optical disks, FLASH memory, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media or other type of media/machine-readable medium suitable for storing electronic instructions. For example, the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection). Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of transactions on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of transactions leading to a desired result. The transactions are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method transactions. The required structure for a variety of these systems will appear from the description above. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of embodiments of the invention as described herein. In the foregoing specification, embodiments of the invention have been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the invention as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Throughout the description, embodiments of the present invention have been presented through flow diagrams. It will be appreciated that the order of the transactions described in these flow diagrams is intended only for illustrative purposes and is not intended as a limitation of the present invention.
One having ordinary skill in the art would recognize that variations can be made to the flow diagrams without departing from the broader spirit and scope of the invention as set forth in the following claims.The following examples pertain to further embodiments:A power management integrated circuit (PMIC), comprising, a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies; and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply.A power management integrated circuit (PMIC), comprising, a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies; and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply, wherein the switched capacitor charge pump is configured to operate in a step-up mode, and wherein in the step-up mode the charge pump can step-up the intermediate voltage at a ratio of at least one of 1:2 and 1:3.A power management integrated circuit (PMIC), comprising, a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies; a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply; and a load to receive the second power supply, wherein the load includes a battery that operates as an input power supply of the boost converter if the plurality of first power supplies drops below a voltage threshold.A power management integrated circuit (PMIC), comprising, a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies, wherein the boost converter includes a switching inductor coupled between a first node and a second node, the first node to receive the plurality of first power supplies, and the second node coupled between the intermediate voltage and a ground; and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply.A power management integrated circuit (PMIC), comprising, a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies; a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply; and a plurality of energy conversion devices configured to acquire energy from a plurality of energy harvesting sources and convert the acquired energy into the plurality of first power supplies.A power management integrated circuit (PMIC), comprising, a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies, wherein the boost converter generates a plurality of intermediate voltages coupled to a plurality of output terminals and operates in a discontinuous conduction mode, and further comprises a pulse frequency modulation controller; and a 
switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply. A power management integrated circuit (PMIC), comprising, a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies; a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply; and a plurality of energy conversion devices configured to acquire energy from a plurality of energy harvesting sources and convert the acquired energy into the plurality of first power supplies, wherein the plurality of energy conversion devices includes at least one of a photovoltaic (PV) cell, a thermoelectric generator (TEG), a radio frequency (RF) device, and a piezoelectric material. A power management integrated circuit (PMIC), comprising, a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies; and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply, wherein the switched capacitor charge pump includes at least a plurality of charging circuits, a first capacitor to store charge, and a second capacitor to receive charge from the first capacitor, wherein the second capacitor is coupled to an output terminal of the charge pump. A power management integrated circuit (PMIC), comprising, a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies; and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply, wherein the switched capacitor charge pump includes at least one of a charge mode and a discharging mode. A system for energy harvesting, comprising, a load; a plurality of energy harvesting sources; and a power management integrated circuit (PMIC) having a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies, and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply. A system for energy harvesting, comprising, a load; a plurality of energy harvesting sources; and a power management integrated circuit (PMIC) having a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies, and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply, wherein the switched capacitor charge pump is configured to operate in a step-up mode, and wherein in the step-up mode the charge pump can step up the intermediate voltage at a ratio of at least one of 1:2 and 1:3. A system for energy harvesting, comprising, a load; a plurality of energy harvesting sources; a power management integrated circuit (PMIC) having a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies, 
and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply; and a load to receive the second power supply, wherein the load includes a battery that operates as an input power supply of the boost converter if the plurality of first power supplies drops below a voltage threshold. A system for energy harvesting, comprising, a load; a plurality of energy harvesting sources; and a power management integrated circuit (PMIC) having a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies, wherein the boost converter includes a switching inductor coupled between a first node and a second node, the first node to receive the plurality of first power supplies, and the second node coupled between the intermediate voltage and a ground, and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply. A system for energy harvesting, comprising, a load; a plurality of energy harvesting sources; a power management integrated circuit (PMIC) having a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies, and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply; and a plurality of energy conversion devices configured to acquire energy from a plurality of energy harvesting sources and convert the acquired energy into the plurality of first power supplies. A system for energy harvesting, comprising, a load; a plurality of energy harvesting sources; and a power management integrated circuit (PMIC) having a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies, wherein the boost converter generates a plurality of intermediate voltages coupled to a plurality of output terminals and operates in a discontinuous conduction mode, and further comprises a pulse frequency modulation controller, and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply. A system for energy harvesting, comprising, a load; a plurality of energy harvesting sources; a power management integrated circuit (PMIC) having a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies, and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply; and a plurality of energy conversion devices configured to acquire energy from a plurality of energy harvesting sources and convert the acquired energy into the plurality of first power supplies, wherein the plurality of energy conversion devices includes at least one of a photovoltaic (PV) cell, a thermoelectric generator (TEG), a radio frequency (RF) device, and a piezoelectric material. A system for energy harvesting, comprising, a load; a plurality of energy harvesting sources; and a power management integrated circuit (PMIC) having a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost 
converter having a plurality of input terminals coupled to the plurality of first power supplies, and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply, wherein the switched capacitor charge pump includes at least a plurality of charging circuits, a first capacitor to store charge, and a second capacitor to receive charge from the first capacitor, wherein the second capacitor is coupled to an output terminal of the charge pump. A system for energy harvesting, comprising, a load; a plurality of energy harvesting sources; and a power management integrated circuit (PMIC) having a boost converter to receive a plurality of first power supplies and to generate an intermediate voltage, the boost converter having a plurality of input terminals coupled to the plurality of first power supplies, and a switched capacitor charge pump to receive the intermediate voltage and to generate a second power supply, wherein the switched capacitor charge pump includes at least one of a charge mode and a discharging mode. A method for energy harvesting, comprising, a means for providing a power management integrated circuit (PMIC) including a boost converter and a switched capacitor charge pump; a means for receiving a plurality of first power supplies at a plurality of input terminals of the boost converter; a means for generating an intermediate voltage at an output of the boost converter; a means for receiving the intermediate voltage at an input of the switched capacitor charge pump; and a means for generating a second power supply at an output of the switched capacitor charge pump. A method for energy harvesting, comprising, a means for providing a power management integrated circuit (PMIC) including a boost converter and a switched capacitor charge pump; a means for receiving a plurality of first power supplies at a plurality of input terminals of the boost converter; a means for generating an intermediate voltage at an output of the boost converter; a means for receiving the intermediate voltage at an input of the switched capacitor charge pump; a means for generating a second power supply at an output of the switched capacitor charge pump; and a means for receiving the second power supply at a load, wherein the load includes a battery operating as an input power supply of the boost converter if the plurality of first power supplies drops below a voltage threshold. A method for energy harvesting, comprising, a means for providing a power management integrated circuit (PMIC) including a boost converter and a switched capacitor charge pump; a means for receiving a plurality of first power supplies at a plurality of input terminals of the boost converter; a means for generating an intermediate voltage at an output of the boost converter; a means for receiving the intermediate voltage at an input of the switched capacitor charge pump; a means for generating a second power supply at an output of the switched capacitor charge pump; and a means for providing a switching inductor of the boost converter coupled between a first node and a second node of the boost converter, wherein the second node is coupled between the intermediate voltage and a ground; and a means for receiving the plurality of first power supplies at the first node of the boost converter. A method for energy harvesting, comprising, a means for providing a power management integrated circuit (PMIC) including a boost converter and a switched capacitor charge pump; a means for receiving a plurality
of first power supplies at a plurality of input terminals of the boost converter; a means for generating an intermediate voltage at an output of the boost converter; a means for receiving the intermediate voltage at an input of the switched capacitor charge pump; a means for generating a second power supply at an output of the switched capacitor charge pump; and a means for acquiring energy from a plurality of energy harvesting sources using a plurality of energy conversion devices, wherein the plurality of energy conversion devices are configured to convert the acquired energy into the plurality of first power supplies.A method for energy harvesting, comprising, a means for providing a power management integrated circuit (PMIC) including a boost converter and a switched capacitor charge pump; a means for receiving a plurality of first power supplies at a plurality of input terminals of the boost converter; a means for generating an intermediate voltage at an output of the boost converter; a means for receiving the intermediate voltage at an input of the switched capacitor charge pump; a means for generating a second power supply at an output of the switched capacitor charge pump; a means for generating a plurality of intermediate voltages coupled to a plurality of output terminals; a means for operating in a discontinuous conduction mode; and a means for providing a pulse frequency modulation controller.A method for energy harvesting, comprising, a means for providing a power management integrated circuit (PMIC) including a boost converter and a switched capacitor charge pump; a means for receiving a plurality of first power supplies at a plurality of input terminals of the boost converter; a means for generating an intermediate voltage at an output of the boost converter; a means for receiving the intermediate voltage at an input of the switched capacitor charge pump; and a means for generating a second power supply at an output of the switched capacitor charge pump, wherein the switched capacitor charge pump further comprises at least one of a charge mode and a discharging mode; and wherein the switched capacitor charge pump further comprises at least a plurality of charging circuits, a first capacitor to store charge, and a second capacitor to receive charge from the first capacitor, wherein the second capacitor is coupled to an output terminal of the charge pump.A method for energy harvesting, comprising, a means for providing a power management integrated circuit (PMIC) including a boost converter and a switched capacitor charge pump; a means for receiving a plurality of first power supplies at a plurality of input terminals of the boost converter; a means for generating an intermediate voltage at an output of the boost converter; a means for receiving the intermediate voltage at an input of the switched capacitor charge pump; and a means for generating a second power supply at an output of the switched capacitor charge pump, wherein the switched capacitor charge pump is configured to operate in a step-up mode, and wherein in the step-up mode the charge pump can step-up the intermediate voltage at a ratio of at least one of 1:2 and 1:3.In the foregoing specification, methods and apparatuses have been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of embodiments as set forth in the following claims. 
The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
A method of forming a through-silicon via (TSV) includes depositing a seed layer over a diffusion barrier layer (206, 208) in a hole formed in a wafer or die substrate (204), and annealing the seed layer prior to plating metal over the seed layer to fill the hole (212). The plating process is improved by increasing the rotation rate of the substrate in the plating solution. |
CLAIMS What is claimed is: 1. A method for forming a through-silicon via comprising: forming a through via hole in a substrate; depositing a seed layer of conductive material in the hole; and annealing the deposited seed layer. 2. The method of claim 1, further comprising forming a conductive material over the annealed seed layer by rotating the substrate in a conductive material plating solution at a rate of at least 50 rpm. 3. The method of claim 2, wherein the seed layer is a copper seed layer and the plating solution comprises a copper plating solution. 4. The method of claim 3, wherein the seed layer is annealed at a temperature of at least 100° C for at least 15 minutes. 5. The method of claim 4, wherein the seed layer is annealed in a degas chamber of a physical vapor deposition system. 6. The method of claim 3, wherein the seed layer is annealed in a rapid thermal processing furnace for a period of between 30 seconds and 10 minutes. 7. The method of claim 1, wherein the hole is formed with a diameter of between 10 μm and 100 μm, a depth of between 20 μm and 200 μm, and an aspect ratio of between 4:1 and 15:1; wherein depositing the seed layer comprises depositing a diffusion layer on a wall surface of the hole, and depositing a copper seed layer on the diffusion layer; and wherein the seed layer is annealed at a temperature of between 50° C and 500° C. 8. The method of claim 7, further comprising: contacting the annealed copper seed layer with a plating solution having a temperature of between 22° C and 500° C, a pH of between 0 and 6, and copper ions in a concentration of at least 50 g/L for a plating period; wherein the current density during said plating period is between about 0.1 mA/cm² and 20 mA/cm²; and rotating said substrate at about 100 rpm during said plating period. 9. The method of claim 8, wherein the seed layer is annealed at a temperature of 200° C for a period of at least 30 minutes in an annealing furnace; and wherein the annealed copper seed layer is contacted with a plating solution having a temperature of about 25° C and a pH of between about 3 and 5, and a copper ion concentration of between about 60 and 100 g/L, during a plating period of less than about 17 minutes. 10. The method of claim 8, wherein the seed layer is annealed in a rapid thermal processing furnace for a period of between 30 seconds and 10 minutes. 11. A method for forming a through-silicon via comprising: forming a through via hole in a substrate; depositing a copper seed layer of conductive material in the hole; and annealing the deposited seed layer at a temperature of between 50° C and 500° C. 12. The method of claim 11, wherein the seed layer is annealed at a temperature of at least 200° C for at least 30 minutes. 13. The method of claim 11, wherein the seed layer is annealed in a rapid thermal processing furnace for a period of between about 30 seconds and 10 minutes at a temperature of between 50° C and 500° C. 14. The method of claim 13, further comprising plating the seed layer with copper by rotating the substrate in a bath of copper plating solution at a rate of at least 50 rpm. |
THROUGH-SILICON VIA FILLING [0001] This relates to semiconductor structures in general and to through-silicon via structures in particular. BACKGROUND [0002] A through-silicon via (TSV) (also referred to as through-substrate via) is a vertical electrical connection passing completely through an integrated circuit or microelectromechanical system (MEMS) substrate such as a silicon wafer or die. TSV technology is important in creating three-dimensional (3D) packages and 3D integrated circuits (IC). It provides interconnection of vertically aligned electronic devices through internal wiring that significantly reduces complexity and overall dimensions of a multi-chip electronic circuit. [0003] A typical TSV process includes formation of TSV holes and deposition of a diffusion barrier layer and a conductive seed layer. A conductive material is then electroplated into TSV holes. Copper is typically used as the conductive material as it supports the high current densities experienced at complex integration, such as 3D packages and 3D integrated circuits, and increased device speed. Furthermore, copper has good thermal conductivity and is available in a highly pure state. [0004] TSV holes typically have high aspect ratios and depositing copper into such structures can be challenging. CVD deposition of copper requires complex and expensive precursors, while PVD deposition often results in voids and limited step coverage. Electroplating is a more common method of depositing copper into TSV structures; however, electroplating also presents a set of challenges because of the TSV's large size and high aspect ratio. Typically, an electroplating solution for TSVs includes copper sulfate as a source of copper ions, sulfuric acid for controlling conductivity, copper chloride for nucleation of suppressor molecules, and several other additives. Methodology and apparatus for filling TSV holes are disclosed in U.S. Patent 8,043,967 which is hereby incorporated by reference. SUMMARY [0005] A high volume copper electroplating method in through-silicon via (TSV) holes having large sizes and high aspect ratios is disclosed. [0006] Two modifications of prior processes can significantly reduce TSV plating/filling times and can also improve the quality of the TSVs that are produced. The first is to anneal the seed layer, after it is formed, prior to plating of the TSV hole. The second is to increase the rotation rate of the substrate substantially during the plating period. [0007] Prior to electroplating a TSV hole, a copper seed layer is applied to the interior wall of the hole and surrounding field. The seed layer in some embodiments is applied over a barrier layer. The substrate containing the TSV is placed in an annealing furnace after the seed layer is applied. The seed layer is annealed at a temperature of at least 150°C for at least 30 minutes. In one embodiment the annealing temperature is at least about 100°C and the annealing period is at least about 30 minutes. Annealing of the seed layer produces a seed layer surface that can be filled/plated with few if any voids and reduced overburden, and at a rate that increases product throughput by about a factor of 6 as compared to a seed layer that has not been annealed. [0008] The plating solution for copper deposition inside the TSV holes may have a relatively low concentration of sulfuric acid and a high concentration of copper ions. 
TSV deposition processes may benefit from faster copper migration through the plating solution and, in particular, to the bottom of the TSV hole. Bath species must rely on diffusion and migration to reach the via bottom, and these are relatively slow processes. The species diffusion/migration times are impacted by solution conductivity (bath and pre-wet), current density, solution temperature and species concentrations. Species that are transported to the via bottom first are protons, accelerator B, Cl, and Cu. Species that are transported to the via bottom much later are believed to be the large molecular weight leveler compound and suppressor molecules. The solution may be maintained at temperatures between about 22°C and 80°C. Copper is electroplated into the TSV hole in a substantially void free manner and, in certain embodiments, over a period of less than about 17 minutes. A relatively fast rotation speed improves the plating process. In some embodiments the speed is between about 50 rpm and 100 rpm. In one embodiment the rotation speed is at least about 100 rpm. [0009] In certain embodiments, the method includes plating a TSV of at least 10 micrometers in diameter and at least 20 micrometers in depth. In some embodiments, a TSV may be between about 10 and 80 micrometers in diameter and between about 20 and 200 micrometers in depth. The TSV holes may have an aspect ratio of between about 4:1 and about 15:1. [0010] The method may include contacting a structure having a TSV hole with a plating solution having a pH between about 0 and 5 and copper ions in a concentration of at least about 50 grams per liter. In a more specific embodiment, the plating solution has a pH between about 0 and 3. In one embodiment, the solution contains between about 50 grams per liter and 200 grams per liter of copper ions. In a more specific embodiment, the concentration of copper ions in the plating solution is between about 60 grams per liter and 100 grams per liter. The source of the copper ions may be copper methane sulfonate, copper sulfate, copper pyrophosphate, copper propanesulfonate, or a combination thereof. [0011] In one specific embodiment, the plating solution has a temperature of about 25°C. Also as indicated, the plating solution may contain very little to no chloride ions. In one embodiment, the plating solution contains chloride ions in a concentration of between 0 and 120 ppm. In one embodiment, the concentration of chloride ions may be about 70 ppm. [0012] The current density during the plating process may be between about 0.1 and 20 mA/cm² over the plating surface. In other embodiments, the current density during the plating process may be between about 0.1 and 10 mA/cm². [0013] In one embodiment of a semiconductor processing apparatus, the apparatus includes one or more electroplating baths and a controller for executing a set of instructions. The apparatus may also include a source or supply of plating solution. In certain embodiments, the plating solution has a pH between about 0 and 3 and copper ions in a concentration of at least about 50 grams per liter. The instructions may include contacting a structure having a TSV hole with the plating solution, and while contacting the structure, plating copper into the through-silicon via hole to completely fill the through-silicon via in a substantially void free manner and over a period of less than about 17 minutes. 
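As a rough consistency check (an editorial illustration, not part of the disclosure), Faraday's law can be used to estimate how long it takes to fill a single TSV at the current densities quoted above. The sketch assumes the stated current density acts over the via's own sidewall-plus-bottom area and that current efficiency is 100%; both are simplifying assumptions.

```python
# Order-of-magnitude estimate of the copper fill time for one TSV via Faraday's law.
# Assumptions (not from the disclosure): the stated current density acts over the
# via's sidewall-plus-bottom area, current efficiency is 100%, and the seed/barrier
# thickness is neglected.
import math

CU_MOLAR_MASS_G_PER_MOL = 63.55
CU_DENSITY_G_PER_CM3 = 8.96
FARADAY_C_PER_MOL = 96485.0
ELECTRONS_PER_CU = 2  # Cu2+ + 2e- -> Cu


def fill_time_minutes(diameter_um: float, depth_um: float, j_ma_per_cm2: float) -> float:
    r_cm = diameter_um * 1e-4 / 2.0
    h_cm = depth_um * 1e-4
    volume_cm3 = math.pi * r_cm ** 2 * h_cm                              # copper needed to fill the via
    plated_area_cm2 = 2.0 * math.pi * r_cm * h_cm + math.pi * r_cm ** 2  # sidewall + bottom
    moles_cu = volume_cm3 * CU_DENSITY_G_PER_CM3 / CU_MOLAR_MASS_G_PER_MOL
    charge_c = moles_cu * ELECTRONS_PER_CU * FARADAY_C_PER_MOL
    current_a = (j_ma_per_cm2 * 1e-3) * plated_area_cm2
    return charge_c / current_a / 60.0


if __name__ == "__main__":
    # A 10 um x 100 um via at 10 mA/cm^2 works out to roughly 11 minutes, which is
    # consistent with the "less than about 17 minutes" plating period described above.
    print(round(fill_time_minutes(10.0, 100.0, 10.0), 1))
```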
The apparatus may also include a temperature controller for maintaining a temperature of the plating solution at about 25°C while plating copper into the TSV hole. The apparatus may also include an assembly for rotating the wafer at a selected rate while it is in the one or more electroplating baths. In one embodiment the selected rate is at least about 100 rpm. BRIEF DESCRIPTION OF THE DRAWINGS [0014] FIG. 1 is a schematic representation of a through-silicon via (TSV) at various processing stages starting with TSV hole formation, followed by lining with a diffusion barrier layer and seed layer, then annealing, then electroplating, then thinning, then forming a solder bump, and then interconnecting with another TSV. [0015] FIG. 2 is a process flow diagram illustrating several operations of TSV processing in accordance with the present invention. [0016] FIG. 3 is a schematic representation of an electroplating apparatus. [0017] FIG. 4 is a schematic representation of a wafer processing apparatus. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS [0018] A through-silicon via (TSV) (also referred to as through-substrate via) is a vertical electrical connection passing completely through an integrated circuit or microelectromechanical system (MEMS) substrate such as a silicon wafer or a die. TSV technology may be used in three-dimensional (3D) packages and 3D integrated circuits, sometimes collectively referred to as 3D stacking. For example, a 3D package may contain two or more integrated circuits (ICs) stacked vertically so that they occupy less space. Traditionally, stacked ICs are wired together along their edges, but such wiring increases the stack's dimensions and usually requires extra layers between the ICs. TSVs provide connections through the body of the ICs leading to smaller stacks. Similarly, a 3D single IC may be built by stacking several silicon wafers and interconnecting them vertically. Such stacks behave as a single device and can have shorter critical electrical paths leading to faster operation. [0019] Electronic circuits using TSVs may be bonded in several ways. One method is "wafer-to-wafer", where two or more semiconductor wafers having circuitry are aligned, bonded, and diced into 3D ICs. Each wafer may be thinned before or after bonding. The thinning process includes removal of the wafer material to expose the bottom part of the TSV. TSVs may be formed into the wafers either before bonding or else created in the stack after bonding and may pass through the silicon substrates between active layers and an external bond pad. Another method is "die-to-wafer" where only one wafer is diced and then the singulated dies are aligned and bonded onto die sites of the second wafer. The third method is "die-to-die" where multiple dies are aligned and bonded. Similar to the first method, thinning and connections may be built at any stage in the last two methods. [0020] FIG. 1 is a schematic representation of a TSV at various processing stages. A TSV may be used with both dies and wafers, generally referred to here as semiconductor substrate 104. Examples of the material suitable for a semiconductor substrate 104 include, but are not limited to, silicon, silicon on insulator, silicon on sapphire, and gallium arsenide. 
It is to be understood that the term "through-silicon via" or "TSV" as used in this disclosure is not limited to silicon substrates, but refers to vias formed through substrates of any semiconductor or other material of a type used for substrates of integrated circuits, microelectromechanical system (MEMS) devices and the like. [0021] In a first cross section 100, a TSV hole 106 is formed in the semiconductor substrate 104. The depth of the TSV hole 106 must be sufficient to expose the bottom 108 after the subsequent thinning operation. Typically, TSV holes may be between about 5 and 400 microns deep; however, the present invention may be practiced with TSV holes of other sizes as well. The diameter of TSV holes may vary between about 1 and 100 microns. The TSV holes typically have a very high aspect ratio, which is defined as the ratio of the TSV hole depth to the TSV hole diameter (usually at the opening). In certain embodiments, the TSV hole aspect ratio may vary between about 3:1 and 10:1. TSV size also depends on which stage of the overall 3D stacking process includes TSV formation. A TSV can be formed before ("via first") or after ("via last") stacking. In the "via-first" configuration, the TSV may be formed before or after creating CMOS structures. In the "via-last" configuration, the TSV may be formed before or after bonding. Moreover, in both configurations, thinning may be performed before or after bonding. [0022] TSV holes may be formed using various methods further discussed in the context of FIG. 2. For example, TSV holes may be etched using a method optimized for high aspect ratio holes. TSV holes may have a slight positive slope and/or a taper near their openings. Such TSV profiles may improve diffusion of metal ions within TSV holes and reduce electroplating time. Returning to FIG. 1, the TSV hole 106 may be formed through a top surface 102, which is often referred to as a wafer field. The top surface 102 may be an active surface of a wafer or a die and include electronic devices. Alternatively, the TSV hole may be formed through the back surface of a wafer or a die where the circuitry is not present. [0023] The cross section 110 shows deposition of a diffusion barrier layer 114 and a seed layer 116 on the sides and the bottom of the TSV hole 106. Suitable materials for the diffusion barrier layer 114 include tantalum, tantalum nitride, tungsten, titanium, and titanium tungsten. In a typical embodiment, the diffusion barrier layer 114 is formed by a physical vapor deposition (PVD) process, such as sputtering, although other techniques such as chemical vapor deposition (CVD) or atomic layer deposition (ALD) may be employed. The seed layer 116 is then deposited to provide a uniform conductive surface for current passage during an electroplating operation. As with the barrier layer deposition, a PVD method may be employed for this operation, although other processes such as electroless deposition may be employed as well. Homogeneity of the seed layer 116 may be important to ensure consistent conductivity and a uniform deposition rate. Copper may be a suitable material for the seed layer. [0024] Cross-sectional view 120 shows the seed layer 116' after the substrate 104 has been annealed. Annealing of the substrate 104 in one embodiment is performed in an annealing furnace at a temperature of about 200°C and for an annealing period of about 30 minutes. The annealed seed layer 116' has larger grains and a rougher surface than the pre-annealed seed layer 116 shown in section 110. 
Annealing the copper seed layer promotes proper balancing between accelerator and leveler additives in subsequent operations, which promotes bottom-up via plating and prevents voids. The cross-sectional configuration 120 represents a unique intermediate product produced using the disclosed method. [0025] The next cross-sectional view 130 depicts conductive material 124 as deposited into the TSV hole 106. In embodiments described herein, the conductive material 124 may be electroplated copper. In a typical electroplating process, the substrate 104 is submerged into the plating solution containing metal ions. Current is then generated through the seed layer 116, causing metal ions to flow towards and deposit on the seed layer. Additional details of electroplating are discussed in the context of FIG. 2. Some of the electroplated metal may deposit on the top surface 110, forming an overburden 126. The overburden 126 is not desirable and may have to be removed in post-electroplating processes, such as chemical mechanical polishing, electroplanarization, or thinning. Such overburden may be substantially reduced or eliminated by annealing as shown in cross section 120 and described with reference to Fig. 2 below. [0026] The next cross section 140 illustrates the substrate 104 after post-electroplating processes to remove overburden. For example, the substrate 104 may go through edge bevel removal, electro-planarization, chemical-mechanical polishing (CMP), thinning, and others. As shown, the overburden 126 is removed. The substrate 104 may be thinned, forming a new bottom surface 136 and exposing the TSV end 138. A top of the substrate 104 may also be thinned, forming a new top surface 134. [0027] The next cross section 150 shows a solder bump 144 attached to one end of the TSV 142. Examples of materials suitable for forming solder bumps include, but are not limited to, lead based solder materials (such as lead, lead/tin alloys, and others), non-lead based solder materials (such as tin/silver, tin/copper/silver, and copper alloys) and the like. Finally, illustration 160 shows a simple electronic stack where the first die 152 is interconnected with the second die 154 through a solder joint 158. The first die 152 may have the first TSV 156. Similarly, the second die 154 may have the second TSV 160. The first TSV 156, the second TSV 160, or both TSVs may have solder bumps that were used to interconnect the two TSVs and to form the solder joint 158. The stack may include additional dies and additional TSVs. For example, the second TSV may be further interconnected to another TSV in a third stack and so on. Similarly, the first die may have a plurality of TSVs, some of which may be connected to TSVs of the second die, while others may be connected to TSVs of other dies. When two adjacent dies have a plurality of interconnections, the corresponding TSVs may need to be aligned. A stack including several dies may also be coupled to a heat spreader to assist in dissipation of the heat generated by the stack. [0028] FIG. 2 is a process flow diagram 200 of one method of forming TSVs. A wafer or a die is provided in operation 202. A TSV hole is then formed in a wafer or a die (block 204). The TSV holes may be formed together with circuit line paths (trenches and Damascene vias) or in a separate operation. In one embodiment, TSV holes are etched, e.g., plasma etched or reactive ion etched. The mask may be a photoresist, for example, in a "via-first" configuration, or a washable hard mask. 
Precise profile control (taper, tilt and sidewall roughness) is essential to ensure the quality of subsequent layer deposition and fill processes. In most cases, the TSVs are etched blind into the substrate, and then revealed by thinning in a post-electroplating operation 212. [0029] Plasma etching is an ion-enhanced chemical process, which uses RF powered plasma sources for the creation of ions and chemically reactive species. Many etching compositions employed to etch silicon include fluorine chemistry. One example employs sulfur hexafluoride together with sidewall passivation based on oxygen and/or hydrogen bromide. In another example, sulfur hexafluoride plasma is used together with a polymerizing gas such as octafluorocyclobutane. In yet another embodiment, TSV holes may be formed (block 204) by laser drilling or laser ablation. For example, a 355 nm wavelength UV YAG laser may be used to form vias as small as 25 micrometers in diameter. In a typical example, one hundred pulses may form an approximately 750-micrometer-deep TSV. [0030] To prevent conductive metal later deposited into the TSV hole from migrating into the surrounding dielectric layer, a diffusion barrier layer may be deposited as indicated at block 206. The deposition therefore occurs before electroplating conductive metal (210). As indicated above, a diffusion barrier layer may be deposited by, for example, a physical vapor deposition process. The thickness and properties of the barrier layer depend upon the type of material employed for the barrier layer. In a typical example employing tantalum nitride, the barrier is deposited to a thickness of between about 5 and 50 nanometers on the TSV sidewalls. (In some process embodiments, the barrier deposition step is omitted.) After depositing the barrier layer, the next operation is depositing a seed layer 208 to provide uniform current deposition during subsequent electroplating; see block 210. As indicated above, the seed layer is typically PVD-formed copper, although other seed layers such as ruthenium may be employed in some embodiments. The seed layer generally should be continuous on all surfaces in the TSV structure in order to avoid localized corrosion dissolution and low local plating rates and to achieve maximum adhesion of the plated copper to the dielectric. A smooth etched surface of the TSV may facilitate deposition of continuous seed layer coverage, since rough and irregular etch profiles can locally shadow some TSV surfaces during PVD deposition. In some embodiments, in order to avoid oxidation by air, the copper seed layer may be at least about 2 nm thick, but a thickness as high as 200 nm is also acceptable for a large TSV structure. [0031] Next, as illustrated by block 210, the wafer or die is heated, as in an annealing furnace, to anneal the copper seed layer. Annealing furnaces are conventional and well known in the art. The furnace annealing temperature for the wafer/die may be in a range of 50° C to 500° C. In one embodiment the annealing furnace temperature is about 200°C. The annealing period may be from 20 minutes to 90 minutes. In one embodiment, in which the annealing furnace temperature is about 200°C, the annealing period is about 30 minutes. Annealing devices other than annealing ovens may also be used. Most commercial PVD systems, such as the Applied Materials Endura, have degas chambers used to preheat the wafers prior to processing. 
Here, this chamber may also be used to anneal the copper seed layer after deposition, before the wafer is unloaded from the tool, during an annealing period of 30 seconds up to 10 minutes at annealing temperatures of about 50° C to 500° C. In addition to conventional furnaces, rapid thermal processing (RTP) furnaces may also be used to reduce annealing time. The annealing time using an RTP furnace may be from 30 seconds up to 10 minutes at temperatures from about 50° C to 500° C. [0032] The wafer is then electroplated with conductive metal that fills the entire volume of the TSV holes (block 212). Voids and seams are highly undesirable. In typical embodiments, copper is used in the electroplating operation. Electroplating into TSV holes has presented challenges. In conventional plating processes, the deposition rate may be faster near the opening, where the seed layer has the greatest thickness (lowest resistance) and more metal ions are present. Moreover, deposition may take several hours to supply enough metal ions to fill an entire TSV hole. Applicants have discovered that annealing the seed layer causes the formation of larger metal grains and a rougher surface in the seed layer, which selectively affects the diffusion rate of accelerator additive and leveler additive. More specifically, it causes a high diffusion rate of accelerator additive and a low diffusion rate of leveler additive into the bottom of a TSV hole as compared to the field. This in turn causes bottom-up filling of the TSVs, which substantially eliminates voids in the TSV fill, minimizes overburden, decreases contamination, and reduces deposition times. Applicants have discovered that, with the addition of annealing step 210, product throughput increases significantly, on the order of 6 times the throughput of an otherwise identical process in which the annealing step is not performed and the rotation rate of the wafer is below about 50 rpm. [0033] A typical technology for plating TSVs uses a plating solution with an approximately 10 gram per liter concentration of sulfuric acid. Such a high acid concentration increases the conductivity of the plating solution, thereby providing for more uniform current distribution. However, a high concentration of highly mobile hydrogen ions impedes the transfer of much larger copper ions by migration. One way to express the relative contribution of ions to the total deposition current flow is the transference number. The transference number for copper ions in a typical electroplating process described above is less than 0.1. Therefore, less than 10% of the overall current flow through the solution in a TSV is carried by migration of cupric ions, while the remainder of the current is carried by other ions, such as hydrogen ions. Such a low transference number is attributed to the combined effect of the high mobility and concentration of hydrogen ions and the much lower mobility, and often relatively low concentration, of copper ions. [0034] In one embodiment, a plating solution that is substantially free from acid may be used. For example, plating solutions with pH values in the range of 2-6 may be used. In a specific embodiment, a plating solution with pH values in the range of 3-5 is used. In such compositions, more copper ions are transported to the surface than in lower pH acidic solutions. [0035] To further facilitate copper deposition, the plating solution may also include high concentrations of copper ions. For example, the concentration of copper ions may be between about 0.8 M and 3.0 M. 
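For orientation (an editorial note, not from the disclosure), the molar concentrations quoted here line up with the gram-per-liter figures used elsewhere in this description, since copper has a molar mass of about 63.55 g/mol. The conversion helper below is illustrative only.

```python
# Convert between the two copper-concentration units used in this description.
# 0.8 M corresponds to roughly 51 g/L and 3.0 M to roughly 191 g/L, bracketing the
# "at least about 50 grams per liter" and "60 to 100 grams per liter" ranges above.
CU_MOLAR_MASS_G_PER_MOL = 63.55


def molar_to_g_per_l(molarity: float) -> float:
    return molarity * CU_MOLAR_MASS_G_PER_MOL


def g_per_l_to_molar(grams_per_liter: float) -> float:
    return grams_per_liter / CU_MOLAR_MASS_G_PER_MOL


if __name__ == "__main__":
    print(round(molar_to_g_per_l(0.8), 1), round(molar_to_g_per_l(3.0), 1))  # ~50.8 and ~190.7 g/L
    print(round(g_per_l_to_molar(80.0), 2))  # ~1.26 M, matching the ~1.25 M solubility figure below
```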
Such plating solutions at low pH, as specified above, may result in the copper ion transference number increasing to a level of not less than about 0.2. In one specific embodiment, the copper ion transference number may be at least about 0.4. The source of copper ions may be copper sulfate, copper methane sulfonate, copper gluconate, copper sulfamate, copper nitrate, copper phosphate, copper chloride, and others. While generally higher concentrations of copper ions are desirable, these concentrations are usually limited by the solubility of the copper-containing salt used. For example, copper sulfate may only be dissolved up to approximately 80 grams/liter (1.25 Molar) (based on copper ion weight) in a typical plating solution formulation at room temperature. [0036] In a more specific embodiment, the plating solution has a temperature of about 25° C. Also as indicated, the plating solution may contain very little to no chloride ions. In one embodiment, the plating solution contains chloride ions in a concentration of between 0 and 120 ppm. In a more specific embodiment, the concentration of chloride ions may be 70 ppm. [0037] To assist in the plating process, one or more levelers, brighteners or accelerators, inhibitors, suppressors, enhancers, and/or surfactants may be used. Accelerators may include a polar sulfur, oxygen, or nitrogen functional group that helps to increase deposition rates and may promote dense nucleation leading to films with a fine grain structure. Accelerators may be present at a low concentration level, for example 0-200 ppm. While the accelerator may produce high deposition rates within the TSV hole, the accelerator may be transported away from the substrate top surface (field region) and/or consumed by reaction with oxygen in the bulk solution. Suppressors are additives that reduce the plating rate and are usually present in the plating bath at higher concentrations, for example 5-1,000 ppm. They are generally polymeric surfactants with high molecular weight, such as polyethylene glycol (PEG). The suppressor molecules slow down the deposition rate by adsorbing on the surface and forming a barrier layer to the copper ions. Because of their large size and low diffusion rate, suppressors are less likely to reach the lower part of the TSV than the wafer field, resulting in lower concentrations at the bottom of the TSV. Therefore, most suppressing effects, using conventional processes, occur on the surface of the substrate (field region), helping to reduce overburden and avoid TSV hole "closing". Levelers are additives whose purpose is to reduce surface roughness. They are present, if at all, in very small concentrations, such as 1-100 ppm, and their blocking effects at the surface are highly localized. As a result, levelers selectively reduce deposition mainly on the high spots, allowing the low spots to level out. This behavior can also be used to enhance the plating rate of copper at the base of the TSV relative to the growth rate on the wafer field. In some cases, levelers may contain functional groups which include nitrogen atoms that exhibit a tendency to form complexes with Cu(I) ions at the wafer interface. Finally, chloride ions may be present in the plating bath at a concentration of no greater than about 300 ppm. In a specific embodiment, the chloride concentration is no greater than about 50 ppm or even no greater than about 2 ppm. 
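To make the additive-transport argument concrete (an editorial illustration; the diffusion coefficients below are assumed order-of-magnitude values, not taken from the disclosure), a one-dimensional diffusion-time estimate, t ≈ L²/(2D), shows why a small accelerator molecule reaches the via bottom long before a large suppressor or leveler molecule does.

```python
# Rough diffusion-time estimate for plating-bath additives reaching the bottom of a
# 100-um-deep TSV, using t ~ L^2 / (2*D). The diffusion coefficients are assumed,
# order-of-magnitude values for a small organic accelerator and a high-molecular-
# weight suppressor such as PEG; they are not values given in the patent.
SPECIES_DIFFUSIVITY_CM2_PER_S = {
    "accelerator (small molecule)": 5e-6,
    "suppressor (large polymer)": 1e-7,
}


def diffusion_time_s(depth_um: float, d_cm2_per_s: float) -> float:
    depth_cm = depth_um * 1e-4
    return depth_cm ** 2 / (2.0 * d_cm2_per_s)


if __name__ == "__main__":
    for name, d in SPECIES_DIFFUSIVITY_CM2_PER_S.items():
        # Roughly 10 s for the accelerator versus roughly 500 s for the suppressor at
        # 100 um depth, consistent with suppressors acting mainly on the field region.
        print(f"{name}: ~{diffusion_time_s(100.0, d):.0f} s")
```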
As discussed above, annealing the copper seed layer produces a beneficial balancing of suppressors and levelers, resulting in a number of beneficial effects. [0038] During TSV plating, the substrate may be rotated and vibrated to provide agitation around the boundary layer. Although conventionally a rotational speed of between about 20 rpm and about 50 rpm has been used, applicants have discovered that increasing the rotation speed to about 100 rpm improves the plating process. Additionally, the dissolution cycle may be performed at high current density for very short intervals, leading to removal of peaks and widening of TSV openings. Furthermore, the deposition interval may be mixed with an equilibration interval that allows the copper ion concentration within the TSV to equilibrate. [0039] Returning to FIG. 2, after electro-filling conductive material into the TSV holes, the wafer may go through one or more post-electrofill processing operations (block 214). If overburden is present, it will need to be removed in one of these operations. For example, chemical mechanical polishing (CMP) may be used. Other operations may include electroplanarization and/or chemical etching. Moreover, a wafer, a die, or a stack containing a TSV may be thinned to expose the bottom of the TSV to be used for other interconnections. Thinning may be carried out by any suitable process, for example grinding, etching, or CMP. [0040] Electroplating hardware is now discussed generally to provide context for the TSV plating process described herein. [0041] The apparatus includes one or more electroplating cells in which the wafers are processed. To optimize the rates and uniformity of electroplating, additives are added to the electrolyte; however, an electrolyte with additives may react with the anode in undesirable ways. Therefore, anodic and cathodic regions of the plating cell are sometimes separated by a membrane so that plating solutions of different composition may be used in each region. Plating solution in the cathodic region is called catholyte; in the anodic region, anolyte. A number of engineering designs can be used in order to introduce anolyte and catholyte into the plating apparatus. For example, the plating apparatus used may be as described in U.S. Patent 8,043,967 incorporated by reference above, or may be as described below with reference to Figs. 3 and 4, or may use other engineering designs now known or later developed. [0042] Fig. 3 is a schematic representation of an electrochemical deposition ("ECD") assembly 308. The ECD assembly 308 includes an ECD chamber 310 that has a catholyte side 312 and an anolyte side 314 separated by a membrane 316. The ECD chamber 310 is in fluid communication with an anolyte tank 320 that may have a nitrogen purge (not shown). A first liquid conduit 322 has a first end 324 connected to an ECD chamber outlet 326 and a second end 328 positioned in tank 320 below the surface 329 of the anolyte fluid therein. Fluid flow through first conduit 322 is in direction 323. A flow control valve assembly 330 may be operably installed on first conduit 322 to control the flow rate of anolyte fluid between the anolyte side 314 of ECD chamber 310 and anolyte tank 320. A second fluid conduit 332 has a first end 334 positioned in anolyte tank 320 below the surface 329 of anolyte therein. The second conduit has a second end 336 connected to an inlet 338 at the bottom of the anolyte side 314 of ECD chamber 310. 
Fluid flows through the second conduit in direction 333 from anolyte tank 320 to ECD chamber 310. A third conduit 340 has a first end 342 connected to an outlet 344 at an upper portion of the anolyte side 314 of ECD chamber 310. Conduit 340 has a T-section 346 from which a first branch line 348 and a second branch line 354 extend. The first branch line 348 has a distal end 350 connected to a vent orifice 352 at the top of anolyte tank 320 above the surface 329 of anolyte in the tank 320. The second branch line 354 has a distal end 356 which is positioned in tank 320 below the surface 329 of anolyte therein. Fluid flow in the third conduit and line 354 is in direction 341 from the ECD chamber 310 to the anolyte tank 320. [0043] The ECD chamber 310 operates as follows. A wafer is inserted into the chamber (310) with the wafer on the catholyte side (312). The wafer may be rotated inside the chamber during processing to reduce a diffuse double layer thickness. A diffuse double layer is one that builds between the surface of the wafer and the plating solution as the wafer is inserted. Copper ions from the plating solution must diffuse through this layer in order to reach the substrate to form the copper film. Electrical contact to the wafer is made in the chamber 310 so as to supply electrical current to the copper seed deposited on the wafer. As current is provided to the copper seed, a source of electrons is provided. Positively charged copper ions in the plating solution are plated on the surface of the copper seed and with time build up to form a film. This buildup of copper is used to fill the TSV. [0044] Fig. 4 is a schematic illustration of a copper plating assembly 370. The assembly 370 may include four copper plating modules 372, 374, 376, 378. The assembly 370 may also include a spin, rinse, dry module 380, a bevel etch module 382, and a loader 384. A batch of wafers can be loaded at the loader (384) and individual wafers are subsequently processed in each of the cells. Each of the copper plating cells (372, 374, 376, 378) is used to electrochemically deposit copper onto the wafer. Wafers can be processed in parallel in any of the four copper plating cells. A schematic of one of these cells 372, 374, 376, 378 is shown in Fig. 3 and described above. The bevel etch module removes plated copper from the edge of the wafer to prevent cross-contamination downline as the wafer is processed in other tools. The spin, rinse, dry module is used to clean the wafer post-processing to remove additional residue from the plating solution used in the copper plating cells (372, 374, 376, 378). [0045] Those skilled in the art will appreciate that many other embodiments and variations are possible within the scope of the claimed invention. |
Apparatus having corresponding methods comprise: a first squelch circuit configured to detect possible squelch signals in a communication signal; and a second squelch circuit configured to i) operate in a low-power state responsive to the first squelch circuit detecting none of the possible squelch signals in the communication signal, and ii) operate in a high-power state responsive to the first squelch circuit detecting one of the possible squelch signals in the communication signal. |
WHAT IS CLAIMED IS: 1. An apparatus comprising: a first squelch circuit configured to detect possible squelch signals in a communication signal; and a second squelch circuit configured to i) operate in a low-power state responsive to the first squelch circuit detecting none of the possible squelch signals in the communication signal, and ii) operate in a high-power state responsive to the first squelch circuit detecting one of the possible squelch signals in the communication signal. 2. The apparatus of claim 1, wherein the second squelch circuit is further configured to: iii) determine an out-of-band (OOB) signaling sequence based on a squelch signal in the communication signal responsive to operating in the high-power state. 3. The apparatus of claim 2, wherein the first squelch circuit comprises: a first squelch detector configured to detect one of the possible squelch signals in the communication signal responsive to an amplitude of the one of the possible squelch signals being greater than a threshold amplitude. 4. The apparatus of claim 3, wherein the first squelch circuit further comprises: a signal detector configured to negate a control signal responsive to the first squelch detector detecting one of the possible squelch signals in the communication signal; wherein the second squelch circuit is further configured to operate in the high-power state responsive to the control signal being negated. 5. The apparatus of claim 3, wherein the first squelch circuit further comprises: a signal detector configured to negate a first control signal responsive to the first squelch detector detecting one of the possible squelch signals in the communication signal; and logic configured to negate a second control signal responsive to i) the first control signal being negated, and ii) an enable signal being asserted; wherein the second squelch circuit is further configured to operate in the high-power state responsive to the second control signal being negated. 6. The apparatus of claim 5, wherein: the enable signal represents a link status of a link, wherein the link provides the communication signal to the apparatus. 7. The apparatus of claim 2, wherein the second squelch circuit comprises: a second squelch detector configured to detect the squelch signal in the communication signal responsive to i) the first squelch detector detecting one of the possible squelch signals in the communication signal, ii) an amplitude of the squelch signal being greater than a first threshold amplitude, and iii) the amplitude of the squelch signal being less than a second threshold amplitude, wherein the second threshold amplitude is greater than the first threshold amplitude. 8. The apparatus of claim 7, wherein the second squelch circuit further comprises: an OOB signal detector configured to determine the OOB signaling sequence based on the squelch signal responsive to the second squelch detector detecting the squelch signal in the communication signal. 9. The apparatus of claim 1, wherein the communication signal is selected from the group consisting of: a serial ATA (SATA) signal; a PCI Express (PCIe) signal; and a Universal Serial Bus (USB) signal. 10. One or more integrated circuits comprising the apparatus of claim 1. 11. An analog front end comprising the apparatus of claim 1. 12. A communications device comprising the analog front end of claim 11. 13. 
The apparatus of claim 1, wherein the apparatus is compliant with all or part of the Serial ATA International Organization: Serial ATA Revision 3.0 specification. 14. A method comprising: detecting possible squelch signals in a communication signal in a first squelch circuit; operating a second squelch circuit in a low-power state responsive to detecting none of the possible squelch signals in the communication signal in the first squelch circuit; and operating the second squelch circuit in a high-power state responsive to detecting one of the possible squelch signals in the communication signal in the first squelch circuit. 15. The method of claim 14, further comprising: determining an out-of-band (OOB) signaling sequence based on one of the possible squelch signals responsive to operating in the high-power state. 16. The method of claim 15, further comprising: detecting one of the possible squelch signals in the communication signal responsive to an amplitude of the one of the possible squelch signals being greater than a threshold amplitude. 17. The method of claim 16, further comprising: negating a control signal responsive to detecting one of the possible squelch signals in the communication signal; and operating the second squelch circuit in the high-power state responsive to the control signal being negated. 18. The method of claim 16, further comprising: negating a first control signal responsive to the first squelch detector detecting one of the possible squelch signals in the communication signal; negating a second control signal responsive to i) the first control signal being negated, and ii) an enable signal being asserted; and operating the second squelch circuit in the high-power state responsive to the second control signal being negated. 19. The method of claim 18, wherein: the enable signal represents a link status of a link, wherein the link provides the communication signal. 20. The method of claim 15, further comprising: detecting a squelch signal in the communication signal responsive to i) detecting one of the possible squelch signals in the communication signal, ii) an amplitude of the squelch signal being greater than a first threshold amplitude, and iii) the amplitude of the squelch signal being less than a second threshold amplitude, wherein the second threshold amplitude is greater than the first threshold amplitude. 21. The method of claim 20, further comprising: determining the OOB signaling sequence based on the squelch signal responsive to detecting the squelch signal in the communication signal. 22. The method of claim 14, wherein the communication signal is selected from the group consisting of: a serial ATA (SATA) signal; a PCI Express (PCIe) signal; and a Universal Serial Bus (USB) signal. |
DUAL SQUELCH DETECTORS AND METHODS FOR LOW POWER STATES CROSS-REFERENCE TO RELATED APPLICATIONS This disclosure claims priority to U.S. Utility Patent Application No. 13/848,817, filed on March 22, 2013, and the benefit of U.S. Provisional Patent Application Serial No. 61/615784, filed on March 26, 2012, entitled "DUAL SQUELCH DETECTOR ARCHITECTURE FOR SATA ATA LOW POWER STATES," the disclosures of which are incorporated by reference herein in their entirety. FIELD The present disclosure relates generally to the field of digital communication. More particularly, the present disclosure relates to reducing power consumption in communication devices employing squelch detectors. BACKGROUND This background section is provided for the purpose of generally describing the context of the disclosure. Work of the presently named inventor(s), to the extent the work is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure. The Serial ATA (SATA) interface defines power states that a SATA device or host (both occasionally referred to herein as "SATA devices") can enter to reduce power consumption. Out-of-band (OOB) signals received by a squelch detector are used to communicate during these low-power states. In the lowest-power states, power consumption is limited by the power used by the squelch detector. The SATA specification defines minimum and maximum amplitudes at which to reject and detect the OOB signals, as well as minimum and maximum durations for elements of the OOB signals used to determine OOB signaling sequences such as COMINIT, COMRESET, and COMWAKE. The squelch detector must consume power to correctly measure these amplitudes and durations, significantly increasing the power consumption of SATA devices in low-power states. SUMMARY [0005] In general, in one aspect, an embodiment features an apparatus comprising: a first squelch circuit configured to detect possible squelch signals in a communication signal; and a second squelch circuit configured to i) operate in a low-power state responsive to the first squelch circuit detecting none of the possible squelch signals in the communication signal, and ii) operate in a high-power state responsive to the first squelch circuit detecting one of the possible squelch signals in the communication signal. [0006] Embodiments of the apparatus can include one or more of the following features. In some embodiments, the second squelch circuit is further configured to: iii) determine an out-of-band (OOB) signaling sequence based on a squelch signal in the communication signal responsive to operating in the high-power state. In some embodiments, the first squelch circuit comprises: a first squelch detector configured to detect one of the possible squelch signals in the communication signal responsive to an amplitude of the one of the possible squelch signals being greater than a threshold amplitude. 
In some embodiments, the second squelch circuit comprises: a second squelch detector configured to detect the squelch signal in the communication signal responsive to i) the first squelch detector detecting one of the possible squelch signals in the communication signal, ii) an amplitude of the squelch signal being greater than a first threshold amplitude, and iii) the amplitude of the squelch signal being less than a second threshold amplitude, wherein the second threshold amplitude is greater than the first threshold amplitude. In some embodiments, the communication signal is selected from the group consisting of: a serial ATA (SATA) signal; a PCI Express (PCIe) signal; and a Universal Serial Bus (USB) signal. [0007] In general, in one aspect, an embodiment features a method comprising: detecting possible squelch signals in a communication signal in a first squelch circuit; operating a second squelch circuit in a low-power state responsive to detecting none of the possible squelch signals in the communication signal in the first squelch circuit; and operating the second squelch circuit in a high-power state responsive to detecting one of the possible squelch signals in the communication signal in the first squelch circuit. [0008] Embodiments of the method can include one or more of the following features. Some embodiments comprise determining an out-of-band (OOB) signaling sequence based on one of the possible squelch signals responsive to operating in the high-power state. Some embodiments comprise detecting one of the possible squelch signals in the communication signal responsive to an amplitude of the one of the possible squelch signals being greater than a threshold amplitude. Some embodiments comprise detecting a squelch signal in the communication signal responsive to i) detecting one of the possible squelch signals in the communication signal, ii) an amplitude of the squelch signal being greater than a first threshold amplitude, and iii) the amplitude of the squelch signal being less than a second threshold amplitude, wherein the second threshold amplitude is greater than the first threshold amplitude. In some embodiments, the communication signal is selected from the group consisting of: a serial ATA (SATA) signal; a PCI Express (PCIe) signal; and a Universal Serial Bus (USB) signal. [0009] The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims. DESCRIPTION OF DRAWINGS FIG. 1 shows elements of a computing system according to one embodiment. FIG. 2 shows detail of a SATA analog front end according to one embodiment. FIG. 3 shows detail of a SATA dual squelch detector according to one embodiment. FIG. 4 shows a process for the low-power squelch circuit of FIG. 3 according to one embodiment. FIG. 5 shows a process for the high-performance squelch circuit of FIG. 3 according to one embodiment. FIG. 6 shows detail of a SATA dual squelch detector according to an embodiment where the high-performance squelch circuit enters the high-power state only when an enable signal is asserted. The leading digit(s) of each reference numeral used in this specification indicates the number of the drawing in which the reference numeral first appears. DETAILED DESCRIPTION Embodiments of the present disclosure feature dual squelch detectors, and corresponding methods, that significantly lower the power required for squelch detection in low-power states. 
Although the disclosed embodiments are discussed in terms of Serial ATA (SATA) devices, the techniques disclosed herein apply to other sorts of signals as well, including PCI Express (PCIe) signals, Universal Serial Bus (USB) signals, and the like. FIG. 1 shows elements of a computing system 100 according to one embodiment. Although in the described embodiments the elements of the computing system 100 are presented in one arrangement, other embodiments may feature other arrangements. For example, elements of the computing system 100 can be implemented in hardware, software, or combinations thereof. [0019] Referring to FIG. 1, the computing system 100 includes a SATA host 102 connected to a SATA device 104 by a cable 106. The SATA host 102 can be implemented, for example, as a personal computer or the like. The SATA device 104 can be implemented, for example, as a hard disk drive or the like. The cable 106 can be implemented, for example, as a flexible printed cable or the like. Both the SATA host 102, and the SATA device 104, include a respective SATA analog front end 108A,B that is connected to the cable 106. Together the SATA analog front ends 108A,B and the cable 106 provide a SATA link. [0020] FIG. 2 shows detail of a SATA analog front end 202 according to one embodiment. Although in the described embodiments the elements of the SATA analog front end 202 are presented in one arrangement, other embodiments may feature other arrangements. For example, elements of the SATA analog front end 202 can be implemented in hardware, software, or combinations thereof. The SATA analog front end 202 can be used as one or both of the SATA analog front ends 108A,B of FIG. 1. [0021] Referring to FIG. 2, the SATA analog front end 202 includes a SATA receiver 204, a SATA transmitter 206, and a SATA dual squelch detector 208. The SATA receiver 204, and the SATA transmitter 206, can be implemented according to conventional techniques. The SATA dual squelch detector 208 can be implemented as described below. [0022] The SATA transmitter 206 receives data (Tx Data), and transmits a differential communication signal 212 on conductors Tx+ and Tx- that represents the data Tx Data over a SATA link. The SATA receiver 204 receives a differential communication signal 214 on conductors Rx+ and Rx- that represents data (Rx Data) over the SATA link, and recovers the data Rx Data from the differential communication signal 214. The SATA dual squelch detector 208 detects squelch signals on the conductors Rx+ and Rx-, and determines out-of-band (OOB) signaling sequences 216 based on the squelch signals. The OOB signaling sequences 216 can be used by a SATA host 102 or a SATA device 104 to recover from low-power states. [0023] FIG. 3 shows detail of a SATA dual squelch detector 302 according to one embodiment. Although in the described embodiments the elements of the SATA dual squelch detector 302 are presented in one arrangement, other embodiments may feature other arrangements. For example, elements of the SATA dual squelch detector 302 can be implemented in hardware, software, or combinations thereof. The SATA dual squelch detector 302 can be used as the SATA dual squelch detector 208 of FIG. 2. [0024] Referring to FIG. 3, the SATA dual squelch detector 302 includes two squelch circuits: a high-performance squelch circuit 304, and a low-power squelch circuit 306. 
The high-performance squelch circuit 304 is capable of operating in either a high-power state or a low-power state responsive to a control signal 316 provided by the low-power squelch circuit 306. In particular, the high-performance squelch circuit 304 operates in the high-power state responsive to negation of the control signal 316, and operates in the low-power state responsive to assertion of the control signal 316. The high-performance squelch circuit 304 detects squelch signals, and determines out-of-band (OOB) signaling sequences 216 based on the squelch signals, only while operating in the high-power state. [0025] The high-performance squelch circuit 304 includes a high-performance squelch detector 308 and an out-of-band (OOB) signal detector 310. The high-performance squelch detector 308, and the out-of-band (OOB) signal detector 310, are each capable of operating in either a high-power state or a low-power state responsive to the control signal 316 provided by the low-power squelch circuit 306. [0026] In particular, the high-performance squelch detector 308, and the OOB signal detector 310, operate in the high-power state responsive to negation of the control signal 316, and operate in the low-power state responsive to assertion of the control signal 316. The high-performance squelch detector 308 detects squelch signals only while operating in the high-power state. The high-performance squelch detector 308 detects a squelch signal based on the amplitude of the squelch signal and two predetermined amplitude thresholds. In particular, the high-performance squelch detector 308 detects a squelch signal only when the amplitude of the squelch signal falls between the predetermined amplitude thresholds. In one embodiment, the predetermined threshold amplitudes may be 75 mV and 200 mV. In some embodiments, the high-performance squelch detector 308 detects squelch signals in compliance with all or part of the Serial ATA International Organization: Serial ATA Revision 3.0 specification, the disclosure thereof incorporated by reference herein in its entirety. The OOB signal detector 310 determines OOB signaling sequences 216 based on squelch signals only while operating in the high-power state. In particular, the OOB signal detector 310 determines OOB signaling sequences 216 based on minimum and maximum durations for elements of the squelch signal. In some embodiments, the OOB signal detector 310 determines OOB signaling sequences 216 in compliance with all or part of the Serial ATA International Organization: Serial ATA Revision 3.0 specification. The low-power squelch circuit 306 controls the power state of the high-performance squelch circuit 304 by asserting and negating the control signal 316. In particular, the low-power squelch circuit 306 negates the control signal 316 responsive to detecting a possible squelch signal, and asserts the control signal 316 otherwise. In this manner, the high-performance squelch circuit 304 is placed in the high-power state only when a possible squelch signal is detected. The low-power squelch circuit 306 includes a low-power squelch detector 312 and a signal detector 314. The low-power squelch detector 312 detects a possible squelch signal based on the amplitude of the possible squelch signal and a predetermined threshold amplitude. In particular, the low-power squelch detector 312 detects a possible squelch signal only when the amplitude of the possible squelch signal is greater than a predetermined threshold amplitude. 
A signal exceeding the predetermined threshold amplitude may, or may not, be a squelch signal, and so is referred to herein as a "possible squelch signal." In one embodiment, the predetermined threshold amplitude may be 100 mV. The signal detector 314 negates the control signal 316 when the low-power squelch detector 312 detects a possible squelch signal in the inbound differential communication signal 214. [0031] FIG. 4 shows a process 400 for the low-power squelch circuit 306 of FIG. 3 according to one embodiment. Although in the described embodiments the elements of process 400 are presented in one arrangement, other embodiments may feature other arrangements. For example, in various embodiments, some or all of the elements of process 400 can be executed in a different order, concurrently, and the like. Also, some elements of process 400 may not be performed, and may not be executed immediately after one another. In addition, some or all of the elements of process 400 can be performed automatically, that is, without human intervention. [0032] Referring to FIG. 4, at 402, process 400 begins. At 404, the low-power squelch detector 312 monitors the inbound differential communication signal 214 for possible squelch signals. In particular, the low-power squelch detector 312 detects a possible squelch signal when the amplitude of the possible squelch signal is greater than a predetermined threshold amplitude. [0033] At 406, responsive to the low-power squelch detector 312 detecting no possible squelch signals, at 408 the signal detector 314 asserts, or continues to assert, the control signal 316. But at 406, responsive to the low-power squelch detector 312 detecting a possible squelch signal, at 410 the signal detector 314 negates the control signal 316. [0034] FIG. 5 shows a process 500 for the high-performance squelch circuit 304 of FIG. 3 according to one embodiment. Although in the described embodiments the elements of process 500 are presented in one arrangement, other embodiments may feature other arrangements. For example, in various embodiments, some or all of the elements of process 500 can be executed in a different order, concurrently, and the like. Also, some elements of process 500 may not be performed, and may not be executed immediately after one another. In addition, some or all of the elements of process 500 can be performed automatically, that is, without human intervention. [0035] Referring to FIG. 5, at 502, process 500 begins. At 504, the high-performance squelch detector 308, and the OOB signal detector 310, monitor the control signal 316. At 506, responsive to detecting the control signal being asserted, at 508, the high-performance squelch detector 308, and the OOB signal detector 310, operate in the low-power state. In the low-power state, the circuits in the high-performance squelch detector 308, and the OOB signal detector 310, can be powered off, except for those circuits required to monitor the control signal 316, and to power on the remaining circuits responsive to detecting the control signal 316 being negated. Then, at 504, the high-performance squelch detector 308, and the OOB signal detector 310, continue to monitor the control signal 316. [0036] At 506, responsive to detecting the control signal being negated, at 510, the high-performance squelch detector 308, and the OOB signal detector 310, operate in the high-power state. 
In the high-power state, the circuits in the high-performance squelch detector 308, and the OOB signal detector 310, are powered on and fully functional. Then, at 512, the high-performance squelch detector 308 monitors the inbound differential communication signal 214 for squelch signals. In particular, the high-performance squelch detector 308 detects a squelch signal when the amplitude of the squelch signal is greater than a predetermined minimum threshold amplitude and less than a predetermined maximum threshold amplitude. [0037] At 514, responsive to the high-performance squelch detector 308 detecting no squelch signal during a predetermined interval, at 508, the high-performance squelch detector 308, and the OOB signal detector 310, operate in the low-power state. Then, at 504, the high-performance squelch detector 308, and the OOB signal detector 310, continue to monitor the control signal 316. [0038] At 514, responsive to the high-performance squelch detector 308 detecting a squelch signal during the predetermined interval, at 516, the OOB signal detector 310 determines an OOB signaling sequence 216 based on the squelch signal. For example, the OOB signal detector 310 determines a SATA OOB signaling sequence 216 such as COMINIT, COMRESET, and COMWAKE. Then, at 508, the high-performance squelch detector 308, and the OOB signal detector 310, operate in the low-power state. Then, at 504, the high-performance squelch detector 308, and the OOB signal detector 310, continue to monitor the control signal 316. [0039] In some embodiments, the high-performance squelch circuit 304 enters the high-power state only when an enable signal is asserted. FIG. 6 shows detail of a SATA dual squelch detector 602 according to one such embodiment. Although in the described embodiments the elements of the SATA dual squelch detector 602 are presented in one arrangement, other embodiments may feature other arrangements. For example, elements of the SATA dual squelch detector 602 can be implemented in hardware, software, or combinations thereof. The SATA dual squelch detector 602 can be used as the SATA dual squelch detector 208 of FIG. 2. [0040] Referring to FIG. 6, the SATA dual squelch detector 602 is similar to the SATA dual squelch detector 302 of FIG. 3, but with the addition of logic 604. Logic 604 negates a control signal 606 only when the control signal 316 is negated, and an enable signal 608 is asserted. The high-performance squelch circuit 304 enters the high-power state only when the control signal 606 is negated. In particular, the high-performance squelch detector 308, and the OOB signal detector 310, enter the high-power state only when the control signal 606 is negated. The enable signal 608 can represent, for example, a link status of the SATA link providing the differential communication signal 214. Various embodiments of the present disclosure can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations thereof. Embodiments of the present disclosure can be implemented in a computer program product tangibly embodied in a computer-readable storage device for execution by a programmable processor. The described processes can be performed by a programmable processor executing a program of instructions to perform functions by operating on input data and generating output. 
Embodiments of the present disclosure can be implemented in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. Each computer program can be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language can be a compiled or interpreted language. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, processors receive instructions and data from a read-only memory and/or a random access memory. Generally, a computer includes one or more mass storage devices for storing data files. Such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; optical disks; and solid-state disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM disks. Any of the foregoing can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits). As used herein, the term "module" may refer to any of the above implementations. A number of implementations have been described. Nevertheless, various modifications may be made without departing from the scope of the disclosure. Accordingly, other implementations are within the scope of the following claims. |
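To make the division of labor between the two squelch circuits of FIGS. 3-6 concrete, the following sketch models the control flow in software: a low-power detector compares the signal amplitude to a single threshold (100 mV in one embodiment) and only then wakes a high-performance detector that applies the two-threshold window (75 mV to 200 mV in one embodiment) before OOB sequence detection. The class and method names, and the way amplitudes and the enable signal are represented, are illustrative assumptions rather than the circuit implementation.

```python
# Behavioral sketch of the dual squelch detector (software model, not the circuit).

LOW_POWER_THRESHOLD_V = 0.100    # single threshold used by the low-power detector
HP_MIN_THRESHOLD_V = 0.075       # lower edge of the high-performance detection window
HP_MAX_THRESHOLD_V = 0.200       # upper edge of the high-performance detection window


class HighPerformanceSquelch:
    """Stays in a low-power state until the control signal is negated."""

    def __init__(self) -> None:
        self.powered = False

    def set_power(self, on: bool) -> None:
        self.powered = on

    def detect(self, amplitude_v: float) -> bool:
        # Only meaningful while powered; applies the two-threshold window.
        return self.powered and HP_MIN_THRESHOLD_V < amplitude_v < HP_MAX_THRESHOLD_V


def process_samples(amplitudes_v, enable=True):
    """Yield True for samples the high-performance detector accepts as squelch.

    The low-power comparison gates the power state; 'enable' stands in for the
    link-status enable signal of FIG. 6.
    """
    hp = HighPerformanceSquelch()
    for amplitude in amplitudes_v:
        possible = amplitude > LOW_POWER_THRESHOLD_V   # low-power detector
        hp.set_power(possible and enable)              # control-signal negation -> power up
        yield hp.detect(amplitude)


if __name__ == "__main__":
    samples = [0.02, 0.12, 0.30, 0.15]     # volts; illustrative burst amplitudes
    print(list(process_samples(samples)))  # -> [False, True, False, True]
```

In this model the high-performance stage never evaluates the window while the low-power comparison fails, which is the mechanism that keeps its circuits powered down during idle low-power states.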
A technique for designing circuits including receiving a data object (514) representing a circuit for a first process technology, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component arranged in a first topology; identifying the first sub-circuit in the data object (518) by comparing the first topology to a stored topology, the stored topology associated with the first process technology; identifying a first set of physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit; determining a set of performance parameter values (520) for the first sub-circuit based on a first machine learning model of the first sub-circuit and the identified set of physical parameter values; converting the identified first sub-circuit to a second sub-circuit (522) for a second process technology based on the determined set of performance parameter values; and outputting the second sub-circuit (526). |
CLAIMSWhat is claimed is:1. A method comprising: receiving a first set of sub-circuit physical parameters for electrical components of a sub-circuit, and an indication of a first process technology; determining a first variation of sub-circuit physical parameters for the electrical components of the structural sub-circuit, the first variation including at least one sub-circuit physical parameter that varies from sub-circuit physical parameters of the first set of sub-circuit physical parameters; simulating the first variation of sub-circuit physical parameters in the first process technology to generate a first set of sub-circuit performance parameter values associated with the first variation; training a machine learning (ML) model of the structural sub-circuit based on a set of variations, the set of variations including the first variation and the set of sub-circuit physical parameters associated with the first variation, for the first process technology; and storing the trained ML model.2. The method of claim 1, wherein determining the first variation of sub-circuit physical parameters for the electrical components of the sub-circuit is based on a practical range of sub-circuit physical parameter values for the first process technology.3. The method of claim 1, wherein the ML model of the sub-circuit comprises one of a linear regression, large margin classifier, principal component analysis, tree-based, or neural network machine learning model.4. The method of claim 1, wherein training the ML model includes identifying a set of variables for input to the ML model.5. The method of claim 4, wherein the set of variables for input to the ML model is based on: one of: the sets of sub-circuit physical parameters or generated sub-circuit performance parameters, and one of: the one or more parameters associated with the first process technology or one or more parameters associated with a second process technology.6. The method of claim 1, wherein the sets of variations of sub-circuit physical parameters are identified to show non-linear behavior of the sub-circuit.7. The method of claim 1, wherein sets of variations of sub-circuit physical parameters for the sub-circuit are simulated using a simulation program with integrated circuit emphasis (SPICE) circuit model of the sub-circuit.8. The method of claim 1, wherein the trained ML model of the sub-circuit is stored in a library of trained ML models for the first process technology.9. The method of claim 8, wherein the library of trained ML models includes a trained ML model for each structural sub-circuit of a set of predetermined structural sub-circuits.10. 
A non-transitory program storage device comprising instructions stored thereon to cause one or more processors to: receive a first set of sub-circuit physical parameters for electrical components of a sub-circuit, and an indication of a first process technology; determine a first variation of sub-circuit physical parameters for the electrical components of the structural sub-circuit, the first variation including at least one sub-circuit physical parameter that varies from sub-circuit physical parameters of the first set of sub-circuit physical parameters; simulate the first variation of sub-circuit physical parameters in the first process technology to generate a first set of sub-circuit performance parameter values associated with the first variation; train a machine learning (ML) model of the structural sub-circuit based on a set of variations, the set of variations including the first variation and the set of sub-circuit physical parameters associated with the first variation, for the first process technology; and store the trained ML model.11. The non-transitory program storage device of claim 10, wherein determining the first variation of sub-circuit physical parameters for the electrical components of the sub-circuit is based on a practical range of sub-circuit physical parameter values for the first process technology.12. The non-transitory program storage device of claim 10, wherein the ML model of the sub-circuit comprises one of a linear regression, large margin classifier, principal component analysis, tree-based, or neural network machine learning model.13. The non-transitory program storage device of claim 10, wherein training the ML model includes identifying a set of variables for input to the ML model.14. The non-transitory program storage device of claim 13, wherein the set of variables for input to the ML model is based on:
one of: the sets of sub-circuit physical parameters or generated sub-circuit performance parameters, and one of: the one or more parameters associated with the first process technology or one or more parameters associated with a second process technology.15. The non-transitory program storage device of claim 10, wherein the sets of variations of sub-circuit physical parameters are identified to show non-linear behavior of the sub-circuit.16. The non-transitory program storage device of claim 10, wherein sets of variations of sub-circuit physical parameters for the sub-circuit are simulated using a simulation program with integrated circuit emphasis (SPICE) circuit model of the sub-circuit.17. The non-transitory program storage device of claim 10, wherein the trained ML model of the sub-circuit is stored in a library of trained ML models for the first process technology.18. The non-transitory program storage device of claim 17, wherein the library of trained ML models includes a trained ML model for each structural sub-circuit of a set of predetermined structural sub-circuits.19. An electronic device, comprising: a memory; and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to: receive a first set of sub-circuit physical parameters for electrical components of a sub-circuit, and an indication of a first process technology; determine a first variation of sub-circuit physical parameters for the electrical components of the structural sub-circuit, the first variation including at least one sub-circuit physical parameter that varies from sub-circuit physical parameters of the first set of sub-circuit physical parameters; simulate the first variation of sub-circuit physical parameters in the first process technology to generate a first set of sub-circuit performance parameter values associated with the first variation; train a machine learning (ML) model of the structural sub-circuit based on a set of variations, the set of variations including the first variation and the set of sub-circuit physical parameters associated with the first variation, for the first process technology; and store the trained ML model.20. The electronic device of claim 19, wherein determining the first variation of sub-circuit physical parameters for the electrical components of the sub-circuit is based on a practical range of sub-circuit physical parameter values for the first process technology. |
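A rough software sketch of the training flow recited in claims 1, 10, and 19 might look like the following: generate variations of the sub-circuit physical parameters, run each variation through a simulator to obtain performance values, and fit a simple regression model that is then stored per process technology. The placeholder simulate_variation function stands in for a SPICE simulation, and the linear least-squares fit is just one of the model types the claims mention; all names and numbers are illustrative assumptions.

```python
# Sketch of training a per-sub-circuit ML model from simulated parameter variations.

import numpy as np


def simulate_variation(physical_params: np.ndarray) -> float:
    """Placeholder for a SPICE simulation of one parameter variation.

    Returns a single performance value (e.g., gain); a real flow would return
    a vector of performance parameters per variation.
    """
    w, l, bias = physical_params
    return 2.0 * (w / l) + 0.1 * bias   # toy relationship, assumption only


def make_variations(base: np.ndarray, n: int = 200, spread: float = 0.2) -> np.ndarray:
    """Perturb the base physical parameters within a practical range."""
    rng = np.random.default_rng(0)
    return base * (1.0 + spread * rng.uniform(-1.0, 1.0, size=(n, base.size)))


def train_model(variations: np.ndarray, performance: np.ndarray) -> np.ndarray:
    """Fit a linear model performance ~ physical parameters (with intercept)."""
    design = np.hstack([variations, np.ones((variations.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(design, performance, rcond=None)
    return coeffs


if __name__ == "__main__":
    base_params = np.array([1.0, 0.18, 0.5])          # width, length, bias (illustrative)
    variations = make_variations(base_params)
    performance = np.array([simulate_variation(v) for v in variations])
    model = train_model(variations, performance)
    np.save("diff_pair_28nm_model.npy", model)        # "store the trained ML model"
    print("model coefficients:", model)
```

In a library of trained models, one such file per structural sub-circuit and process technology would play the role of the stored ML models the claims describe.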
PROCESS AWARE COMPACT REPRESENTATION OF INTEGRATED CIRCUITSBACKGROUND[0001] Analog circuits are often used to sense, interact with, and/or control real-world signals. Real-world signals or information are analog, as they are continuous quantities. For example, temperature varies across an infinite range (e.g., has infinitely many values) rather than taking only discrete integer values. In comparison, digital circuits operate on discrete values, ones and zeros, which are used to represent analog signals or information. To help digital circuits handle analog signals or information, digital circuits can interact with or incorporate analog circuits. For example, a temperature sensor may include one or more analog circuits to sample a temperature, one or more hybrid circuits to convert the sampled temperature to a digital value, and one or more digital circuits to process the digital value. Similarly, a digital circuit may process an audio file, a hybrid circuit may perform a digital-to-analog conversion, an analog circuit may amplify the analog signal, and a speaker may output the actual sound encoded in the audio file. It may be understood that, as used herein, an analog circuit may refer to either analog or hybrid circuits (e.g., mixed signal circuits), which may include both analog and digital portions.[0002] As integrated circuits advance, the number of components that can fit in an area of a semiconductor die has increased rapidly. This reduction in size, also known as die shrink, helps reduce costs and improve performance of the resulting circuit chips. While die shrinking and semiconductor scaling techniques are relatively straightforward for digital circuits, scaling analog circuits is much more difficult. For example, analog circuits may be more substantially affected by voltage headroom, gain degradation, signal-to-noise ratio adjustments, etc., as compared to digital circuits. Circuit geometry and configurations, in an analog or hybrid sub-circuit, such as a differential pair, may influence the performance of not only the differential pair, but may also influence the performance of other sub-circuits, such as a current mirror, in another part of the overall circuit. Additionally, different process nodes or semiconductor process technologies may influence how the circuit geometry and configuration affect the performance. Depending on the purpose of the overall circuit, this performance difference may be unacceptable. Scaling between different-sized process nodes may also affect sub-circuits differently such that each sub-
circuit, or even individual components, may have a different scaling factor. Some analog circuits may need extensive manual changes or redesigns when attempting to scale a design between process nodes.SUMMARY[0003] This disclosure relates to techniques for designing circuits. More particularly, but not by way of limitation, aspects of the present disclosure relate to a method including receiving a data object representing a circuit for a first process technology, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, identifying the first sub-circuit in the data object by comparing the first topology to a stored topology, the stored topology associated with the first process technology, identifying sub-circuit physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit, determining a set of sub-circuit performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified sub-circuit physical parameters, converting the identified first sub-circuit to a second sub-circuit for a second process technology based on the determined set of sub-circuit performance parameter values, and outputting the second sub-circuit.[0004] Another aspect of the present disclosure relates to a non-transitory program storage device including instructions stored thereon to cause one or more processors to receive a data object representing a circuit for a first process technology, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, identify the first sub-circuit in the data object by comparing the first topology to a stored topology, the stored topology associated with the first process technology, identify sub-circuit physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit, determine a set of sub-circuit performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified sub-circuit physical parameters, convert the identified first sub-circuit to a second sub-circuit for a second process technology based on the determined set of sub-circuit performance parameter values, and output the converted first sub-circuit.[0005] Another aspect of the present disclosure relates to an electronic device including a memory;
and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to receive a data object representing a circuit for a first process technology, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, identify the first sub-circuit in the data object by comparing the first topology to a stored topology, the stored topology associated with the first process technology, identify sub-circuit physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit, determine a set of sub-circuit performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified sub-circuit physical parameters, convert the identified first sub-circuit to a second sub-circuit for a second process technology based on the determined set of sub-circuit performance parameter values, and output the converted first sub-circuit.[0006] Another aspect of the present disclosure relates to a method comprising receiving a data object representing a circuit, the circuit including a sub-circuit, the sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, receiving a set of stored topologies, identifying the first electrical component, second electrical component, and connections of the first electrical component and second electrical component, determining, based on the connections of the first electrical component, a coupling between the first electrical component and the second electrical component, determining the first topology based on a comparison between the identified first electrical component, the identified second electrical component, the determined coupling between the first electrical component and the second electrical component, and topologies of the set of stored topologies, and outputting the identified first topology.[0007] Another aspect of the present disclosure relates to a non-transitory program storage device comprising instructions stored thereon to cause one or more processors to receive a data object representing a circuit, the circuit including a sub-circuit, the sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, receive a set of stored topologies, identify the first electrical component, second electrical component, and connections of the first electrical component and second electrical component, determine, based on the connections of the first electrical
component, a coupling between the first electrical component and the second electrical component, determine the first topology based on a comparison between the identified first electrical component, the identified second electrical component, the determined coupling between the first electrical component and the second electrical component, and topologies of the set of stored topologies, and output the identified first topology.[0008] Another aspect of the present disclosure relates to an electronic device, comprising a memory, and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to receive a data object representing a circuit, the circuit including a sub-circuit, the sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, receive a set of stored topologies, identify the first electrical component, second electrical component, and connections of the first electrical component and second electrical component, determine, based on the connections of the first electrical component, a coupling between the first electrical component and the second electrical component, determine the first topology based on a comparison between the identified first electrical component, the identified second electrical component, the determined coupling between the first electrical component and the second electrical component, and topologies of the set of stored topologies, and output the identified first topology.[0009] Another aspect of the present disclosure relates to a method comprising receiving a data object representing a circuit for a process technology, the circuit including a first sub-circuit and the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, identifying the first sub-circuit in the circuit by comparing the first topology to a stored topology, the stored topology associated with the first process technology, identifying a first set of physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit, determining a set of performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified set of physical parameter values, converting the identified first sub-circuit to a second sub-circuit for the process technology based on the determined set of performance parameter values, the second sub-circuit having a third electrical component and a fourth electrical component arranged in a second topology, and outputting the second sub-circuit.
[0010] Another aspect of the present disclosure relates to a non-transitory program storage device comprising instructions stored thereon to cause one or more processors to receive a data object representing a circuit for a process technology, the circuit including a first sub-circuit and the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, identify a type of the first sub-circuit based on connections of the first electrical component and the second electrical component, identify the first sub-circuit in the circuit by comparing the first topology to a stored topology, the stored topology associated with the first process technology, identify a first set of physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit, determine a set of performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified set of physical parameter values, convert the identified first sub-circuit to a second sub-circuit for the process technology based on the determined set of performance parameter values, the second sub-circuit having a third electrical component and a fourth electrical component arranged in a second topology, and output the second sub-circuit.[0011] Another aspect of the present disclosure relates to an electronic device, comprising a memory, and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to receive a data object representing a circuit for a process technology, the circuit including a first sub-circuit and the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, identify a type of the first sub-circuit based on connections of the first electrical component and the second electrical component, identify the first sub-circuit in the circuit by comparing the first topology to a stored topology, the stored topology associated with the first process technology, identify a first set of physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit, determine a set of performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified set of physical parameter values, convert the identified first sub-circuit to a second sub-circuit for the process technology based on the determined set of performance parameter values, the second sub-circuit having a third electrical component and a fourth electrical component arranged in a second topology, and output the second sub-circuit.
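As a concrete, if simplified, picture of the conversion flow summarized in paragraphs [0003] and [0009]-[0011], the sketch below matches a sub-circuit against stored topologies, uses a stored model to estimate its performance parameters from its physical parameters, and then sizes a replacement sub-circuit for the target process. The dictionaries, matching test, and re-sizing step are all illustrative assumptions standing in for the netlist parsing, trained ML models, and technology libraries the disclosure describes.

```python
# Simplified sketch of the sub-circuit identify -> characterize -> convert flow.

import numpy as np

# Stored topologies keyed by a canonical connection signature (assumption).
STORED_TOPOLOGIES = {
    ("nmos", "nmos", "shared_source"): "differential_pair",
}

# Pretend per-topology model: performance = coeffs @ [physical params, 1].
STORED_MODELS = {
    "differential_pair": np.array([2.0, -1.5, 0.1, 0.3]),
}


def identify_subcircuit(signature):
    """Compare the extracted connection signature against stored topologies."""
    return STORED_TOPOLOGIES.get(tuple(signature))


def predict_performance(topology, physical_params):
    """Evaluate the stored (here: linear) model for the identified topology."""
    coeffs = STORED_MODELS[topology]
    return float(coeffs @ np.append(physical_params, 1.0))


def convert(topology, target_performance, target_tech_scale):
    """Pick physical parameters in the target technology that roughly preserve
    the performance; here a naive proportional re-sizing (assumption)."""
    base = np.array([1.0, 0.18, 0.5]) * target_tech_scale
    return {"topology": topology, "physical_params": base.tolist(),
            "target_performance": target_performance}


if __name__ == "__main__":
    signature = ["nmos", "nmos", "shared_source"]
    physical = np.array([1.0, 0.18, 0.5])             # source-technology sizing
    topo = identify_subcircuit(signature)
    perf = predict_performance(topo, physical)
    print(convert(topo, perf, target_tech_scale=0.5))
```

The key point the sketch tries to capture is that the conversion is driven by the performance parameters predicted for the identified sub-circuit, not by copying its physical dimensions into the new process technology.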
[0012] Another aspect of the present disclosure relates to a method comprising receiving an indication of a sub-circuit type and a set of sub-circuit performance parameter values, determining a sub-circuit topology based on the sub-circuit type and the set of sub-circuit performance parameter values, determining a set of sub-circuit physical parameter values based on a first machine learning (ML) model of the sub-circuit topology and the set of sub-circuit performance parameter values, generating a data object representing a sub-circuit based on the determined set of sub-circuit physical parameter values and the determined sub-circuit topology, and outputting the data object.[0013] Another aspect of the present disclosure relates to a non-transitory program storage device comprising instructions stored thereon to cause one or more processors to receive an indication of a sub-circuit type and a set of sub-circuit performance parameter values, determine a sub-circuit topology based on the sub-circuit type and the set of sub-circuit performance parameter values, determine a set of sub-circuit physical parameter values based on a first machine learning (ML) model of the sub-circuit topology and the set of sub-circuit performance parameter values, generate a data object representing a sub-circuit based on the determined set of sub-circuit physical parameter values and the determined sub-circuit topology, and output the data object.[0014] Another aspect of the present disclosure relates to an electronic device, comprising a memory, and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to receive an indication of a sub-circuit type and a set of sub-circuit performance parameter values, determine a sub-circuit topology based on the sub-circuit type and the set of sub-circuit performance parameter values, determine a set of sub-circuit physical parameter values based on a first machine learning (ML) model of the sub-circuit topology and the set of sub-circuit performance parameter values, generate a data object representing a sub-circuit based on the determined set of sub-circuit physical parameter values and the determined sub-circuit topology, and output the data object.[0015] Another aspect of the present disclosure relates to a method comprising receiving a first set of sub-circuit physical parameters for electrical components of a sub-circuit, and an indication of a first process technology, determining a first variation of sub-circuit physical parameters for the electrical components of the structural sub-circuit, the first variation including at least one sub-circuit physical parameter that varies from sub-circuit physical parameters of the first set of sub-circuit physical parameters, simulating the first variation of sub-circuit physical parameters in the first process technology to generate a first set of sub-circuit performance parameter values associated
with the first variation, training a machine learning (ML) model of the structural sub-circuit based on a set of variations, the set of variations including the first variation and the set of sub-circuit physical parameters associated with the first variation, for the first process technology, and storing the trained ML model.[0016] Another aspect of the present disclosure relates to a non-transitory program storage device comprising instructions stored thereon to cause one or more processors to receive a first set of sub-circuit physical parameters for electrical components of a sub-circuit, and an indication of a first process technology, determine a first variation of sub-circuit physical parameters for the electrical components of the structural sub-circuit, the first variation including at least one sub-circuit physical parameter that varies from sub-circuit physical parameters of the first set of sub-circuit physical parameters, simulate the first variation of sub-circuit physical parameters in the first process technology to generate a first set of sub-circuit performance parameter values associated with the first variation, train a machine learning (ML) model of the structural sub-circuit based on a set of variations, the set of variations including the first variation and the set of sub-circuit physical parameters associated with the first variation, for the first process technology, and store the trained ML model. [0017] Another aspect of the present disclosure relates to an electronic device, comprising a memory, and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to receive a first set of sub-circuit physical parameters for electrical components of a sub-circuit, and an indication of a first process technology, determine a first variation of sub-circuit physical parameters for the electrical components of the structural sub-circuit, the first variation including at least one sub-circuit physical parameter that varies from sub-circuit physical parameters of the first set of sub-circuit physical parameters, simulate the first variation of sub-circuit physical parameters in the first process technology to generate a first set of sub-circuit performance parameter values associated with the first variation, train a machine learning (ML) model of the structural sub-circuit based on a set of variations, the set of variations including the first variation and the set of sub-circuit physical parameters associated with the first variation, for the first process technology, and store the trained ML model.[0018] Another aspect of the present disclosure relates to a method comprising receiving an initial set of parameters, the initial set of parameters associated with a sub-circuit, interacting a first parameter of the initial set of parameters with other parameters of the initial set of parameters to
generate a set of interacted parameters, adding the interacted parameters to the initial set of parameters to generate a candidate set of parameters, performing a linear regression on parameters of the candidate set of parameters against a set of expected parameter values to determine a predictive value for parameters of the candidate set of parameters, removing parameters of the candidate set of parameters based on a comparison between the predictive value and a predetermined predictive threshold, determining an accuracy of the candidate set of parameters based on the linear regression, comparing the accuracy of the candidate set of parameters to a predetermined accuracy level, wherein if the accuracy of the candidate set of parameters reaches the predetermined accuracy level, outputting the candidate set of parameters, and wherein if the accuracy of the candidate set of parameters does not reach a predetermined accuracy level, repeating the steps of: interacting a second parameter of the initial set of parameters with other parameters of the candidate set of parameters, adding the interacted parameters to the candidate set of parameters, performing the linear regression, removing parameters, determining the accuracy, comparing the accuracy, until: the accuracy of the second candidate set of parameters has reached the predetermined accuracy, or each parameter of the initial set of parameters has been interacted with other parameters of the candidate set a predetermined number of times, and outputting the candidate set of parameters.
[0019] Another aspect of the present disclosure relates to a non-transitory program storage device comprising instructions stored thereon to cause one or more processors to receive an initial set of parameters, the initial set of parameters associated with a sub-circuit, interact a first parameter of the initial set of parameters with other parameters of the initial set of parameters to generate a set of interacted parameters, add the interacted parameters to the initial set of parameters to generate a candidate set of parameters, perform a linear regression on parameters of the candidate set of parameters against a set of expected parameter values to determine a predictive value for parameters of the candidate set of parameters, remove parameters of the candidate set of parameters based on a comparison between the predictive value and a predetermined predictive threshold, determine an accuracy of the candidate set of parameters based on the linear regression, compare the accuracy of the candidate set of parameters to a predetermined accuracy level, wherein if the accuracy of the candidate set of parameters reaches the predetermined accuracy level, output the candidate set of parameters, and wherein if the accuracy of the candidate set of parameters does not reach a predetermined accuracy level, repeat the steps of: interact a second parameter of the initial set of parameters with other parameters of the candidate set of parameters, add the interacted parameters
to the candidate set of parameters, perform the linear regression, remove parameters, determine the accuracy, compare the accuracy, until: the accuracy of the second candidate set of parameters has reached the predetermined accuracy, or each parameter of the initial set of parameters has been interacted with other parameters of the candidate set a predetermined number of times; and output the candidate set of parameters.
[0020] Another aspect of the present disclosure relates to an electronic device, comprising: a memory, and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to receive an initial set of parameters, the initial set of parameters associated with a sub-circuit, interact a first parameter of the initial set of parameters with other parameters of the initial set of parameters to generate a set of interacted parameters, add the interacted parameters to the initial set of parameters to generate a candidate set of parameters, perform a linear regression on parameters of the candidate set of parameters against a set of expected parameter values to determine a predictive value for parameters of the candidate set of parameters, remove parameters of the candidate set of parameters based on a comparison between the predictive value and a predetermined predictive threshold, determine an accuracy of the candidate set of parameters based on the linear regression, compare the accuracy of the candidate set of parameters to a predetermined accuracy level, wherein if the accuracy of the candidate set of parameters reaches the predetermined accuracy level, output the candidate set of parameters, and wherein if the accuracy of the candidate set of parameters does not reach a predetermined accuracy level, repeat the steps of: interact a second parameter of the initial set of parameters with other parameters of the candidate set of parameters, add the interacted parameters to the candidate set of parameters, perform the linear regression, remove parameters, determine the accuracy, compare the accuracy, until: the accuracy of the second candidate set of parameters has reached the predetermined accuracy, or each parameter of the initial set of parameters has been interacted with other parameters of the candidate set a predetermined number of times, and output the candidate set of parameters.
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] For a detailed description of various examples, reference will now be made to the accompanying drawings in which:
[0022] FIG. 1 illustrates an example of circuit design evolution, in accordance with aspects of the present disclosure.
[0023] FIG. 2 is a block diagram of an analog circuit, in accordance with aspects of the present disclosure.
[0024] FIGS. 3A-3B are a circuit diagram of an illustrative circuit block, in accordance with aspects of the present disclosure.
[0025] FIG. 4 is a circuit diagram illustrating a sub-circuit, in accordance with aspects of the present disclosure.
[0026] FIG. 5 is a block diagram of an example embodiment of a technique for automated analog and mixed signal circuit design and validation, in accordance with aspects of the present disclosure.
[0027] FIG. 6 is a block diagram of an example embodiment of a technique for automated analog and mixed signal circuit design and validation, in accordance with aspects of the present disclosure.
[0028] FIGS. 7A-7B illustrate an example set of known topologies of an input or gain stage for a given process technology, in accordance with aspects of the present disclosure.
[0029] FIG. 8 is a system diagram illustrating an overview of a technique for designing a new analog circuit from an original analog circuit, in accordance with aspects of the present disclosure.
[0030] FIG. 9 is a chart illustrating sets of performance parameters for certain sub-circuits, in accordance with aspects of the present disclosure.
[0031] FIG. 10 illustrates an example neural network ML model, in accordance with aspects of the present disclosure.
[0032] FIG. 11 illustrates a series of ML model parameters for threshold stepwise selection, in accordance with aspects of the present disclosure.
[0033] FIG. 12 is a flow diagram illustrating an overview of a technique for designing circuits, in accordance with aspects of the present disclosure.
[0034] FIG. 13 is a flow diagram illustrating a technique for designing circuits, in accordance with aspects of the present disclosure.
[0035] FIG. 14 is a flow diagram illustrating a technique for designing circuits, in accordance with aspects of the present disclosure.
[0036] FIG. 15 is a flow diagram illustrating a technique for designing circuits, in accordance with aspects of the present disclosure.
[0037] FIG. 16 is a flow diagram illustrating a technique for designing circuits, in accordance with aspects of the present disclosure.
[0038] FIGS. 17A-17B are flow diagrams illustrating a technique for designing circuits, in accordance with aspects of the present disclosure.
[0039] FIG. 18 is a block diagram of an embodiment of a computing device, in accordance with aspects of the present disclosure.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0040] Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
[0041] As digital circuits become ever more common in our lives, interfaces between these digital circuits and the real, analog world become ever more prevalent. As improved manufacturing process technologies for fabricating semiconductors are developed, digital circuit sizes have steadily shrunk, allowing digital circuits to take advantage of new, smaller, process technology nodes. Generally, process technology nodes refer to a size of a transistor gate length of a particular semiconductor manufacturing process technology. However, analog circuits have not shrunk at the same pace, as analog circuits often require extensive redesign between different semiconductor manufacturing process technologies and/or process technology nodes (hereinafter referred to as process technology), rather than a relatively simple size shrink. Additionally, circuits may be modified to enhance functionality. For example, a circuit may be modified to adjust an operating voltage of the circuit to help reduce power requirements, or the circuit may be modified to expand an operating range for the circuit. For a particular process technology, aspects of each electrical component of the analog circuit and how the electrical component may interact with characteristics of the manufacturing process may influence the performance of the overall circuit in non-linear and difficult-to-predict ways. This makes simply resizing or copying a circuit from one process technology to another difficult. Similarly, these interactions make modifying the functionality of a circuit difficult to implement.
[0042] Manufacturing process technologies for fabricating semiconductors have evolved as digital and analog circuits have become more common. FIG. 1 illustrates an example 100 of circuit design evolution, in accordance with aspects of the present disclosure. In this example 100, a circuit 102 (individually, 102A, 102B, and 102C, and collectively 102), includes three sub-circuit blocks, such
as bandgap 104 (individually, 104A, 104B, and 104C, and collectively 104), operational amplifier 106 (individually, 106A, 106B, and 106C, and collectively 106), and driver 108 (individually, 108A, 108B, and 108C, and collectively 108). In this example, the circuit 102A may be currently implemented in a first process technology 110. The circuit 102 may be converted from the first process technology 110 to a second process technology 112 while maintaining the same overall operating specifications, such as an operating voltage. For example, in this case, the circuit 102 may be converted from the first process technology 110 to the second process technology 112 while maintaining a 3.3 V operating voltage.
[0043] Additionally, in certain cases, the circuit 102 may be redesigned, for example to enhance functionality. In this example, circuit 102B may be redesigned as circuit 102C to reduce the operating voltage while using the same process technology, here the second process technology 114. In certain cases, redesigning the circuit 102 may include updated design specifications, for example, of electrical devices of certain sub-circuit blocks, such as the operational amplifier 106C and the bandgap 104C. In other cases, re-architecting, for example to adjust a circuit layout, may be included, such as shown for the driver 108C.
[0044] Presently, modifying a circuit design or converting the circuit design from one process technology to another is largely a manual process. For example, a circuit designer may have a set of design specifications that a circuit should meet. These design specifications may be based on expected performance of the circuit, so, for example, an amplifier circuit may have design specifications for output resistance, distortion, impedance, etc. The designer may then convert each electronic component of the circuit, taking into consideration the physical parameters of the electronic component in the original process technology and determining physical parameters of the electronic components in the target process technology. This determination is largely based on experience and intuition. After the electronic components are converted to the target process technology, the completed circuit may be simulated on circuit simulation software, such as simulation program with integrated circuit emphasis (SPICE), against the design specifications. If the converted circuit does not meet the design specifications, the circuit designer may adjust the circuit, such as by changing physical parameters of certain electronic components, and simulating the circuit again. This adjustment is also largely based on experience and intuition and is generally an iterative process. It may be understood that electrical components, as used herein, refer to components or devices which make up a circuit, such as transistors, resistors, capacitors, inductors, diodes, etc.
[0045] To help accelerate efforts to transition analog circuits from one process technology to another, as well as the development of new and improved analog circuits, an implementation of automated analog and mixed signal circuit design and validation is desired.
[0046] In certain cases, while a circuit may be presented visually, for example by a circuit design or simulation program, the underlying representation of the circuit may be in the form of one or more netlists or in a hardware description language (HDL). A netlist, or HDL, generally is a list of the electrical components of a circuit and a list of nodes each electronic component is connected with. In certain cases, attributes, structural information, physical parameters, or other information may also be included in the netlist. Moreover, in certain embodiments, the netlist or HDL is stored in a data object.
Sub-Circuits
[0047] FIG. 2 is a block diagram 200 of an analog circuit, in accordance with aspects of the present disclosure. Analog circuit 202 is any type of analog or hybrid analog-digital circuit that comprises a plurality of electrical components. In most embodiments, analog circuit 202 will process, generate, transmit, or receive an analog signal by one or more of the plurality of electrical components in the analog circuit 202. Analog circuit 202 may be a part of a larger circuit (e.g., an integrated circuit) or analog circuit 202 may be the entire circuit (e.g., an integrated circuit). Typically, the analog circuit 202 consists of one or more circuit blocks 204. Often, circuits are designed such that particular portions of the circuit perform certain tasks. For example, a circuit 202 may be divided into portions, or circuit blocks 204, which perform a particular function. Circuit blocks 204 may be any type of Intellectual Property (“IP”) block, IP core, functional block, or collection of components. In certain embodiments, circuit block 204 is circuit 202. In addition, circuit blocks 204 may provide one or more functions for analog integrated circuit 202. In certain embodiments, circuit block 204 is analog integrated circuit 202. These circuit blocks 204 may be described, for example, in the netlist in a manner similar to software functions and referenced by, for example, another netlist or circuit block describing a larger portion of the circuit. In certain cases, circuit blocks 204 may include other circuit blocks.
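As a rough illustration of the kind of netlist-style data object described above (the class names, fields, and example values here are hypothetical and are not prescribed by this disclosure), a netlist might be represented in Python as a list of components, each carrying its type, its physical parameters, and the nodes its terminals connect to:

```python
# Illustrative sketch only: a minimal netlist-style data object in which
# components record their type, physical parameters, and node connections.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str                                      # e.g., "M1"
    ctype: str                                     # e.g., "nmos", "resistor"
    nodes: dict = field(default_factory=dict)      # terminal -> node name
    params: dict = field(default_factory=dict)     # physical parameters

@dataclass
class Netlist:
    components: list = field(default_factory=list)

    def neighbors(self, comp):
        """Return components sharing at least one node with comp."""
        shared = set(comp.nodes.values())
        return [c for c in self.components
                if c is not comp and shared & set(c.nodes.values())]

# Example: a simple current mirror fragment (values are placeholders).
mirror = Netlist(components=[
    Component("M1", "nmos", {"d": "iref", "g": "iref", "s": "gnd"},
              {"w_um": 2.0, "l_um": 0.5}),
    Component("M2", "nmos", {"d": "iout", "g": "iref", "s": "gnd"},
              {"w_um": 8.0, "l_um": 0.5}),
])
```

A structure of this kind is sufficient to support the parsing and grouping operations discussed later, since shared node names make it straightforward to determine which components are connected to one another.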
[0048] Analog circuit 202 may comprise one or more sub-circuits, and circuit block 204 may also comprise one or more sub-circuits. In certain embodiments, a sub-circuit may be the same as circuit block 204 and/or analog circuit 202. In other embodiments, circuit block 204 may comprise a subset of the one or more sub-circuits of circuit block 204. A sub-circuit refers to a portion of a circuit that is less than the whole circuit (e.g., a subset of a circuit). In alternative embodiments, the sub-circuit may refer to the whole circuit.
[0049] A sub-circuit may comprise one or more of the plurality of electrical components in the analog circuit 202. Sub-circuits may be classified into sub-circuit types. A non-exhaustive list of sub-circuit types may include, but is not limited to, a current mirror, a current divider, a current source, a current reference, a driver circuit, level-shift stage, gain stage, operational amplifier, a current mirror operational amplifier, inverting or non-inverting amplifier, a filter (e.g., a band pass filter, a low pass filter, or a high pass filter), an RC circuit, a resistor ladder, a voltage ladder, a power amplifier, a clock source, an analog-to-digital converter (“ADC”), a digital-to-analog converter (“DAC”), a voltage follower, a voltage regulator, a Darlington transistor or pair, a boost circuit (e.g., a step-up circuit), a buck circuit (e.g., a step-down circuit), a mixer, a modulator, an inverter, a signal conditioner, an integrator, a differentiator, an input stage, an output stage, or any other identifiable sub-circuit type used in analog circuits.
[0050] FIGS. 3A and 3B are a circuit diagram of an illustrative circuit block 300, in accordance with aspects of the present disclosure. As shown in FIGS. 3A and 3B, the circuit block 300 may be further divided into sub-circuits 302. In this example, circuit block 300 performs the function of amplifying a signal. Sub-circuits 302 are portions of the circuit block 300 intended to perform a purpose such as providing a reference voltage, copying a current, filtering a signal, etc. Sub-circuits 302 include a set of electrical components which are structured to operate together to perform the purpose, and the set of electrical components may influence one or multiple output parameters of the circuit block. Sub-circuits may often function as building blocks of the overall circuit block, performing functions common across many circuit blocks. In certain cases, sub-circuits may be classified by their functions into types or categories. Classifying sub-circuits of a circuit block allows circuit blocks to be analyzed for conversion and/or creation at a sub-circuit level, helping to break down a circuit block into more easily analyzed components. Examples of sub-circuit types include current mirror 304, input stages 306, output stages 308, passives 310, voltage ladders, resistor ladders, etc. In certain cases, miscellaneous blocks 312 may also be identified for the circuit block. These miscellaneous blocks 312 may include, for example, nested circuit blocks 314, single electrical components 316 which may not have been included with other identified sub-circuits, unidentified sub-circuits which may need further analysis, etc.
[0051] FIG. 4 is a circuit diagram illustrating a sub-circuit 400, in accordance with aspects of the
present disclosure. In this example, the sub-circuit 400 is a type of current mirror and includes two electrical components, a first transistor 404 and a second transistor 402. Each electrical component has certain physical parameters, which describe measurable physical characteristics about the electrical component, such as a channel width (W) and a channel length (L), input and output currents, impedance, operating region (e.g., conditions), N-type/P-type, etc. The sub-circuit physical parameters may refer to the physical parameters of electrical components of the sub-circuit, and the sub-circuit physical parameters may also include operating information (e.g., operating point, bias point, quiescent point, Q-point, etc.). The operating point represents a current or voltage at the terminals of the electrical component for the electrical component to operate. Each electrical component may perform a particular role; for example, the current (IREF) that flows through the first transistor 404 is mirrored through the second transistor 402 as a function of the ratio (N) of the sizes of the transistors 402 and 404, and the first transistor 404 may act as a current to voltage converter while the second transistor 402 may act as a voltage to current converter. Based on the electrical components of the sub-circuit and their associated physical parameters, the overall sub-circuit may be associated with a variety of sub-circuit performance parameters. While a variety of sub-circuit performance parameters may be determined, not all sub-circuit performance parameters are important for a given sub-circuit type.
[0052] A sub-circuit may have a plurality of sub-circuit parameters. Sub-circuit parameters may comprise sub-circuit physical parameters of the sub-circuit, sub-circuit operational parameters of the sub-circuit, sub-circuit performance parameters of the sub-circuit, or a combination of physical parameters, operational parameters, and performance parameters of the sub-circuit. There may be a variety of sub-circuit parameters, which may describe how a particular sub-circuit performs in a variety of ways. The physical parameters of the electrical components and how electrical components of the sub-circuit are connected are factors which influence the sub-circuit parameters, but the relationship between these factors and the sub-circuit parameters are often non-linear and vary depending on the process technology. Sub-circuit parameters may be determined for a particular sub-circuit using circuit simulation software, such as SPICE simulation. For example, operating information of a sub-circuit may be determined using circuit simulation software. The operating information takes into account external influences on the circuit, for example, characteristics of a supply current, and determines the state (e.g., bias current) of electrical devices of the sub-circuit and/or circuit. In certain cases, determining operating information using circuit simulation software
may be performed relatively quickly as compared to determining sub-circuit performance parameters of a sub-circuit using circuit simulation software.
[0053] Of the sub-circuit parameters, a set of sub-circuit performance parameters may be identified as being more relevant to describing the performance of a particular sub-circuit with respect to physical parameters of electrical components of the sub-circuit. This set of sub-circuit performance parameters may be determined to be more relevant based on the function of the particular sub-circuit. In certain cases, sub-circuit performance parameters of the set of sub-circuit performance parameters included for a particular type of sub-circuit may be predetermined. In certain cases, this predetermination of the set of sub-circuit performance parameters for a particular type of sub-circuit may be made based on expert knowledge and/or experience as to what sub-circuit performance parameters are more relevant for the type of sub-circuit.
[0054] In certain cases, the sub-circuit performance parameters for a sub-circuit type may be predetermined algorithmically. For example, where a circuit including the sub-circuit type in question has been successfully converted from a first process technology to a second process technology, the sub-circuit type may be modeled, such as in circuit simulation software, as designed in the first process technology and modeled again as designed in the second process technology. A variety of sub-circuit performance parameters may be determined for both models and then compared to determine which performance parameters are most closely maintained after the conversion. This process may be repeated with multiple examples of the sub-circuit type, either in the same or different circuits, as well as with different topologies of the sub-circuit type to obtain a representative sample to determine the set of performance parameters that are most relevant for converting the sub-circuit type.
[0055] Sub-circuit performance parameters included in the set of sub-circuit performance parameters may differ for different types of sub-circuits as the purposes served by different types of sub-circuits differ. As an example, a set of sub-circuit performance parameters for current mirrors may include current matching, output impedance, operating region, and a width and length of the transistors. The sub-circuit performance parameters included in this set of sub-circuit performance parameters may differ from sub-circuit performance parameters included in another set of sub-circuit performance parameters associated with an input stage sub-circuit type. In certain cases, if a set of sub-circuit performance parameters has not been defined for a particular sub-circuit type, performance parameters of the electrical components may be used instead of sub-circuit
performance parameters.
[0056] FIG. 5 is a block diagram 500 of an example embodiment of a technique for automated analog and mixed signal circuit design and validation, in accordance with aspects of the present disclosure. The example embodiment illustrated in block diagram 500 provides an overview of an exemplary technique for converting a circuit design from a first process technology to a second process technology, aspects of which are discussed in more detail below. An analog circuit may be divided into one or more circuit blocks. These circuit blocks are often designed to perform a certain function and comprise one or more sub-circuits. These sub-circuits include one or more electrical components which are structured to operate together, and a set of known sub-circuits may be identified. These known sub-circuits may be a number of arrangements of the electrical components (e.g., topologies) known to be sufficiently robust to be useable in a circuit for a process technology. Each component of a sub-circuit may be associated with a certain range of physical parameters. Sets of sub-circuit physical parameters may be identified, each set having a different combination of physical parameters for the electrical components. These known sub-circuits may be modeled, for example as a netlist for use with a circuit simulator, for each set of sub-circuit physical parameters. This modeling may be based on a netlist, which generally is a list of the electrical components of a circuit and a list of nodes each electronic component is connected with. At block 502, models of these known sub-circuits may be simulated using circuit simulation software, such as SPICE. Each set of sub-circuit physical parameters may be simulated to identify certain sub-circuit performance parameters associated with a given set of sub-circuit physical parameters for the first process technology. A ML model for each sub-circuit of the known sub-circuits (or those sub-circuits supported by the particular embodiment) may be trained at block 504 to create a set of trained ML models for a process technology. In this embodiment, these trained ML models in the ML model library 506 may receive, as input, a set of sub-circuit physical parameters for electronic components of the sub-circuit for the first process technology and predict, as output, a set of sub-circuit performance parameters for the first process technology. These trained ML models may be stored in a ML model library 506. In certain cases, the ML model library 506 may be created once for a process technology and reused as needed.
[0057] Similarly, for a second process technology, a set of trained ML models may be configured to receive, as input, a set of sub-circuit performance parameters and predict a set of sub-circuit physical parameters for electronic components of the sub-circuit. As described above, each
component of a sub-circuit may be associated with a certain range of physical parameters, and sets of sub-circuit physical parameters may be identified, each set having a different combination of physical parameters for the electrical components. A set of known sub-circuits may be modeled, for example, as netlists, for each set of sub-circuit physical parameters. Each set of sub-circuit physical parameters may be simulated to identify certain sub-circuit performance parameters associated with a given set of sub-circuit physical parameters for the second process technology at block 508. At block 510, a ML model for each sub-circuit of the known sub-circuits (or those sub-circuits supported by the particular embodiment) may be trained to create a set of trained ML models for the second process technology. In this embodiment, these trained ML models in the ML model library 512 may receive, as input, a set of sub-circuit performance parameters for a second process technology and predict, as output, sub-circuit physical parameters for electronic components of the sub-circuit for the second process technology. This set of trained ML models may be stored in the ML model library.
[0058] Thus, this example includes two sets of ML models. The first set of ML models takes sub-circuit physical parameters for a first process technology and predicts certain sub-circuit performance parameters for a particular sub-circuit. The second set of ML models takes the certain sub-circuit performance parameters for the particular sub-circuit and predicts sub-circuit physical parameters for electrical components of the particular sub-circuit for the second process technology.
[0059] In this example, a representation 514 of a circuit, such as a netlist describing the circuit, may be parsed to identify one or more circuit blocks at block 516. A circuit block may be parsed to identify sub-circuits of the circuit block at block 518. A sub-circuit type may also be identified. At block 520, for each identified sub-circuit, sub-circuit physical parameters for components of the sub-circuit are identified and input to a ML model corresponding to the identified sub-circuit for the first process technology (e.g., stored in ML model library 506) to predict certain sub-circuit performance parameters. These predicted certain sub-circuit performance parameters are then input to a second ML model corresponding to the identified sub-circuit for the second process technology (e.g., stored in ML model library 512) to predict certain sub-circuit physical parameters for components of the sub-circuit in the second process technology. At block 522, a representation of the sub-circuit, such as a netlist, is created for each identified sub-circuit based on the predicted certain sub-circuit physical parameters for components of each sub-circuit and the sub-circuits may be connected into circuit blocks, which in turn are connected to form an overall circuit, thus converting the original circuit to a new circuit in the second process technology. At block 524, this new circuit may be simulated to verify that the new circuit meets the design specifications, and if the design specifications are met, the representation of the new circuit may be output at block 526.
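As a minimal sketch of the two-step, per-sub-circuit conversion just described (the function names and data layout are hypothetical, and the trained models are assumed to expose a scikit-learn-style predict() interface), the flow might look like the following:

```python
# Illustrative sketch only: convert each identified sub-circuit by chaining
# the source-technology model (physical -> performance) with the
# target-technology model (performance -> physical).

def convert_sub_circuit(sub_circuit, ml_library_src, ml_library_dst):
    """sub_circuit: dict with a 'topology' id and a 'physical' parameter vector.
    ml_library_src / ml_library_dst: topology id -> trained regressor."""
    topology = sub_circuit["topology"]

    # Predict performance parameters in the source process technology.
    perf_model = ml_library_src[topology]
    performance = perf_model.predict([sub_circuit["physical"]])[0]

    # Predict physical parameters for the target process technology.
    phys_model = ml_library_dst[topology]
    new_physical = phys_model.predict([performance])[0]

    return {"topology": topology, "physical": list(new_physical)}

def convert_circuit_block(sub_circuits, ml_library_src, ml_library_dst):
    # Convert each identified sub-circuit; the converted sub-circuits would
    # then be reconnected into circuit blocks and simulated for verification.
    return [convert_sub_circuit(sc, ml_library_src, ml_library_dst)
            for sc in sub_circuits]
```

The reconnection of the converted sub-circuits and the final verification simulation are not shown; they correspond to blocks 522, 524, and 526 described above.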
[0060] FIG. 6 is a block diagram 600 of an example embodiment of a technique for automated analog and mixed signal circuit design and validation, in accordance with aspects of the present disclosure. The example embodiment illustrated in block diagram 600 provides an overview of an exemplary technique to create a new circuit or optimize an existing circuit, aspects of which are discussed in more detail below. As discussed in conjunction with FIG. 5, analog circuits may be divided into circuit blocks and sub-circuits. Known sub-circuits may be modeled, for example as a netlist for use with a circuit simulator, for sets of sub-circuit physical parameters. At block 502, models of these known sub-circuits may be simulated using circuit simulation software, such as SPICE. Each set of sub-circuit physical parameters may be simulated to identify certain sub-circuit performance parameters associated with a given set of sub-circuit physical parameters for the first process technology. A ML model for each sub-circuit of the known sub-circuits (or those sub-circuits supported by the particular embodiment) may be trained at block 504 to create a set of trained ML models for a process technology. In this embodiment, certain trained ML models in the ML model library 506 may receive, as input, a set of sub-circuit physical parameters for electronic components of the sub-circuit for the first process technology and predict, as output, a set of sub-circuit performance parameters. Additionally, other trained ML models in the ML model library may receive, as input, sets of sub-circuit performance parameters for the first process technology and predict, as output, a set of sub-circuit physical parameters for electronic components of the sub-circuit for the first process technology. These trained ML models may be stored in a ML model library 506.
[0061] At block 516, a circuit block may be identified from a representation of a circuit 514. For example, an algorithm attempting to optimize an existing circuit may parse the representation of a circuit, for example stored as a data object such as a netlist, to identify a circuit block. As another example, a user attempting to create a new circuit may identify a circuit block 516 they are working on. At block 518, one or more sub-circuits of a circuit block may be identified. For example, an algorithm may parse a circuit block to identify sub-circuits of the circuit block. As another example, the user may identify a sub-circuit type that they are attempting to design. The user may alternatively or additionally identify other sub-circuits of the circuit block. At block 520, a set of performance
parameter values for the sub-circuit may be identified. For example, an algorithm may, for each identified sub-circuit, identify sub-circuit physical parameters for components of the sub-circuit and input these sub-circuit physical parameters to a ML model corresponding to the identified sub-circuit for the first process technology (e.g., stored in ML model library 506) to predict a set of sub-circuit performance parameters. As another example, the user may identify certain sub-circuit performance parameters for the sub-circuit being created.
[0062] At block 602, one or more sub-circuit performance parameters may be provided for optimization. The one or more sub-circuit performance parameters for optimization may be provided along with the other sub-circuit performance parameters of the set of sub-circuit performance parameters. For example, an algorithm may optimize one or more sub-circuit performance parameters from the set of sub-circuit performance parameters identified at block 520 to help enhance the performance of the sub-circuit. Alternatively, the set of sub-circuit performance parameters identified at block 520 may be provided, for example, to attempt to optimize a topology of the sub-circuit. As another example, a user may provide the set of sub-circuit performance parameters and identified sub-circuit type for the sub-circuit being created. In certain cases, an indication of a sub-circuit type and/or sub-circuit topology may also be provided. Alternatively, the sub-circuit type may be inferred, for example, based on the sub-circuit performance parameters included in the set of performance parameters. In yet other cases, the sub-circuit may be optimized based on properties of the components within the topologies, for example, based on size or number of components within topologies of the sub-circuit type.
[0063] The topology of a sub-circuit refers to a specific arrangement of electrical components of a sub-circuit. For a sub-circuit type, there may be many practical topologies for implementing the sub-circuit. For example, FIGS. 7A-7B illustrate a set of different topologies for an input (or gain) stage sub-circuit type.
[0064] At block 604, an optimized sub-circuit may be identified. For example, based on the sub-circuit topology and optimized sub-circuit performance parameters, new sub-circuit physical parameters may be determined for electrical components of the sub-circuit by selecting an appropriate ML model based on the sub-circuit topology and inputting the optimized sub-circuit performance parameters to the ML model to obtain new sub-circuit physical parameters for the sub-circuit topology. In certain cases, the sub-circuit topology of the optimized sub-circuit may be the same as the original sub-circuit topology. In other cases, the sub-circuit topology may be optimized. For example, the optimized sub-circuit performance parameters may be input into multiple ML models of the sub-circuit type to generate multiple sets of sub-circuit physical parameters for multiple sub-circuit topologies of the sub-circuit type. A sub-circuit topology of the multiple sub-circuit topologies may then be selected by an optimization function. The optimization function may be any known optimization technique, such as a cost function, loss function, etc. As an example, the optimization function may select a sub-circuit topology based on the least number of electrical components with sub-circuit physical parameters of those electrical components within a certain range, the range selected for ease of manufacture based on the first process technology. At block 524, this new optimized circuit may be simulated to verify that the new circuit meets the design specifications, and if the design specifications are met, the representation of the new circuit may be output at block 526.
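A minimal sketch of this topology selection, assuming hypothetical model objects with a scikit-learn-style predict() interface and a placeholder cost function (the parameter ranges and weighting are illustrative only), might look like the following:

```python
# Illustrative sketch only: feed the optimized performance targets to the
# ML model of each known topology for the sub-circuit type, then pick the
# candidate with the lowest cost.

def select_topology(performance, topology_models, cost_fn):
    """topology_models: topology id -> (trained model, device count)."""
    best_id, best_phys, best_cost = None, None, float("inf")
    for topo_id, (model, n_devices) in topology_models.items():
        physical = model.predict([performance])[0]
        cost = cost_fn(physical, n_devices)
        if cost < best_cost:
            best_id, best_phys, best_cost = topo_id, physical, cost
    return best_id, best_phys

def example_cost(physical, num_devices, lo=0.1, hi=100.0):
    # Placeholder cost: prefer fewer devices, and penalize predicted physical
    # parameter values outside an assumed easy-to-manufacture range.
    penalty = sum(1 for p in physical if not lo <= p <= hi)
    return num_devices + 10 * penalty
```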
[0065] In certain cases, one or more known sub-circuits may be identified. While there may be multiple ways to design a particular set of electrical components to perform the specific purpose of a sub-circuit, in practice, there may be a limited number of practical electrical component arrangements (e.g., topologies) sufficiently robust to be useable for expected environmental conditions (e.g., temperature range, humidity range, operating voltage, etc.) for a given process technology. For example, FIGS. 7A-7B illustrate an example set 700 of known topologies of an input or gain stage for a given process technology, in accordance with aspects of the present disclosure. Of note, the set 700 of known topologies is not exhaustive. Rather, the set 700 may include topologies known to be workable and/or practically useable. In certain cases, the set 700 of known topologies for a particular sub-circuit may be predetermined, at least in part, based on expert knowledge and/or experience as to what topologies are workable and/or practically useable.
[0066] In certain cases, the set 700 of known topologies may not be fixed and additional topologies may be added as needed. For example, as additional topologies are identified, these additional topologies may be added manually. In other cases, additional topologies may be identified, for example, by noting components and their connections of a new topology candidate that are not identified as a part of a known topology and matching this new topology candidate against a listing of other topology candidates previously not recognized as a part of a known topology. If there is a match, these candidate topologies may be surfaced to a user. Alternatively, a set of sub-circuit performance parameters may be algorithmically determined for the candidate topology, as described above. If the set of sub-circuit performance parameters matches the set of sub-circuit performance
parameters for the corresponding type of sub-circuit, the candidate topology may be added to the set 700 of known topologies. In certain cases, sets of known topologies may be organized based on different types of sub-circuits, or a single set of known sub-circuits may include topologies for all of the types of sub-circuits.
Sub-Circuit Identification
[0067] FIG. 8 is a system diagram illustrating an overview of a technique 800 for designing a new analog circuit from an original analog circuit, in accordance with aspects of the present disclosure. In certain cases, technique 800 may be implemented in software as one or more software programs which may include various modules. While technique 800 is described in the context of an embodiment organized with multiple modules, rules, tools, libraries, etc., it may be understood that this organization has been chosen for clarity and other embodiments may perform the techniques described in technique 800 using differing organizations. In technique 800, an existing analog circuit is described by a first data object representing an original circuit 802. A data object may be a location or region of storage or memory that contains a value or group of values. A data object may include an electronic file in a file system, a block storage, or any other type of electronic storage that can store data. The original circuit may be a schematic, an electrical diagram, netlist, HDL, or any type of representation or design of a circuit (e.g., a circuit design). In addition, the original circuit may be a subset of a larger circuit (e.g., an integrated circuit). The first data object representing the original circuit 802 may be any type of electronic representation or circuit design of a circuit. The first data object representing the original circuit 802 may be associated with a first process technology, such as a current circuit manufacturing process. An indication of the current circuit manufacturing process may be obtained in any way. For example, the indication may be input by a user and/or extracted from the first data object. In certain embodiments, technique 800 may identify the first process technology from the first data object representing an original circuit 802, a circuit design associated with the circuit, a circuit block in the original circuit, a sub-circuit in the original circuit, or one or more electrical components in the original circuit.
[0068] The first data object representing the original circuit 802 may include a representation of electrical components and the interconnections between the electrical components. Accordingly, the first data object representing the original circuit 802 describes how this circuit is designed in the current process technology. In certain cases, the first data object representing the original circuit 802 may be described as one or more netlists, HDL, or any other electronic representation of a circuit. A netlist
is an electronic representation of electrical components in a circuit and the connection between the electrical components in the circuit. In certain embodiments, the netlist may also include nodes that represent the connection between a first electrical component and a second electrical component in the circuit. The netlist may include multiple circuit blocks and may organize the circuit by each circuit block. In certain cases, the netlist, and corresponding circuit blocks, may be organized into portions that perform a particular task or function. In some embodiments, technique 800 may include a component that identifies circuit blocks in the first data object representing the original circuit 802. A circuit block parser 803 may parse the first data object to identify individual circuit blocks. A circuit block may be further parsed by a sub-circuit parser 804 to identify sub-circuits of the circuit block based on a set of sub-circuit parsing rules 806. In other embodiments, technique 800 may identify sub-circuits using the original circuit represented by the first data object. In certain embodiments, the original circuit represented by the first data object 802 is a circuit block.
[0069] The sub-circuit parsing rules 806 may be based, at least in part, on the electrical components of the sub-circuit, physical parameters of the electrical components, how the electrical components of the sub-circuit are connected, what purpose the electrical components serve, what other sub-circuits or electrical components the identified sub-circuit is connected to, etc. In certain cases, the sub-circuit parsing rules 806 may first attempt to identify a sub-circuit based on the electrical components and connections of the electrical components. In the netlist, each electrical component is identified by type (e.g., a transistor (such as an NMOS transistor or PMOS transistor), capacitor, resistor, inductor, or any type of electrical component or device) and connections (e.g., coupling) of the electrical component are provided. The parsing rules 806 may, for example, parse the netlist to group a first electrical component with one or more other electrical components that the first electrical component is connected to, and attempt to match this group of electrical components against a set of known topologies, an example of which is shown in FIGS. 7A-7B. As an example, the rules may indicate that if a certain electrical component is a transistor with a source connected to a certain electrical component or sub-circuit, a drain connected to another certain electrical component or sub-circuit, and a gate connected to another transistor, the other transistor having certain connections, then this certain set of electrical components is a particular topology of an input or gain stage. In certain cases, a role (e.g., branch, diode connected, gain stage, cascade stage, etc.) and physical parameters (e.g., width (W), length (L), W/L ratio, etc.) of the electrical components may also be considered and recorded. For example, a current mirror sub-circuit block may include a
first transistor which is diode connected and a slave current source second transistor. Although both the first and second transistors belong to the same sub-circuit block, the first and second transistors may play different roles in the sub-circuit blocks, may have different electrical component parameters, and may influence performance parameters of the structural block in different ways. The parsing may be repeated with additional other electrical components until either only one or no matching known topology is left. If only one matching topology is left, then the sub-circuit can be identified based on the matching topology. If no matching topology is left, then the last additional other electrical component added may be dropped. By dropping this last additional other electrical component, multiple matching known topologies may be left, and conflict resolution may be performed to determine which known topology of the multiple matching known topologies is a best match. In certain embodiments, the netlist may identify one or more sub-circuits, and sub-circuit parsing rules may identify the sub-circuit using the identification of the sub-circuit by the netlist.
[0070] Conflict resolution may take into account the electrical components of the group of electrical components as well as one or more connections (e.g., inputs and outputs) of the group of electrical components. In certain cases, the connections as between the electrical components of the sub-circuit may be considered and, if a unique match still cannot be found, then connections as between electrical components of the sub-circuit and other sub-circuits and/or other electrical components may be considered as well. For example, referring to FIGS. 2A and 2B, current mirror 220 may be identified as a current mirror as it includes a pair of transistors 222, which are connected, via a pair of resistors 224, to ground 226. Similarly, input stage 228 also includes a pair of transistors 230. But here, the pair of transistors 230 are connected to power supply line VDD 232 via other electrical components, allowing the input stage 228 to be identified as an input stage. These one or more connections may be compared to connections of the multiple matching known topologies to identify the best matching known topology. In certain cases, if no matching known topology is found, the group of electrical components may be flagged for later review and/or the electrical components may be individually analyzed. In certain cases, rather than performing a sub-circuit identification, each electrical component may be individually identified based on connections to the electrical component as well as a role of the electrical component in a functional circuit block.
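The grouping-and-matching loop described above might be sketched as follows; the helper functions (match_topologies(), resolve_conflict(), and a neighbors() lookup like the one sketched earlier) are hypothetical placeholders for the rule-based structural checks and are not part of the disclosure:

```python
# Illustrative sketch only: grow a group of connected components and narrow
# the set of known topologies the group can still match; back off by one
# component if no topology matches, then resolve any remaining conflict.

def identify_sub_circuit(netlist, seed, known_topologies, match_topologies):
    group = [seed]
    candidates = match_topologies(group, known_topologies)
    for comp in netlist.neighbors(seed):
        trial = group + [comp]
        trial_candidates = match_topologies(trial, known_topologies)
        if not trial_candidates:
            # Adding this component breaks every match; drop it and resolve
            # the remaining candidates using the group's connections.
            return resolve_conflict(group, candidates, netlist)
        group, candidates = trial, trial_candidates
        if len(candidates) == 1:
            return candidates[0], group
    return resolve_conflict(group, candidates, netlist)

def resolve_conflict(group, candidates, netlist):
    # Placeholder: compare external connections of the group (e.g., whether
    # the pair ties to ground or to the supply) against each remaining
    # candidate topology and return the best match, or None to flag for review.
    best = candidates[0] if candidates else None
    return best, group
```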
Sub-Circuit Performance Parameters
[0071] Once a sub-circuit has been identified, a set of sub-circuit performance parameters may be determined based on the identification. In certain embodiments, the set of sub-circuit performance parameters may be determined based on the identified function of the sub-circuit, the circuit block, or the analog circuit. FIG. 9 is a chart illustrating sets of sub-circuit performance parameters for certain sub-circuits 900, in accordance with aspects of the present disclosure. How a sub-circuit performs may be described by numerous sub-circuit performance parameters, such as transconductance (Gm), channel conductance (GDS), minimum drain to source voltage at which current saturates (Vosat), drain current mismatch (Idmm), threshold voltage mismatch (Vtmm), output impedance (r0), voltage at a bulk substrate, voltage at a drain, etc. In certain cases, each type of sub-circuit may be associated with a set of sub-circuit performance parameters.
[0072] In certain embodiments, the sets of sub-circuit performance parameters may be defined per type of sub-circuit. Specific sub-circuit performance parameters included in the set of sub-circuit performance parameters may vary from one type of sub-circuit to another. Certain sub-circuit performance parameters 904 may be more relevant for a particular sub-circuit type than for another sub-circuit type. For example, while current mirrors may have a certain transconductance value, the transconductance value of a current mirror may be relatively less important to the function of current mirrors 902. Rather, sub-circuit performance parameters 904 more relevant to the function of current mirrors 902, such as channel conductance, minimum drain to source voltage at which current saturates, and Idmm, may be included in the set of sub-circuit performance parameters for current mirrors. As another example, the set of sub-circuit performance parameters for a differential pair 906 may include the sub-circuit performance parameters 904 of transconductance (Gm), channel conductance (GDS), and threshold voltage mismatch (Vtmm). The sub-circuit performance parameters of the set of sub-circuit performance parameters for a particular sub-circuit may be predetermined. In certain cases, the specific sub-circuit performance parameters of the set of sub-circuit performance parameters for a particular sub-circuit may be determined, at least in part, based on expert knowledge and/or experience. In other embodiments, the relevant sub-circuit performance parameters in the set of sub-circuit performance parameters are dynamically identified based on the identified sub-circuit, the identified function of the sub-circuit, the circuit block, the function of the circuit block, the circuit, or the function of the circuit. In addition, the relevant sub-circuit performance parameters in the set of sub-circuit performance parameters for a type of sub-circuit may vary based on the identified sub-circuit.
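As an illustrative (and intentionally incomplete) encoding of the per-type sets described above, using only the current mirror and differential pair examples given in the text; the dictionary keys and fallback behavior are assumptions for the sketch:

```python
# Illustrative sketch only: map sub-circuit type to the performance
# parameters treated as most relevant for that type.
PERFORMANCE_PARAMETER_SETS = {
    "current_mirror":    ["gds", "vdsat", "idmm"],
    "differential_pair": ["gm", "gds", "vtmm"],
}

def performance_set_for(sub_circuit_type, fallback=("gm", "gds", "vdsat")):
    # If no set is defined for a type, fall back to device-level performance
    # parameters, as described above for undefined sub-circuit types.
    return PERFORMANCE_PARAMETER_SETS.get(sub_circuit_type, list(fallback))
```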
[0073] Returning to FIG. 8, operating simulations 808 (e.g., operating point simulations) may be performed, in accordance with aspects of the present disclosure. For the operating simulations 808, the circuit, or portions thereof, may be simulated in circuit simulation software to determine sub-circuit operational parameters for one or more sub-circuits of the circuit. For example, a circuit block and/or sub-circuit of the original circuit 802 may be simulated in a circuit simulator, such as a SPICE simulator, to determine the sub-circuit operating point information for the sub-circuit. The sub-circuit operational parameters, which may include operating point information and bias point information, refer to a voltage or current (e.g., drain-source voltage (VDS), gate-source voltage (Vgs), etc.) at a particular point of an electrical component with no input signal applied. In certain embodiments, operational parameters may include information or parameters corresponding to one or more operating points or bias points of an electrical component, a sub-circuit, a circuit block, or a circuit.
[0074] In certain cases, the operational parameters may be based on identified sub-circuits. For example, sub-circuit operational parameters may be generated for identified sub-circuits based on a simulation of the circuit block and/or the sub-circuit. In certain cases, the operational parameters may also be generated on an electrical component level. For example, if certain electrical components of the original circuit 802 were not included in an identified sub-circuit, operational parameters may be generated for the electrical component. In other cases where electrical components are identified, operational parameters may be generated for the electrical components of the original circuit 802. In certain cases, the operational parameters may be used, along with the sub-circuit physical parameters (obtained, for example, from the data object) and sub-circuit type information, by a first process technology characterization module 810 to determine sub-circuit performance parameter values for the set of sub-circuit performance parameters associated with the identified sub-circuit or electrical component for the first process technology associated with the original circuit.
[0075] The first process technology characterization module 810, in certain cases, creates, trains, stores, and provides machine learning models for predicting sub-circuit performance parameters based on operating information and sub-circuit physical parameters. The first process technology characterization module 810 may include trained machine learning (ML) models 812 in a ML library 506. In certain cases, there may be ML models corresponding to the known topologies that the technique 800 is configured to operate on. The trained ML model 812 may be stored and represented in a data object. The trained ML model 812 may be stored in the ML library 506. The ML library 506 may store and provide access to a plurality of ML models. In certain embodiments,
the trained ML model 812 may be any set of rules, instructions, algorithms, or any type of data object that recognizes patterns.
[0076] A ML model 812 may be trained 504 based on a set of simulated sub-circuits 502. In certain cases, a ML model 812 may be trained based on variations of the sub-circuit for a first (e.g., source) process technology. For example, a first sub-circuit topology of the known sub-circuit topologies may be simulated 502 using a variety of sub-circuit physical parameters and operational parameters for the first process technology. This simulation may be performed using a circuit simulator, such as a SPICE simulation. The simulation generates a set of sub-circuit performance parameters corresponding to variants of the sub-circuit physical parameters and operational parameters for the first topology in the first process technology. The ML model for the first sub-circuit topology may then be trained 504 using the variants of the sub-circuit physical parameters and operational parameters to predict the corresponding sub-circuit performance parameter for that ML model for the first process technology. The simulated sub-circuits 502 and the results of the simulated sub-circuits 502 may be stored and represented in a data object.
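A minimal training sketch under these assumptions, using a scikit-learn random forest regressor as one of the tree-based modeling techniques discussed below and a placeholder run_spice() function standing in for the circuit simulator, might look like the following; the hold-out split used for verification mirrors the train/test procedure described later:

```python
# Illustrative sketch only: train one model for one sub-circuit topology in
# one process technology. X rows are physical/operational parameter
# variations; y rows are the corresponding simulated performance parameters.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def train_topology_model(variations, run_spice):
    X = np.array(variations)                            # parameter variations
    y = np.array([run_spice(v) for v in variations])    # simulated performance

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Hold-out accuracy used to verify the training.
    score = model.score(X_test, y_test)
    return model, score
```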
to every other node of the layer), fully connected with regularization (where a regularization function is added to a fully connected neural network to help avoid over fitting), and fully connected with dropout (which removes nodes to simplify the network) and optimizers, such as adaptive moment estimation optimizer enhanced neural networks (which reduces data parameters of the network using gradient descent algorithms).[0078] A particular type of sub-circuit implemented in a given process technology may be associated with a practical range of sub-circuit physical parameters (e.g., physical parameters) and operational parameters for the first process technology. The practical range of sub-circuit physical parameters may be provided, for example, by a user and the practical range may be based on limitations of a process technology. For example, a current mirror sub-circuit implemented in the first process technology may have a range of acceptable input reference currents (e.g., 10hA-20mA), a minimum and maximum transistor width (e.g., 1 pm-100pm) and length (e.g., .1 pm-10pm) for electrical components of the sub-circuit, etc. In other cases, the practical range of sub-circuits may be automatically determined, for example, by analyzing a range of parameters associated with a process technology or by simulating the circuit and/or sub-circuit across the range of parameters until the circuit and/or sub-circuit fails in the simulation, etc. A particular sub-circuit topology may then be simulated 502 across a selection of the practical range of sub-circuit physical parameters (e.g., physical parameters) and sub-circuit operational parameters to generate sub-circuit performance parameters (e.g, performance parameters) associated with the particular sub-circuit topology for the first process technology. For example, a particular circuit mirror topology, such as that shown in FIG. 4, may be simulated 502 with varying combinations of sub-circuit physical parameters and sub-circuit operational parameters, such as W/L of electrical components, input and output currents, impedance, operating region (e.g., conditions), N-type/P-type, etc. to generate subcircuit performance parameters (e.g, performance parameters) associated with the respective physical parameters and respective sub-circuit operational parameters. The sub-circuit performance parameters include, but are not limited to those sub-circuit performance parameters discussed in conjunction with FIG. 9, such as transconductance (Gm), channel conductance (GDS), minimum drain to source voltage at which current saturates (Vosat), drain current mismatch (Idmm), threshold voltage mismatch (Vtmm), output impedance (r0), voltage at a bulk substrate, voltage at a drain, etc. In certain embodiments, the sub-circuit may be simulated using varying combinations of sub-circuit parameters (including physical parameters and performance parameters) and operational parameters
to generate additional sub-circuit performance parameters. Moreover, in certain cases, the combinations of sub-circuit physical parameters, sub-circuit performance parameters, and subcircuit operational parameters are not exhaustive, but rather combinations of the sub-circuit physical parameters, sub-circuit performance parameters, and sub-circuit operational parameters are selected and simulated to cover the Gaussian and Uniform distribution encompassing the cases typically identified in analog semiconductor technology manufacturing variations. For example, operating points may be selected substantially uniformly across the practical range of sub-circuit physical parameters with additional operating points selected in ranges of sub-circuit physical parameters most commonly used (or expected to be used) for a given sub-circuit or circuit.[0079] In certain cases, the set of sub-circuit physical parameters, sub-circuit operational parameters, and generated sub-circuit performance parameters resulting from the simulations may be used to train a ML model 504 corresponding to the simulated sub-circuit topology.Use of ML Models[0080] A ML model for a particular sub-circuit topology in a particular process technology may be trained based on the sub-circuit physical and operational parameters, and corresponding generated sub-circuit performance parameters. As discussed above, multiple sets of sub-circuit physical parameters, operation parameters, and corresponding generated sub-circuit performance parameters are obtained across a practical range of sub-circuit physical parameters. These sets of parameters may be divided into a training set and a test set. The ML model 812 may be trained using the training set and the training 504 of the ML model 812 may be verified by the test set. To train the ML model 812, certain parameters may be provided as the input parameters to the ML model 812, the ML model 812 then makes certain predictions based on the input parameters, and these predictions are compared to the known correct output parameters found from the simulation. Based on this comparison, the ML model 812 may be adjusted, for example by adjusting node weights, to allow the ML model 812 to make predictions that closely match the known correct output parameters. The ML model training 504 may then be verified by using the ML model 812 to make predictions using the test set and then comparing the predictions output by the ML model 812 to the known correct output associated with the test set.[0081] The sub-circuit parameters (including sub-circuit physical parameters, sub-circuit performance parameters, and sub-circuit operational parameters) for the particular sub-circuit topology may be used to train a ML model 812 for the particular sub-circuit topology for the first
process technology. For example, the sub-circuit physical parameters and sub-circuit operational parameters from the simulated particular sub-circuit topology may be used as a training set to train a ML model 812 to predict certain sub-circuit performance parameters when presented with a set of sub-circuit physical parameters and sub-circuit operational parameters for the particular sub-circuit topology in the first process technology. This ML model 812 may be tested using the test set to verify the training. For example, sub-circuit physical parameters and operational parameters of the test set may be input to the ML model 812 to produce predicted sub-circuit performance parameters. These predicted sub-circuit performance parameters are then compared against the known sub-circuit performance parameters that were generated by simulating the sub-circuit using the associated subcircuit physical parameters and operation parameters to verify that the ML model 812 is producing accurate predictions. Techniques for training the ML model 812 are discussed in greater detail below. [0082] Once trained, this ML model 812 for the particular sub-circuit topology may be stored in the ML model library 506 along with other ML models for other sub-circuit topologies for the first process technology. In certain cases, the ML model library 506 may include trained ML models for identified sub-circuit topologies supported by an embodiment of technique 800.[0083] Given sub-circuit operational parameters, along with sub-circuit physical parameters for an identified sub-circuit of the original circuit 802, the source circuit process technology characterization module 810 may locate the corresponding trained ML model 812 for the identified sub-circuit from the ML model library 506 and use the located ML model to predict certain subcircuit performance parameters 818 for the identified sub-circuit.[0084] In certain cases, a second process technology characterization module 820 is similar to the first process technology characterization module 810. For example, the second circuit process technology characterization module 820 may also include trained ML models 822 in a ML library 512. In certain cases, there may be ML models corresponding to the known topologies that the technique 800 is configured to operate on. The trained ML model 822 may be stored and represented in a data object. The trained ML model 822 may be stored in the ML library 512. The ML library 512 may store and provide access to a plurality of ML models. In certain embodiments, the trained ML models 822 may be any set of rules, instructions, algorithms, or any type of data object that recognizes patterns. It may be understood that the second circuit process technology characterization module may include ML models associated with any number of circuit process technologies. In certain cases, the second circuit process technology characterization module may include ML models
associated with the first process technology, for example to help optimize a sub-circuit.[0085] A ML model 822 may be trained 510 based on a set of simulated sub-circuits 508. In certain cases, the ML model 822, may be trained based on variations of a sub-circuit for a second (e.g., target) process technology. For example, the first sub-circuit topology of the known sub-circuit topologies may be simulated 508 using a variety of sub-circuit physical parameters and operational parameters for the second process technology. This simulation may be performed using a circuit simulator, such as a SPICE simulation. The simulation generates a set of sub-circuit performance parameters corresponding to each variant of the sub-circuit physical parameters and sub-circuit operational parameters for the first topology in the second process technology. The ML model 822 for the first sub-circuit topology may then be trained 510 using the variants of the sub-circuit physical parameters and operational parameters to predict the corresponding sub-circuit performance parameter for that ML model for the second process technology. The simulated sub-circuits 508 and the results of the simulated sub-circuits 508 may be stored and represented in a data object. It may be understood that for a given process technology multiple sets of sub-circuit physical parameters, sub-circuit operational parameters, and corresponding generated sub-circuit performance parameters are obtained across a practical range sub-circuit physical parameters and that the same multiple sets may be used for ML model training 504 or ML model training 510. In certain cases, the ML model 822 may be stored in a ML model library 512. The ML model 822 may also use a variety of ML modeling techniques, including linear regression models, large margin classifiers (e.g., support vector machines), principal component analysis, tree-based techniques (e.g., random forest or gradient boosted trees), or neural networks. A particular sub-circuit topology may be simulated 508 across a selection of the practical range of certain sub-circuit physical parameters and operational parameters to generate additional sub-circuit performance parameters associated with the particular sub-circuit topology for the second process technology. The practical range of sub-circuit physical parameters may be provided, for example, by a user and the practical range may be based on limitations of a process technology. For example, a particular circuit mirror topology, such as that shown in FIG. 3, may be simulated with varying combinations of sub-circuit physical parameters and sub-circuit operational parameters, such as W/L of electrical components, input and output currents, impedance, operating region (e.g., conditions), N-type/P-type, etc. to generate a set of subcircuit performance parameters associated with the respective sub-circuit physical parameters (e.g., physical parameters and/or operational parameters) for the second process technology. In other
cases, the practical range of sub-circuits may be automatically determined, for example, by analyzing a range of parameters associated with a process technology or by simulating the circuit and/or sub-circuit across the range of parameters until the circuit and/or sub-circuit fails in the simulation, etc. The sub-circuit physical parameters, operational parameters, and generated sub-circuit performance parameters resulting from the simulations for the particular sub-circuit topology may then be used to train a ML model 510 for the particular sub-circuit topology for the second process technology. For example, the sub-circuit physical parameters, sub-circuit operational parameters, and corresponding generated sub-circuit performance parameters resulting from the simulated current mirror topology may be used as a training set to train a ML model to predict sub-circuit physical parameters and sub-circuit operational parameters when presented with a set of sub-circuit performance parameters for the particular current mirror topology in the second process technology.[0086] The ML model 822 may be trained using the training set and the training 510 of the ML model 822 may be verified by the test set. To train the ML model 822, certain parameters may be provided as the input parameters to the ML model 822, the ML model 822 then makes certain predictions based on the input parameters, and these predictions are compared to the known correct output parameters found from the simulation. Based on this comparison, the ML model 822 may be adjusted, for example by adjusting node weights, to allow the ML model 822 to make predictions that closely match the known correct output parameters. For example, after training, the ML model 822 may predict sub-circuit physical parameters and sub-circuit operational parameters when receiving a set of sub-circuit performance values for a current mirror topology in the second process technology.[0087] This ML model may be tested using the test set to verify the training. The training 510 may be verified by using the ML model 822 to make predictions using the test set and then comparing the predictions output by the ML model 822 to the known correct output associated with the test set. For example, sub-circuit performance parameters of the test set may be input to the ML model to produce predicted sub-circuit physical parameters and sub-circuit operational parameters. These predicted sub-circuit physical parameters and sub-circuit operational parameters are then compared against the known sub-circuit physical parameters and sub-circuit operational parameters that were used to simulate the sub-circuit and generate the test set performance parameters, to verify that the ML model is producing accurate predictions.
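By way of illustration only, the following Python sketch shows one way such training and test-set verification might be carried out for a model like the ML model 822, once the simulated parameter sets have been collected. The use of scikit-learn, the random forest regressor, the synthetic data, and the array layouts are assumptions made for the example and are not part of the described technique.

```python
# Minimal sketch (not the specific implementation of characterization module 820):
# train a model that predicts sub-circuit physical/operational parameters from
# sub-circuit performance parameters, then verify it on a held-out test set.
# The random synthetic data stands in for SPICE simulation results; the
# RandomForestRegressor is an assumed, interchangeable modeling choice.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for simulated variants of one topology in the target process technology:
# physical/operational parameters (e.g., W, L, ID) and toy performance parameters
# (e.g., Gm-like, GDS-like quantities) the simulator would have produced for them.
physical = rng.uniform(low=[1e-6, 0.1e-6, 1e-6], high=[100e-6, 10e-6, 1e-3], size=(500, 3))
performance = np.column_stack([
    np.sqrt(physical[:, 0] / physical[:, 1] * physical[:, 2]),  # toy Gm-like quantity
    physical[:, 2] * 0.05,                                      # toy GDS-like quantity
])

# Divide the simulated sets into a training set and a test set.
X_train, X_test, y_train, y_test = train_test_split(
    performance, physical, test_size=0.2, random_state=0)

# Train the model to predict physical/operational parameters from performance parameters.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Verify the training by comparing test-set predictions to the known simulated values.
predicted = model.predict(X_test)
print("mean absolute error per predicted parameter:",
      mean_absolute_error(y_test, predicted, multioutput="raw_values"))
```

In practice, any of the ML modeling techniques listed above (linear regression, support vector machines, tree-based models, or neural networks) could stand in for the regressor used here.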
[0088] Once trained, this ML model for the particular sub-circuit topology may be stored in the set of trained ML models 822 (e.g., another ML model library) along with other ML models for other sub-circuit topologies for the second process technology. In certain cases, the set of trained ML models 822 may include trained ML models for each identified sub-circuit. In certain cases, the model library storing the ML models 812 and the model library storing the ML models 822 may be combined into a single model library. [0089] As indicated above, sub-circuit parameters (including sub-circuit physical parameters, sub-circuit performance parameters, and sub-circuit operational parameters) for a particular sub-circuit topology for the second process technology may be used to train the ML model 822 for the particular sub-circuit topology for the second process technology. For example, certain sub-circuit performance parameters may be used as a training set to train a ML model 822 to predict certain other sub-circuit physical parameters and sub-circuit operational parameters. This ML model 822 may be tested using the test set to verify the training. For example, the predicted other sub-circuit physical parameters and sub-circuit operational parameters are then compared against the known sub-circuit physical parameters and operational parameters, used for simulating the sub-circuit to generate the sub-circuit performance parameters, to verify that the ML model 822 is producing accurate predictions of the particular sub-circuit topology. Once trained, the ML model 822 for the particular sub-circuit topology may be stored in the ML model library 512 along with other ML models for other sub-circuit topologies for the second process technology. In certain cases, the ML model library 512 may include trained ML models for identified sub-circuit topologies supported by an embodiment of technique 800.[0090] Thus, given sub-circuit performance parameters for an identified sub-circuit of the original circuit, as represented by the data object 802, the second process technology characterization module 820 may locate the corresponding trained ML model 822 for the identified sub-circuit from the ML model library 512 and use the trained ML model 822 to predict 828 sub-circuit physical parameters and/or operational parameters for the identified sub-circuit. Once a set of sub-circuit physical parameters has been determined for the components of the identified sub-circuit, the data object representation of the identified sub-circuit is converted to the second process technology using the set of sub-circuit physical parameters for the corresponding components of the sub-circuit. For example, a netlist for the converted sub-circuit may be generated using the determined sub-circuit physical parameters.[0091] Formatting tool 830 may correct formatting, connection drawing, and/or mapping issues
that may arise during the conversion. In certain cases, the formatting tool 830 may extract certain formatting, connection, and/or mapping information from the original circuit design 802 for use in correcting the converted data object (e.g., netlist). In certain cases, this netlist may be connected to or appended to another netlist, such as a netlist for a converted version of the circuit block and other circuit blocks, if needed, to output a data object representing a new circuit 832 for the second process technology. In certain cases, the converted sub-circuit, converted circuit block, and/or new circuit in the data object representing the new circuit 832 may be simulated, for example in a circuit simulator, to verify that the performance of the new circuit in the data object representing the new circuit 832 is within a certain range of performance of the original circuit 802. This range of performance may vary depending on the intended purpose of the new circuit 832 and this range of performance may be defined in a variety of ways, such as by a circuit designer, engineer, etc. As an example, a new control circuit may be tested to ensure that the new control circuit has an output voltage or current within a certain range, such as a percentage, of a target voltage/current for a given input setting. [0092] As indicated above, the trained ML models may be stored in ML model libraries for various process technologies. The ML model libraries may refer to any number of data structures, objects, or processes used to organize, store, and/or retrieve the ML models from a non-transitory data storage medium. For example, ML library 506 and ML library 512 may be logical ML libraries within a single ML library (not shown) that includes a plurality of ML libraries associated with various process technologies. In certain cases, these ML model libraries may also be used as a part of designing new analog circuits. For example, an analog chip designer may want a particular sub-circuit with certain sub-circuit performance parameters. Rather than manually determining the physical parameters of each electrical component of the sub-circuit, a trained ML model corresponding to a particular topology of a sub-circuit may be selected from the ML model library. The sub-circuit performance parameters may be provided to the selected trained ML model and appropriate sub-circuit physical parameters determined by the selected trained ML model. [0093] In certain cases, one or more techniques may be used to select the trained ML model from the ML model library. For example, as executing a trained ML model is often substantially quicker than training the ML model, a given set of sub-circuit performance parameters may be provided to any number of, or all, of the trained ML models of the ML model library corresponding to a selected sub-circuit type. A trained ML model corresponding to a certain topology for the selected sub-circuit may then be selected from the trained ML models that were capable of producing appropriate sub-circuit physical parameters. For example, the trained ML model selected may correspond to the trained ML model with the fewest electrical components for the provided physical parameters. As another example, ML model libraries may be used to select a particular topology for a sub-circuit by providing appropriate sub-circuit parameters, such as sub-circuit performance parameters, to the ML model library (or logic associated with the ML model library). A sub-circuit type may be provided, or may be inferred based on, for example, specific sub-circuit performance parameters provided. Various (or all) ML models of different topologies associated with the sub-circuit type may then be run using the provided sub-circuit performance parameters to determine a set of topologies of the sub-circuit type that may be appropriate for use. A specific topology of the sub-circuit type may then be selected from the set of topologies. In certain cases, this selection may be performed by a user. In some cases, one or more topologies for the sub-circuit type may be selected or suggested to the user. Topologies of the set of topologies for the sub-circuit may be analyzed, for example, based on complexity, a cost function associated with various physical parameters, overall size, etc., to provide the selection or suggestion.[0094] In the example discussed above, physical parameters may be used by ML models of a first ML library 506 to predict sub-circuit performance parameters of a particular sub-circuit designed for a first process technology. These sub-circuit performance parameters may then be used by ML models of a second ML library 512 to generate sub-circuit physical parameters of the particular sub-circuit designed for a second process technology. Thus, each ML library is associated with a particular process technology. Using different ML libraries for each process technology helps enable various scenarios, such as conversions of a circuit from one process technology to another process technology, designing circuits with portions using one process technology and other portions using another process technology, searching across many process technologies to determine which process technology is most appropriate (e.g., in terms of cost, performance, etc.) for a particular circuit, etc. In certain cases, for example when such flexibility is not required, a single ML model which is trained to directly convert sub-circuit physical parameters of a sub-circuit in the first process technology to sub-circuit physical parameters of the sub-circuit in the second process technology may be used in place of the first and second ML models.[0095] It may be understood that while discussed with respect to a sub-circuit, other sub-circuits, such as electrical components, may also be simulated across a range of sub-circuit physical parameters to predict similar sub-circuit performance parameters for training ML models for the sub-
circuits.Example ML Model[0096] FIG. 10 illustrates an example neural network ML model 1000, in accordance with aspects of the present disclosure. In certain embodiments, modeling analog circuits with ML models can be performed by using sub-circuit parameters as input parameters (e.g., features) of a ML model. In alternative embodiments, modeling analog circuits with ML models can be performed using subcircuit physical parameters as the parameters of a ML model. The example neural network ML model 1000 is a simplified example presented to help understand how such neural network ML model 1000 may be trained. It may be understood that each implementation of a ML model may be trained or tuned in a different way, depending on a variety of factors including, but not limited to, a type of ML model being used, parameters being used for the ML model, relationships as among the parameters, desired speed of training, etc. In this simplified example, sub-circuit physical parameters values of W and L are parameter inputs 1002 and 1004 to the ML model 1000. Each layer (e.g., first layer 1006, second layer 1008, and third layer 1010) includes a plurality of nodes (e.g., neurons) and generally represents a set of operations performed on the parameters, such as a set of matrix multiplications. For example, each node represents a mathematical function that takes, as input (aside from the nodes of the first layer 1006), output from a previous layer and a weight. The weight is typically adjusted during ML model training and fixed after the ML model training. The specific mathematical function of the node can vary depending on ML model implementation. While the current example addresses three layers, in certain cases the ML model may include any number of layers. Generally, each layer transforms M number of input parameters to N number of output parameters. The parameter inputs to the first layer 1006 are output as input to the second layer 1008 with a set of connections. As each node of a layer (such as first layer 1006) outputs to each node in a subsequent layer (such as second layer 1008), ML model 1000 is a fully connected neural network. Other embodiments may utilize a partially connected neural network or another neural network design which may not connect each node of a layer to each node of a subsequent layer.[0097] In this example, first layer 1006 represents a function based on a set of weights that are applied to the input parameters (e.g., input parameters 1002 and 1004) to generate output from first layer 1006 that is input to the second layer 1008. Different weights may be applied for the input received from each node of the previous layer by the subsequent layer. For example, for a node of the second layer 1008, the node applies weights to input received from nodes of the first layer 1006
and the node may apply a different weight to input received from each node of the first layer 1006. Nodes compute one or more functions based on the inputs received and corresponding weights and output a number. For example, the node may use a linear combination function which multiplies each input value from a node of the previous layer with a corresponding weight and sums across the results of the multiplications, coupled with a non-linear activation function which acts as a floor for the resulting number for output. It may be understood that any known weighted function may be applied by the node within the scope of this disclosure. This output number may be input to subsequent layers, or if the layer is a final layer, such as third layer 1010 in this example, the number may be output as a result (e.g., output parameter). In certain cases, the functions applied by nodes of a layer may differ as between layers. The weights applied by a node may be adjusted during training based on a loss function, which is a function that describes how accurate the predictions of the neural network are as compared to the expected results, an optimization algorithm, which helps determine weight setting adjustments based on the loss function, and a backpropagation of error algorithm, which applies the weight adjustments back through the layers of the neural network. Any optimization algorithm (e.g., gradient descent, mini-batch gradient descent, stochastic gradient descent, adaptive optimizers, momentum, etc.), loss function (e.g., mean-squared error, cross-entropy, maximum likelihood, etc.), and backpropagation of error algorithm (e.g., static or recurrent backpropagation) may be used within the scope of this disclosure.[0098] Certain ML models, such as a neural network, may include hyperparameters. Hyperparameters of the ML model may refer to parameters that control the operation of the ML model which cannot be derived through training, such as a number of nodes in a layer, number of layers, learning rate, etc.Enhanced ML Modeling Techniques[0099] As indicated above, as there may be multiple topologies for multiple types of sub-circuits, enhancing ML modeling techniques to efficiently generate ML models for these topologies and sub-circuits may be helpful. Generating ML models for analog and hybrid circuits which accurately predict parameters of these circuits can be challenging, as analog and hybrid circuits can respond in highly non-linear ways as physical parameters of electrical components are varied. Additionally, modeling such behavior, for example as a neural network ML model, using current ML modeling techniques may require substantial training time and/or manual tuning of parameters of the model to achieve a desired accuracy. This in turn may make bulk generation of ML models challenging. To help streamline bulk ML model creation, model creation may be enhanced by including interaction parameters as input parameters in the ML model and performing dimensionality reduction on the input parameters using threshold stepwise selection.[0100] In certain cases, properties of the process technologies may influence the behavior of the analog circuits. To help address this, one or more process technology parameters which describe the behavior of the process technology may be included as input parameters to the ML model. Examples of such process technology parameters may include oxide thickness, channel doping concentration, electron mobility, etc.[0101] To further help address the non-linearities, parameter interactions, such as interaction parameter 1012, may be added as input parameters to the ML model 1000. Interaction parameters represent, for example, one or more functions which describe how different input parameters may interact together. For example, a function A*B may have input parameters A and B. An interaction parameter C could be created, where C = A*B, which would represent how the model responds to changes based on the multiplication of parameters A and B. As another example, an interaction parameter D could be created, where D = √(A*B), which represents how the model responds to changes based on the square root of the multiplication of parameters A and B. In certain cases, these interactions may be based on circuit theory equations. As an example, an input parameter to a ML model may be based on the equation for determining transconductance of a CMOS transistor. In this example, a ML model, MLgm, may have input parameters such that MLgm = f(W, L, T, NCH, TOX, ID, VDS), where the input parameters respectively represent the electrical component width, electrical component length, temperature, N-channel doping concentration, oxide thickness, electrical component drain current bias, and the voltage across the drain-source terminals of the electrical component. One known nonlinear parameter interaction is the first order equation for transconductance, gm = √(2·µn·Cox·(W/L)·ID). The parameter interaction, √((W/L)·ID), may be determined from the input parameter data for W, L, and ID, and provided as an additional input parameter, f1, to the ML model, such that MLgm = f(W, L, T, NCH, TOX, ID, VDS, f1). Adding the nonlinear interaction as an input to the ML model helps by preemptively providing known interactions, which may help reduce an amount of training needed by the ML model.
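As a hedged illustration of the interaction-parameter idea described above, the short Python sketch below derives f1 = √((W/L)·ID) from an input feature matrix and appends it as an additional ML model input. The numpy feature layout, the column indices, and the numeric values are assumptions made for the example only.

```python
# Minimal sketch of deriving an interaction parameter from a circuit-theory relationship
# and appending it to the ML model inputs, as described above.
import numpy as np

def add_gm_interaction(features, w_col=0, l_col=1, id_col=5):
    """Append f1 = sqrt((W/L) * ID) as an extra input column.

    `features` is an (n_samples, n_params) array whose columns include W, L, and ID
    at the (assumed) indices given; the returned array has one additional column.
    """
    w = features[:, w_col]
    l = features[:, l_col]
    i_d = features[:, id_col]
    f1 = np.sqrt((w / l) * i_d)          # known nonlinear interaction from the gm equation
    return np.column_stack([features, f1])

# Example: raw inputs (W, L, T, NCH, TOX, ID, VDS) plus the derived interaction column.
raw = np.array([[10e-6, 1e-6, 300.0, 1e17, 2e-9, 50e-6, 0.8]])
augmented = add_gm_interaction(raw)
print(augmented.shape)  # (1, 8): the original seven parameters plus f1
```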
[0102] In certain embodiments, attempting to characterize the non-linearities based on circuit theory, such as by including known circuit theory equations (e.g., the first order equation for transconductance), can introduce higher-order interaction terms and may increase the dimensionality (e.g., number of parameters input into the ML model). To help reduce the number of parameters for input into the ML model, dimensionality reduction may be performed. Dimensionality reduction removes parameters as inputs to the ML model if it is determined that those parameters do not impact the model behavior. Dimensionality reduction may help reduce the number of variables, identify an optimal set of parameters to be input into the ML model, and/or help reduce the processing time of the ML model. In certain cases, dimensionality reduction may be performed using threshold stepwise selection.Threshold Stepwise Selection[0103] Threshold stepwise selection helps build higher order parameter interactions by iterating over the input parameter interactions in a stepwise fashion, determining the significance that the parameter interactions have on the model behavior, and removing any parameter interactions that do not meet a threshold. In this manner, higher order interactions can be determined while minimizing the dimensionality of the ML model. FIG. 11 illustrates a series of ML model parameters for threshold stepwise selection 1100, in accordance with aspects of the present disclosure. Threshold stepwise selection begins with an initial set 1102 of parameters. In this example, variables A, B, C, and D, of the initial set 1102, represent generic input parameters to a ML model (e.g., sub-circuit physical parameters, sub-circuit performance parameters, constants, parameters descriptive of a process technology, etc.), such that a result of the ML model, R, is a function of A, B, C, and D: R = f(A, B, C, D). In certain cases, R may represent the expected results of the ML model (e.g., sub-circuit physical parameters or sub-circuit performance parameters) of a sub-circuit being modeled by the ML model. The initial set 1102 of parameters may also include interaction parameters 1012. In a first step, a first parameter may be interacted with each of the other parameters to generate a second set 1104 of parameters. This interaction as between parameters may be based on the mathematical function of nodes in the ML model. For example, assuming generic parameter A represents input parameter 1002 and B represents input parameter 1004, of FIG. 10, then AB may represent an interaction corresponding to a function as applied in the second layer 1008 of ML model 1000 to input parameter 1002 and input parameter 1004, without the weight. Thus, higher order parameter values represent interaction values obtained from the component parameters. For example, assuming the interaction is multiplication, if A = 2 and B = 3, then AB = 6. If C = 4, then ABC = 24. If A is an equation, such as a known circuit theory equation, the equation may be evaluated to obtain a value and this value used for the interaction.[0104] In this example, the parameter A is interacted with parameters B, C, and D to generate parameters AB, AC, and AD, as shown in the second set 1104 of parameters, such that R for the second set 1104 of parameters would correspond to R = f(A, B, C, D, AB, AC, AD). While parameter A is interacted in this example, it may be understood that any parameter of the initial set 1102 may be interacted with the other parameters of the initial set 1102. In certain cases, the interaction may be based on mathematical functions applied as between parameters in a node of a neural network. A linear regression may then be performed on the parameters of the second set 1104. The linear regression is a linear function which attempts to model a relationship between the parameters of the second set 1104, and the results of the linear regression may be compared to expected results of the ML model (e.g., as determined by a circuit simulation of the sub-circuit topology being modeled by the ML model). A statistical significance test (e.g., a null hypothesis test) may be used to determine a statistical significance value (e.g., a null hypothesis p-value) to predict the contribution of each parameter of the second set 1104 of parameters to the linear regression results. The statistical significance value may then be compared against a defined threshold for the statistical significance value. The threshold for the statistical significance value may be determined as a fixed value provided as an input to the threshold stepwise selection algorithm, or the threshold may be determined through known techniques such as Bayesian hyperparameter optimization. Bayesian hyperparameter optimization is a technique for determining hyperparameters of a ML model. The hyperparameters of the ML model may refer to parameters that control the operation of the ML model which cannot be derived through training, such as a number of nodes in a layer, number of layers, learning rate, etc. In this example, the hyperparameter to be optimized may be the threshold for the statistical significance. In a third step, parameters that do not meet the threshold for statistical significance, in this example, parameters C, AB, and AC of the second set 1104 of parameters, may be discarded.[0105] In a fourth step, the first through third steps may be repeated with each parameter of the initial set 1102 of parameters to obtain a fourth set 1110 of parameters. For example, a second parameter of the initial set 1102 may be interacted with parameters of the second set 1104 (without the parameters that did not meet the threshold for statistical significance). This interaction may be substantially similar to those interactions performed to generate the second set 1104 of parameters.
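The following Python sketch illustrates, under stated assumptions, a single interaction/regression/filtering pass of the kind described in this passage; it is not the specific algorithm of FIG. 11. The use of statsmodels ordinary least squares p-values, the fixed 0.05 threshold, and the synthetic data are assumptions made for the example.

```python
# Minimal sketch of one interaction/regression/filtering pass of threshold stepwise
# selection. The sets of FIG. 11 (e.g., 1102, 1104) are represented here as dictionaries
# mapping parameter names to columns of sample values.
import numpy as np
import statsmodels.api as sm

def stepwise_pass(params, base_name, expected, threshold=0.05):
    """Interact `base_name` with the other parameters, fit a linear regression against
    the expected results, and keep only parameters whose p-value meets the threshold."""
    candidates = dict(params)
    for name, col in params.items():
        if name != base_name:
            candidates[base_name + name] = params[base_name] * col  # e.g., A*B -> "AB"
    X = sm.add_constant(np.column_stack(list(candidates.values())))
    fit = sm.OLS(expected, X).fit()
    pvals = fit.pvalues[1:]  # skip the constant term
    return {name: col for (name, col), p in zip(candidates.items(), pvals) if p <= threshold}

# Example with synthetic data standing in for the initial set 1102 (A, B, C, D).
rng = np.random.default_rng(0)
initial = {k: rng.normal(size=200) for k in "ABCD"}
expected = 2.0 * initial["A"] * initial["B"] + initial["D"] + rng.normal(scale=0.1, size=200)
kept = stepwise_pass(initial, "A", expected)
print(sorted(kept))  # interactions such as "AB" should survive; insignificant terms drop out
```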
A linear regression may be performed on parameters of a third set 1106 in substantially the same way as performed on parameters of the second set 1104, and parameters that do not meet the threshold for statistical significance, in this example, parameters BA and BAD of the third set 1106 of parameters, may be discarded. This interaction/linear regression/discarding parameters may be repeated for each parameter of the initial set 1102 to obtain resulting parameters of a round of stepwise threshold selection, such as the fourth set 1110 of parameters. In certain cases, this step iterates over all of the initial set 1102 of parameters even if subsequent steps have determined the parameter to not be significant in the modeling problem. Even though the parameter alone may not contribute to the model result, the parameter’s interaction with other parameters may have significance. Including all of the initial set 1102 of parameters regardless of individual significance when looping through the interactions helps ensure that significant interactions of all parameters are not lost.[0106] The resulting parameters in the fourth set 1110 may be compared to the expected results (e.g., obtained via circuit simulations) to determine an accuracy of the resulting parameters in the fourth set 1110. If the accuracy meets a threshold accuracy value, then the fourth set 1110 of parameters may be used as input parameters for the ML model for the sub-circuit. The threshold accuracy value may be determined in any way, for example, by experimentation, experience, etc. [0107] In a fifth step, if the accuracy does not meet the threshold accuracy value, the first through fourth steps may be repeated by interacting the initial set 1102 of parameters and resulting parameters (such as parameters of the fourth set 1110) until the threshold accuracy value is met by resulting parameters from a round of threshold stepwise selection, such as a final set 1108 of parameters. Doing so may result in higher order interactions parameters such as interaction parameters CBD and ABCD of the final set 1108, in this example. In certain cases, a number of repetitions in this fifth step may be limited, for example based on a predetermined number of rounds, if accuracy of the resulting parameters stops increasing, if the parameters of the resulting parameters are unchanged, etc. Of note, in this example, higher order parameters may be represented by interaction parameters resulting from interacted parameters (e.g., parameters represented in FIG. 11 by multiple letters). As shown, threshold stepwise selection allows for higher order parameters to be developed while still limiting the total number of parameters through dimensionality reduction. The final set 1108 of parameters, as determined by the threshold stepwise selection step, represent the set of parameters that are most statistically significant for the sub-circuit being modeled. By using the parameters determined to be
most statistically significant by the threshold stepwise selection step as a starting point (e.g., as initial parameters for input to the ML model), an amount of time needed to train the ML model to obtain a certain level of accuracy may be reduced.[0108] In certain cases, if the desired threshold accuracy value is not met by threshold stepwise selection, threshold stepwise selection may be applied in conjunction with stacked models to help improve accuracy. A stacked model uses information derived from an initial model, such as the final set 1108 of parameters output from threshold stepwise selection, as inputs to help guide subsequent modeling techniques. For example, if, after applying a predetermined number of rounds of threshold stepwise selection, the desired threshold accuracy value is not met, the parameters selected during the last round of threshold stepwise selection may be used as input to a ML model, such as a neural network trained on the sub-circuit physical parameters and simulated sub-circuit performance parameters. This ML model may then be further tuned using any known ML tuning technique. For example, Bayesian hyperparameter optimization may also be applied to the ML model to tune the hyperparameters of the ML model. Bayesian hyperparameter optimization is a technique for determining hyperparameters of a ML model based on a probability model of how a hyperparameter influences the accuracy of the ML model as different hyperparameters are adjusted based on a validation score. The validation score may be determined by adjusting the hyperparameter of the ML model, training the ML model to generate predictions of the ML model with the adjusted hyperparameter, and evaluating these predictions against expected results to calculate the validation score.[0109] FIG. 12 is a flow diagram illustrating an overview of a technique for designing analog circuits 1200, in accordance with aspects of the present disclosure. At block 1202, a data object representing a circuit for a first process technology may be received, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology. For example, the analog circuit may be described as a netlist, which is a list of electrical components and connections of those electrical components. At block 1204, the first sub-circuit may be identified in the data object by comparing the first topology to a stored topology, the stored topology associated with the first process technology. For example, the functional circuit block may be a portion of the analog circuit which represents a set of circuits that perform a function, such as amplifying a signal, comparing two signals, creating a clock signal, etc., and the functional circuit block may be located by
the boundaries of a function in the netlist, such as the beginning and end of a function. The netlist may be parsed to locate these functional circuit blocks. The functional circuit blocks include one or more sub-circuits. Sub-circuits may be made of one or more electrical components which together perform a specific purpose in the functional circuit block. There may be a relatively limited number of arrangements of electrical components capable of practically performing the purpose of a subcircuit. These arrangements of electrical components may be predetermined, for example based on chip design experience, as a set of predetermined sub-circuits. In certain cases, this set of predetermined sub-circuits may not be exhaustive and may contain sub-circuits determined to be more likely to be found in analog circuit. In certain cases, the first sub-circuit may be identified based on a set of rules. In certain cases, these rules may be based, at least in part, on connections of the first sub-circuit.[0110] At block 1206, sub-circuit physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit are identified. For example, the netlist may include physical parameters associated with electrical components of the circuit. Additionally, operating point simulations may be used to obtain operating parameters for the subcircuit. At block 1208, a set of sub-circuit performance parameter values for the first sub-circuit are determined based on a first machine learning (ML) model of the first sub-circuit and the identified sub-circuit physical parameters For example, different types of sub-circuits may be associated with different sets of performance parameters. Examples of performance parameters include transconductance, channel conductance, minimum drain to source voltage, threshold voltage mismatch, etc. In certain cases, performance parameter values for a set of physical parameters associated with the identified first sub-circuit may be determined based on a first ML model of the identified sub-circuit. For example, physical parameters associated with the identified first subcircuit may be input to a first trained ML model of the identified sub-circuit for the first process technology to determine performance parameter values for the identified first sub-circuit.[0111] At block 1210, the identified first sub-circuit to a second sub-circuit for a second process technology is converted based on the determined set of sub-circuit performance parameter values. For example, a second ML model may be selected based on the type of the identified first sub-circuit. The second ML model may be configured to determine a second set of sub-circuit physical parameters associated with a third electrical component and a fourth electrical component of the second sub-circuit based on a second ML model, for the second process technology, and the set of
sub-circuit performance parameter values, and associate sub-circuit physical parameters of the second set of sub-circuit physical parameters with the third electrical component and the fourth electrical component of the second sub-circuit. For example, performance parameters may be input to the second trained ML model of the identified sub-circuit for the second process technology to determine physical parameter values for electrical components of the second sub-circuit for the second process technology. In certain cases, the first and second trained ML models may be neural networks. A netlist for the second sub-circuit in the second process technology may then be determined based on the physical parameter values. At block 1212, the converted second sub-circuit may be output. For example, the netlist for the second sub-circuit may be output. In certain cases, the second process technology comprises a second semiconductor manufacturing process associated with smaller circuit electrical components as compared to a first process technology of the analog circuit. For example, the second process technology may be associated with smaller sized transistors, as compared to the first process technology. In certain cases, the second sub-circuit may be verified based on a circuit simulation of the second sub-circuit and performance parameters associated with the first sub-circuit. For example, the output netlist may be simulated on a circuit simulator to verify that performance parameters of the second sub-circuit are within a threshold amount of performance parameters associated with the first sub-circuit.[0112] FIG. 13 is a flow diagram illustrating an overview of a technique for designing analog circuits 1300, in accordance with aspects of the present disclosure. At block 1302, a data object representing a circuit is received, the circuit including a sub-circuit, the sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology. For example, the analog circuit may be described as a netlist, including one or more circuit blocks. These circuit blocks each include one or more electrical components, such as transistors, resistors, capacitors, inductors, diodes, etc. of the circuit block. At block 1304, a set of stored topologies are received. For example, a library of trained ML models, including trained ML models for known sub-circuits may be stored and accessed from a memory storage. At block 1306, the first electrical component, second electrical component, and connections of the first electrical component and second electrical component may be identified. For example, a first electrical component of the functional circuit block may be identified based on a set of predefined electrical component types stored in a memory storage. For example, electrical components play a particular role within a functional circuit block and the role of a first electrical
component may be determined based on what other electrical components the first electrical component is connected to. This role, along with a type of electrical component, may be used to identify the first electrical component from a set of predetermined electrical components. In certain cases, the first circuit may be identified based on a set of rules. At block 1308, a coupling between the first electrical component and a second electrical component is determined, based on the connections of the first electrical component. For example, the netlist may include a description of connections as between electrical components and this description may be parsed to determine connections as between electrical components. Parsing may be performed using a set of rules. In certain cases, rules of the set of parsing rules may be based, at least in part, on an identified type of the first electrical component, connections of the first electrical component, and an identified type of the second electrical component. As an example, this set of rules may describe the possible connections of the electrical component and mapping those connections to various sub-circuit types or topologies. In certain cases, rules of the set of parsing rules may be based, at least in part, on physical parameters of the first electrical component and second electrical component.[0113] At block 1310, the first topology is determined based on a comparison between the identified first electrical component, the identified second electrical component, the determined coupling between the first electrical component and the second electrical component, and topologies of the set of stored topologies. At block 1312, the identified first topology may be output. For example, the identified topology may be output for use by one or more ML models for predicting sub-circuit performance parameters or sub-circuit physical parameters. In certain cases, a determination, based on the comparison, is made that multiple topologies of the set of stored topologies could match. In such cases, a third electrical component and connections of the third electrical component may be identified and, based on the connections of the third electrical component, a coupling between the third electrical component and either the first electrical component or the second electrical component is determined. The topologies of the set of stored topologies are compared to the identified first electrical component, the identified second electrical component, the identified third electrical component, the determined coupling between the first electrical component and the second electrical component, and the identified coupling between the third electrical component and either the first electrical component or the second electrical component to identify the first topology. For example, if multiple matches between a set of electrical components and topologies of the set of known topologies are found, the set of electrical components
may be expanded to include additional electrical components coupled to the current electrical components of the set of electrical components. Matching against the set of known topologies may then be performed again with the expanded set of electrical components.[0114] FIG. 14 is a flow diagram illustrating a technique for identifying sub-circuits 1400, in accordance with aspects of the present disclosure. At block 1402, a data object representing a circuit for a process technology is received, the circuit including a first sub-circuit and the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology. For example, the analog circuit may be described as a netlist, including one or more circuit blocks. The functional circuit blocks include one or more sub-circuits. The sub-circuits may be made of a set of electrical components which together perform a specific purpose in the functional circuit block. At block 1404, the first sub-circuit in the circuit is identified by comparing the first topology to a stored topology, the stored topology associated with the first process technology. For example, there may be a relatively limited number of arrangements of electrical components capable of practically performing the purpose of a sub-circuit. These arrangements of electrical components may be predetermined, for example based on chip design experience, as a set of predetermined sub-circuits. In certain cases, this set of predetermined sub-circuits may not be exhaustive and may contain subcircuits determined to be more likely to be found in analog circuits. The first sub-circuit may be compared to the set of predetermined sub-circuits.[0115] At block 1406, a first set of physical parameter values associated with first electrical component and the second electrical component of the first sub-circuit is identified. For example, the netlist may include physical parameters associated with electrical components of the circuit. Additionally, operating point simulations may be used to obtain operating parameters for the subcircuit. At block 1406, a set of performance parameter values for the first sub-circuit is determined based on a first machine learning (ML) model of the first sub-circuit and the identified set of physical parameter values. For example, different types of sub-circuits may be associated with different sets of performance parameters. Examples of performance parameters include transconductance, channel conductance, minimum drain to source voltage, threshold voltage mismatch, etc. In certain cases, performance parameter values for a set of physical parameters associated with the identified first sub-circuit may be determined based on a first ML model of the identified sub-circuit. For example, physical parameters associated with the identified first sub-circuit may be input to a first trained ML
model of the identified sub-circuit for the first process technology to determine performance parameter values for the identified first sub-circuit. At block 1408, the identified first sub-circuit is converted to a second sub-circuit for the process technology based on the determined set of performance parameter values, the second sub-circuit having a third electrical component and a fourth electrical component arranged in a second topology. In certain cases, a type of the first sub-circuit is identified based on connections of the first electrical component and the second electrical component. The determined set of performance parameter values is input to one or more ML models of the identified type of the first sub-circuit for the process technology, and one or more sets of physical parameter values corresponding to one or more topologies associated with the type of the first sub-circuit are received. The second topology is selected from the one or more topologies. In certain cases, selecting the second topology is based on an optimization function. This optimization function may be based on a number of electrical components of topologies of the one or more topologies. In certain cases, the optimization function is based on physical parameter values corresponding to the one or more topologies. Physical parameter values of a set of physical parameter values corresponding to the selected second topology are associated with the third electrical component and the fourth electrical component.[0116] FIG. 15 is a flow diagram illustrating a technique for designing circuits 1500, in accordance with aspects of the present disclosure. At block 1502, an indication of a sub-circuit type and a set of sub-circuit performance parameter values may be received. For example, a user may provide an indication of a type of sub-circuit and one or more sub-circuit performance parameter values for the sub-circuit type. At block 1504, a sub-circuit topology may be determined based on the sub-circuit type and the set of sub-circuit performance parameter values. For example, a specific sub-circuit topology for the sub-circuit type may be provided and a ML model for the sub-circuit type may be identified. As another example, the sub-circuit performance parameter values may be provided to multiple sub-circuit ML models corresponding to the sub-circuit type. This set of sub-circuit ML models, and corresponding sub-circuit topologies, may be obtained from a ML model library. The sub-circuit performance parameter values may be input to sub-circuit ML models of the set of sub-circuit ML models to determine corresponding sub-circuit physical parameters for the sub-circuit topologies corresponding to the sub-circuit ML models. In certain cases, if sub-circuit physical parameters for a sub-circuit topology cannot be determined for the sub-circuit performance parameters, then the sub-circuit topology may be removed from the set of sub-circuit topologies.
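As an illustrative sketch of the topology-selection flow described in this passage (and of the optimization function discussed next), the Python example below runs a set of candidate topology models, drops candidates that cannot satisfy the requested performance parameters, and picks the remaining topology with the fewest electrical components. The CandidateTopology structure and its predict interface are assumptions standing in for trained per-topology ML models.

```python
# Minimal sketch: select a sub-circuit topology from candidate topology models by
# filtering out infeasible candidates and applying a simple optimization function.
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class CandidateTopology:
    name: str
    component_count: int
    # Returns predicted physical parameter values, or None if the performance
    # targets cannot be met by this topology.
    predict: Callable[[Sequence[float]], Optional[Sequence[float]]]

def select_topology(candidates, performance_params):
    feasible = []
    for cand in candidates:
        physical = cand.predict(performance_params)
        if physical is not None:              # remove topologies with no solution
            feasible.append((cand, physical))
    if not feasible:
        raise ValueError("no candidate topology satisfies the performance parameters")
    # Optimization function: prefer the topology with the fewest electrical components.
    best, best_physical = min(feasible, key=lambda item: item[0].component_count)
    return best.name, best_physical

# Usage with toy stand-ins for trained per-topology ML models:
simple = CandidateTopology("basic_mirror", 2, lambda p: [1.0e-6, 0.5e-6])
cascode = CandidateTopology("cascode_mirror", 4, lambda p: [2.0e-6, 0.5e-6])
print(select_topology([simple, cascode], performance_params=[1e-3, 50e3]))
```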
An optimization function may then be applied to the sub-circuit topologies of the set of sub-circuit topologies to select a sub-circuit topology. The optimization function may be any known optimization technique, such as a cost function, loss function, etc. As an example, the optimization function may select a sub-circuit topology based on the least number of electrical components with sub-circuit physical parameters of those electrical components within a certain range, the range selected for ease of manufacture based on the first process technology.[0117] At block 1506, a set of sub-circuit physical parameter values are determined based on a first machine learning (ML) model of the sub-circuit topology and the set of sub-circuit performance parameter values. In certain cases, the set of sub-circuit physical parameter values may be determined as a part of determining a sub-circuit topology. At block 1508, a data object representing a sub-circuit based on the determined set of sub-circuit physical parameter values and the determined sub-circuit topology is generated. For example, a netlist representation of the sub-circuit may be generated using the determined sub-circuit topology and the determined sub-circuit physical parameter values. At block 1510, the data object is output.[0118] FIG. 16 is a flow diagram illustrating a technique for designing circuits 1600, in accordance with aspects of the present disclosure. At block 1602, a first set of sub-circuit physical parameters for electrical components of a sub-circuit, and an indication of a first process technology, are received. For example, physical parameters for electrical components of a first sub-circuit may be received, along with a description of how those electrical components are connected, as well as information identifying the process technology with which the first sub-circuit is associated. In certain cases, a set of performance parameters may also be received, the performance parameters indicating which performance parameters may be applicable for the first sub-circuit. At block 1604, a first variation of sub-circuit physical parameters for the electrical components of the structural sub-circuit is determined, the first variation including at least one sub-circuit physical parameter that varies from sub-circuit physical parameters of the first set of sub-circuit physical parameters. In certain cases, determining sets of variations of physical parameters for the electrical components of the sub-circuit includes determining variations of physical parameters for the electrical components based on a practical range of physical parameter values for the first process technology. At block 1606, the first variation of sub-circuit physical parameters in the first process technology is simulated to generate a first set of sub-circuit performance parameter values associated with the first variation. For example, for a particular sub-circuit, sets of physical parameters may be generated by simulating the
particular sub-circuit with sets of physical parameter values. Physical parameter values of these sets of physical parameter values may vary across ranges of practical values associated with respective physical parameter values. In certain cases, the sets of variations of physical parameters are identified to show non-linear behavior of the sub-circuit. In certain cases, the sets of variations of physical parameters for the sub-circuit may be simulated using a simulation program with integrated circuit emphasis (SPICE) circuit model of the sub-circuit.[0119] At block 1608, a machine learning (ML) model of the structural sub-circuit is trained based on a set of variations, the set of variations including the first variation and a set of sub-circuit physical parameters associated with the first variation, for the first process technology. In certain cases, the ML model of the sub-circuit comprises one of a linear regression, large margin classifier, principal component analysis, tree-based, or neural network machine learning model. In certain cases, training the ML model includes identifying a set of parameters for input to the ML model. In certain cases, the set of parameters for input to the ML model is based on one of: the sets of physical parameters or generated performance parameters and one of: one or more parameters associated with the first process technology or one or more parameters associated with the second process technology. At block 1610, the trained ML model is stored. In certain cases, the library of trained ML models includes a trained ML model for each sub-circuit of a set of predetermined sub-circuits. In certain cases, in the library of trained ML models, each trained ML model is associated with a specific sub-circuit and each trained ML model may differ from other trained ML models in the library of trained ML models.[0120] FIGS. 17A-17B are a flow diagram illustrating a technique for circuit modeling 1700, in accordance with aspects of the present disclosure. At block 1702, an initial set of parameters are received, the initial set of parameters associated with a sub-circuit. For example, a set of sub-circuit performance parameters or sub-circuit physical parameters for an ML model of a sub-circuit may be received. At block 1704, a first parameter of the initial set of parameters is interacted with other parameters of the initial set of parameters to generate a set of interacted parameters. For example, the first parameter may be interacted with another parameter of the set of parameters to generate an interacted parameter. At block 1706, the interacted parameter is added to the initial set of parameters to generate a candidate set of parameters. For example, the interacted parameter may be added to the set of parameters. At block 1708, a linear regression may be performed on parameters of the candidate set of parameters against a set of expected parameter values to determine a predictive value
for parameters of the candidate set of parameters. For example, the linear regression attempts to model a relationship between the parameters as compared to expected results of the ML model, and a statistical significance test may be applied to the results of the linear regression to determine a statistical significance value of parameters of the set of parameters. In certain cases, this linear regression equation may be based on a Taylor series regression.[0121] At block 1710, parameters of the candidate set of parameters are removed based on a comparison between the predictive value and a predetermined predictive threshold. For example, the statistical significance value of parameters of the set of parameters may be compared to a predefined threshold and parameters which do not meet the predefined threshold may be removed from the set of parameters. In certain cases, statistical p-values may be compared against a minimum p-value and variables with p-values less than the minimum p-value may be removed from the candidate set. Multiple variables may be removed from the candidate set of variables in each round. At block 1712, an accuracy of the candidate set of parameters may be determined based on the set of expected parameter values. For example, the candidate set of parameters may be compared to the expected results to determine the accuracy. Predicted values based on the candidate set of variables may be compared against the expected set of parameter values to determine the accuracy for the candidate set of variables. In certain cases, each parameter of the initial set of parameters may be interacted with the other parameters of the initial set of parameters prior to the accuracy determination. For example, each of the original variables may be interacted with candidate variables of the set of candidate variables, even if the original variable is removed from the set of candidate variables. At block 1714, the accuracy of the candidate set of parameters may be compared to a predetermined accuracy level. At block 1716, if the accuracy of the candidate set of parameters reaches the predetermined accuracy level, the candidate set of parameters is output at block 1718. If the accuracy of the candidate set of parameters does not reach the predetermined accuracy level, certain steps may be repeated.[0122] At block 1720, a second parameter of the initial set of parameters is interacted with other parameters of the candidate set of parameters. This interaction may be similar to the interaction discussed in conjunction with block 1704 where another parameter is interacted with another parameter of the set of parameters to generate the interacted parameter. At block 1722, the interacted parameter is added to the candidate set of parameters. For example, the interacted parameter may be added to the set of parameters. At block 1724, the linear regression may be performed on parameters
of the candidate set of parameters against a set of expected parameter values to determine a predictive value for parameters of the candidate set of parameters. At block 1726, parameters of the candidate set of parameters are removed based on a comparison between the predictive value and a predetermined predictive threshold. At block 1728, the accuracy of the candidate set of parameters may be determined based on the set of expected parameter values. At block 1730, the accuracy of the candidate set of parameters may be compared to a predetermined accuracy level. At block 1732, if the accuracy of the second candidate set of parameters has reached the predetermined accuracy level, the candidate set of parameters are output at block 1718. At block 1732, if each parameter of the initial set of parameters has been interacted with other parameters of the candidate set a predetermined number of times, the candidate set of parameters are output at block 1718. Otherwise, blocks 1720-1730 may be repeated with another parameter of the initial set of parameters.[0123] In certain cases, the initial set of parameters may include one or more parameter values based on properties of the process technology. In certain cases, the initial set of parameters may include one or more parameter values based on theoretical interactions between one or more parameter values of the first set of parameters.[0124] In certain cases where the accuracy has not reached the predetermined accuracy level, a second ML model may be trained based on the set of selected variables and parameter values of the second set of parameter values. For example, where the sufficient level of accuracy has not been met and the repeating ended after each variable in the original set of variables has been interacted a predetermined number of times, a final set of candidate variables may be used to train another ML model. If the other ML model is sufficiently accurate, the other ML model may be stored instead of the linear regression equation, for example, in an ML library. Additionally, an accuracy for the second ML model may be determined. Further, a determination may be made that the accuracy of the second ML model is greater than the predetermined accuracy level, and the set of selected variables and the second ML model may be stored as the first ML model for the sub-circuit for the process technology. In certain cases, the second ML model may be a neural network. In certain cases, Bayesian hyperparameter optimization may be applied to the second ML model. In certain cases, the hyperparameters being optimized by the Bayesian hyperparameter optimization include one of: a number of layers of neurons of the neural network, a number of neurons in each layer of the neural network, and a weight decay value.[0125] As illustrated in FIG. 18, device 1800 includes a processing element such as processor
1805 that contains one or more hardware processors, where each hardware processor may have a single or multiple processor cores. Examples of processors include, but are not limited to, a central processing unit (CPU) or a microprocessor. Although not illustrated in FIG. 18, the processing elements that make up processor 1805 may also include one or more other types of hardware processing components, such as graphics processing units (GPUs), application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and/or digital signal processors (DSPs). In certain cases, processor 1805 may be configured to perform functions described in conjunction with FIGS. 5, 6, 8, 11, and 12-17. It may also be understood that while described in conjunction with a single device, the functions described may be performed by any number of processing elements and that these processing elements may be associated with multiple devices that are communicatively coupled. For example, generation of ML models, ML libraries, netlists, etc. may be performed on a separate device as compared to the conversion or optimization of a circuit. In certain cases, these various devices may be networked by any known networking technology, examples of which include Ethernet, wireless fidelity (Wi-Fi), the internet, etc. In certain cases, data objects may be provided and/or received via a non-transitory computer readable storage medium.[0126] FIG. 18 illustrates that memory 1810 may be operatively and communicatively coupled to processor 1805. Memory 1810 may be a non-transitory computer readable storage medium configured to store various types of data. For example, memory 1810 may include one or more volatile devices such as random access memory (RAM). In certain cases, the SRAM and circuits as described in FIGS. 4-8 may be incorporated as part of the memory 1810. Non-volatile storage devices 1820 can include one or more disk drives, optical drives, solid-state drives (SSDs), tape drives, flash memory, electrically erasable programmable read only memory (EEPROM), and/or any other type of memory designed to maintain data for a duration of time after a power loss or shut down operation. The non-volatile storage devices 1820 may also be used to store programs that are loaded into the RAM when such programs are executed.[0127] Persons of ordinary skill in the art are aware that software programs may be developed, encoded, and compiled in a variety of computing languages for a variety of software platforms and/or operating systems and subsequently loaded and executed by processor 1805. In one embodiment, the compiling process of the software program may transform program code written in a programming language to another computer language such that the processor 1805 is able to execute the programming code. For example, the compiling process of the software program may generate
an executable program that provides encoded instructions (e.g., machine code instructions) for processor 1805 to accomplish specific, non-generic, particular computing functions.[0128] After the compiling process, the encoded instructions may then be loaded as computer executable instructions or process steps to processor 1805 from storage 1820, from memory 1810, and/or embedded within processor 1805 (e.g., via a cache or on-board ROM). Processor 1805 may be configured to execute the stored instructions or process steps in order to perform instructions or process steps to transform the computing device into a non-generic, particular, specially programmed machine or apparatus. Stored data, e.g., data stored by a storage device 1820, may be accessed by processor 1805 during the execution of computer executable instructions or process steps to instruct one or more components within the computing device 1800. Storage 1820 may be partitioned or split into multiple sections that may be accessed by different software programs. For example, storage 1820 may include a section designated for specific purposes, such as storing program instructions or data for updating software of the computing device 1800. In one embodiment, the software to be updated includes the ROM, or firmware, of the computing device. In certain cases, the computing device 1800 may include multiple operating systems. For example, the computing device 1800 may include a general-purpose operating system which is utilized for normal operations. The computing device 1800 may also include another operating system, such as a bootloader, for performing specific tasks, such as upgrading and recovering the general-purpose operating system, and allowing access to the computing device 1800 at a level generally not available through the general-purpose operating system. Both the general-purpose operating system and the other operating system may have access to the section of storage 1820 designated for specific purposes.[0129] The one or more communications interfaces may include a radio communications interface for interfacing with one or more radio communications devices. In certain cases, elements coupled to the processor may be included on hardware shared with the processor. For example, the communications interfaces 1825, storage 1820, and memory 1810 may be included, along with other elements such as the digital radio, in a single chip or package, such as in a system on a chip (SOC). The computing device may also include input and/or output devices, not shown, examples of which include sensors, cameras, human input devices, such as a mouse, keyboard, touchscreen, monitors, display screen, tactile or motion generators, speakers, lights, etc. Processed input, for example from the radar device 1830, may be output from the computing device 1800 via the communications interfaces 1825 to one or more other devices.
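For illustration purposes only, the parameter-interaction and pruning loop of FIGS. 17A-17B (blocks 1702-1732) may be sketched in Python as follows; the use of pandas and statsmodels, the R-squared accuracy measure, and the convention of dropping terms that fail the significance threshold are assumptions made for this sketch rather than requirements of the technique.

import pandas as pd
import statsmodels.api as sm

def select_interacted_parameters(X: pd.DataFrame, y: pd.Series,
                                 min_p: float = 0.05,
                                 target_accuracy: float = 0.95,
                                 max_rounds: int = 2) -> list:
    """Iteratively interact parameters, fit a linear regression, and prune terms."""
    candidates = X.copy()
    for _ in range(max_rounds):
        for base in X.columns:
            # Interact one original parameter with the current candidate parameters.
            for other in list(candidates.columns):
                name = f"{base}*{other}"
                if name not in candidates.columns:
                    candidates[name] = X[base] * candidates[other]
            fit = sm.OLS(y, sm.add_constant(candidates)).fit()
            # Prune candidate terms that fail the significance threshold.
            weak = [c for c in candidates.columns
                    if pd.isna(fit.pvalues.get(c)) or fit.pvalues.get(c) > min_p]
            candidates = candidates.drop(columns=weak)
            # Stop once the fit reaches the predetermined accuracy level.
            if fit.rsquared >= target_accuracy:
                return list(candidates.columns)
    # Accuracy not reached: these terms may instead be used to train a second
    # ML model (e.g., a neural network), as described above.
    return list(candidates.columns)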
[0130] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.[0131] For example, first process technology characterization module 810 may be implemented using any number of determination techniques, such as statistical regression analysis and statistical classifiers such as neural networks, decision trees, Bayesian classifiers, fuzzy logic-based classifiers, deep learning, and statistical pattern recognition.[0132] Likewise, and as another example, second process technology characterization module 820 may be implemented using any number of determination techniques, such as statistical regression analysis and statistical classifiers such as neural networks, decision trees, Bayesian classifiers, fuzzy logic-based classifiers, deep learning, and statistical pattern recognition. |
According to one aspect of the invention, an electronic assembly is provided. The electronic assembly includes a motherboard, a first microelectronic die on a package substrate, a second microelectronic die, and a strip of flex tape interconnecting the microelectronic dies. The package substrate has a metal core with via openings, power conductors connecting the top and bottom surfaces of the substrate and passing through the via openings, and ground conductors interconnecting the metal core with the top and bottom surfaces of the package substrate. The flex tape has signal conductors which interconnect the microelectronic dies. Power is provided to the first microelectronic die via the power conductors. IO signals are sent between the microelectronic dies over the signal conductors in the flex tape. |
1. An electronic assembly, comprising:a substrate including a metal plane having a plurality of via openings therethrough, a plurality a first conductors passing through the via openings from an upper surface to a lower surface of the metal plane, and a plurality of insulating bodies in the via openings, each surrounding a respective first conductor and insulating the respective first conductor from the metal plane; a first microelectronic die, on the substrate, having a first integrated circuit, first contacts connected to the first conductors, second contacts connected to the metal plane, and first signal contacts connected to the first integrated circuit; a second microelectronic die having a second integrated circuit and second signal contacts connected to the second integrated circuit; and a flexible tape having a flexible substrate and a plurality of signal conductors, carried by the flexible substrate, interconnecting the first signal contacts and second signal contacts. 2. The electronic assembly of claim 1, wherein the first conductors are power conductors.3. The electronic assembly of claim 2, wherein the metal plane is a ground plane.4. The electronic assembly of claim 3, further comprising a power supply connected to the power conductors to conduct an electric current to the first integrated circuit of the first microelectronic die allowing the first integrated circuit to send electric signals to the second integrated circuit of the second microelectronic die through the signal conductors of the flexible tape.5. The electronic assembly of claim 4, further comprising a plurality of ground conductors connected between the upper surface of the metal plane and a top surface of the substrate and connected between the lower surface of the metal plane and a bottom surface of the substrate.6. The electronic assembly of claim 5, wherein the substrate has a thickness of between 500 and 1300 microns.7. The electronic assembly of claim 6, wherein the metal plane has a thickness of between 200 and 400 microns.8. The electronic assembly of claim 7, wherein the first conductors have lengths of between 500 and 1300 microns.9. The electronic assembly of claim 8, wherein the metal plane and the ground conductors are electrically disconnected from the power conductors within the substrate.10. The electronic assembly of claim 9, wherein the first microelectronic die is a microprocessor.11. The electronic assembly of claim 10, wherein the second microelectronic die is a chipset.12. The electronic assembly of claim 11, wherein the microprocessor and the chipset are connected to a motherboard.13. The electronic assembly of claim 2, wherein the flexible tape is on the substrate.14. The electronic assembly of claim 13, wherein the flexible tape has a thickness between 5 microns and 30 microns.15. 
An electronic assembly, comprising:a substrate having top and bottom surfaces including a ground plane having a plurality of via openings therethrough from an upper surface to a lower surface thereof, a plurality of ground conductors connecting the upper surface of the ground plane and the top surface of the substrate and connecting the lower surface of the ground plane to the bottom surface of the substrate, a plurality a power conductors passing through the via openings from the upper surface to the lower surface of the ground plane, and a plurality of insulating bodies in the via openings, each surrounding a respective power conductor and insulating the respective power conductor from the ground plane; a first microelectronic die on the substrate having a first integrated circuit, power contacts connected to the power conductors, ground contacts connected to the ground conductors, and first signal contacts connected to the first integrated circuit; a second microelectronic die having a second integrated circuit and second signal contacts connected to the second integrated circuit; a flexible tape having a flexible substrate and a plurality of signal conductors, carried by the flexible substrate, interconnecting the first signal contacts and second signal contacts; and a power supply connected to the power conductors to conduct an electric current to the first integrated circuit to send electric signals to the second integrated circuit of the second microelectronic die through the signal conductors of the flexible tape. 16. The electronic assembly of claim 15, wherein the flexible tape is between the substrate and the first microelectronic die.17. The electronic assembly of claim 16, further comprising a third microelectronic die having a third integrated circuit and third signal contacts connected to the third integrated circuit and the signal conductors, the signal conductors interconnecting the first signal contacts and the third signal contacts.18. 
An electronic assembly, comprising:a substrate, having top and bottom surfaces, including a ground plane having a plurality of via openings therethrough from an upper surface to a lower surface thereof, a plurality of ground conductors connecting the upper surface of the ground plane and the top surface of the substrate and connecting the lower surface of the ground plane to the bottom surface of the substrate, a plurality a power conductors passing through the via openings from the upper surface to the lower surface of the ground plane, and a plurality of insulating bodies in the via openings, each surrounding a respective power conductor and insulating the respective power conductor from the ground plane; a microprocessor on the substrate having a first integrated circuit, power contacts connected to the power conductors, ground contacts connected to the ground conductors, and first signal contacts connected to the first integrated circuit; a chipset having a second integrated circuit and second signal contacts connected to the second integrated circuit; a flexible tape on the substrate extending between the microprocessor and the chipset having a flexible substrate and a plurality of signal conductors, carried by the flexible substrate, interconnecting the first signal contacts and second signal contacts; and a power supply connected to the power conductors to conduct an electric current to the first integrated circuit of the microprocessor to send electric signals to the second integrated circuit of the chipset through the signal conductors of the flexible tape. 19. The electronic assembly of claim 18, further comprising a second chipset having a third integrated circuit and third signal contacts connected to the third integrated circuit.20. The electronic assembly of claim 19, wherein the flexible tape extends between the microprocessor and the second chipset, and the plurality of signal conductors interconnect the second signal contacts and the third signal contacts. |
BACKGROUND OF THE INVENTION1). Field of the InventionThis invention relates generally to an electronic assembly and more specifically to the manner in which power, ground, and signals are provided to integrated circuits of the electronic assembly.2). Discussion of Related ArtIntegrated circuits are formed on semiconductor wafers. The wafers are then sawed into microelectronic dies, also known as semiconductor chips. Each semiconductor chip is then mounted to a package, or carrier, substrate. Often the packages are then mounted to a motherboard.The integrated circuit receives power, ground, and other electronic signals through contacts located between the semiconductor chip and the carrier substrate and vias in the package substrate. The vias extend from an upper surface to a lower surface of the carrier substrate and pass through a core made of an organic material. In order to send an electronic signal from one semiconductor chip to another, the signal must first pass from one of the semiconductor chips down through a via in the substrate, laterally across the motherboard, and back up through another via connected to the other semiconductor chip.The use of vias for power, ground, and signal conductors at the same time is unsuitable for the power requirements of state-of-the-art microelectronic applications. Because of high loop inductance of the system, three voltage drops, also known as first, second, and third "droops," occur at different times during use. A number of decoupling capacitors must be used, increasing the cost of the assembly. The signal integrity of signals sent between the semiconductor chips is weakened due to impedance mismatches between the different substrates. The resistance encountered by the current used to power the chips is unnecessarily high because not all of the vias are used for power, so that the power delivered to the semiconductor chips is not maximized. The organic core does not have a good coefficient of thermal expansion, and for mechanical support, it must be made thicker, which adds inductance and increases the first droop.BRIEF DESCRIPTION OF THE DRAWINGSThe invention is described by way of example with reference to the accompanying drawings, wherein:FIG. 1 is a perspective view of an electronic assembly including a motherboard, a microprocessor on a package substrate, a chipset, and a strip of flex tape, according to an embodiment of the invention;FIG. 2 is a side view of the electronic assembly;FIG. 3 is a cross-sectional side view on 3-3 in FIG. 1 of the microprocessor and the package substrate;FIG. 4 is a cross-sectional top plan view on 4-4 in FIG. 2 of a metal core within the package substrate; andFIG. 5 is a bottom view on 5-5 in FIG. 1 of the microprocessor.DETAILED DESCRIPTION OF THE INVENTIONFIG. 1 to FIG. 5 of the accompanying drawings illustrate an electronic assembly according to one embodiment of the invention. The electronic assembly includes a motherboard, a first microelectronic die on a package substrate, a second microelectronic die, and a strip of flex tape interconnecting the microelectronic dies. The package substrate has a metal core with via openings, power conductors connecting the top and bottom surfaces of the substrate and passing through the via openings, and ground conductors interconnecting the metal core with the top and bottom surfaces of the package substrate. The flex tape has signal conductors which interconnect the microelectronic dies. Power is provided to the first microelectronic die via the power conductors. 
IO signals are sent between the microelectronic dies over the signal conductors in the flex tape.FIGS. 1 and 2 illustrate an electronic assembly 10 for use in a computer, including a motherboard 12, a microprocessor 14 on a package substrate 16, a chip set 18, and a strip 20 of flex tape. It should be noted that FIGS. 1 to 5 are merely illustrative and may not be drawn to scale.The motherboard 12 is a large silicon plane having a plurality of sockets for securing and providing electric signals to various microelectronic dies as is commonly understood in the art.The package substrate 16 is square in shape with side lengths 22 of 10 cm and a thickness 24 of 1000 microns. The thickness 24 of the package substrate 16 may range, for example, between 500 and 1300 microns. The substrate 16 has a top surface 26, a bottom surface 28, and an outer edge 30.FIG. 3 illustrates the microprocessor 14 and the package substrate 16. The package substrate 16 includes a metal core 32, a first build-up layer 34, a second build-up layer 36, a plurality of power conductors 38, a plurality of insulating bodies 40, and a plurality of ground conductors 42.The metal core 32 is in the shape of a plane which extends to the outer edge 30 of the package substrate 16 and has a substantially uniform thickness 44 of 250 microns. However, the thickness 44 of the metal core 32 may range, for example, between 200 and 400 microns. The metal core 32 has an upper 46 and a lower 48 surface and is made of copper. The metal core 32 has an array of circular via openings 50 extending from the upper surface 46 to the lower surface 48 thereof. The via openings 50 have diameters 52 of 50 microns.Referring to FIG. 3 and FIG. 4, the power conductors 38 are metallic, cylindrically shaped bodies extending between the top surface 26 and the bottom surface 28 of the package substrate 16 and extend through the via openings 50 of the metal core 32. The length of the power conductors 38 may range, for example, between 500 and 1300 microns, depending on the thickness of the package substrate 16. Central portions 54 of the power conductors 38, which pass through the via openings 50, have diameters 56 of 35 microns. Upper portions 58 of the power conductors 38, located above the via openings 50, have diameters 60 of 50 microns. The power conductors 38 are typically made of a metal such as copper.The insulating bodies 40 are vertically oriented dumbbell shaped bodies with vertical passageways 62 therethrough and are located between the power conductors 38 and the metal core 32. The vertical passageways 62, through which the power conductors 38 pass, have diameters 64 of 35 microns. The insulating bodies 40 have heights 66 of 300 microns, extending 25 microns above and below the metal core 32, small outer diameters 68 of 50 microns extending within the via openings 50, and large outer diameters 70 of 100 microns above and below the via openings 50. The insulating bodies 40 are made of silicon oxide.The ground conductors 42 are metallic, cylindrically shaped bodies extending vertically within the package substrate 16. The ground conductors 42 have diameters 72 of 35 microns. Each ground conductor 42 has an upper piece 74 and a lower piece 76. The upper piece 74 of each ground conductor 42 connects the top surface 26 of the package substrate 12 to the upper surface 46 of the metal core 32. 
The lower piece 76 of each ground conductor 42 lies directly beneath the upper piece 74 of the respective ground conductor 42 and connects the bottom surface 28 of the package substrate 12 to lower surface 48 of the metal core 32. The ground conductors 42 contact the metal core 32 between the via openings 50.The first build-up layer 34 is located adjacent to the upper surface 46 of the metal core 32 and includes of a plurality of alternating conducting and insulating layers, and the second build-up layer 36 is located adjacent to the lower surface 48 of the metal core 32 and comprises a plurality of alternating conducting and insulating layers. The first 34 and second 36 build-up layers each have a thickness 78 of approximately 375 microns.Referring to FIG. 1 and FIG. 2, the microprocessor 14 is located on a central portion of the top surface 26 of the package substrate 16. The microprocessor 14 is square in shape and has side lengths 80 of 3 cm and a thickness 82 of, for example, 700 microns. The microprocessor 14 includes an integrated circuit 84 formed therein, as is commonly understood in the art, and a plurality of substantially spherical, conductive contacts 86 located on a bottom surface 88 thereof. The contacts 86 are placed between the microprocessor 14 and the top surface 26 of the package substrate 16 and are connected to the integrated circuit 84 by a plurality of chip conductors 90. The spherical contacts 86 include power contacts 92, ground contacts 94, and signal contacts 96. The contacts 86 connect and secure the microprocessor 14 to the package substrate 16 and support an upper surface 96 of the microprocessor 14 to a height 100 of approximately 730 microns above the top surface 26 of the package substrate 16.FIG. 5 illustrates the bottom surface 88 of the microprocessor 14. The bottom surface 88 of the microprocessor 14 is covered with contacts 86 arranged in rows. The signal contacts 96 are arranged in two rows along a side 102 of the microprocessor 14. The rows of power 92 and ground 94 contacts are arranged in groups over the rest of the bottom surface 88 of the microprocessor 14.Referring again to FIG. 3, the power contacts 92 of the microprocessor 14 are placed directly above and touch the power conductors 38, and the ground contacts 94 are placed directly above and touch the ground conductors 42.Referring to FIG. 1 and FIG. 2 in combination, the chipset 18 is rectangular with a length 104 of 2 cm, a width 106 of 1 cm, and a thickness 108 of 0.25 cm. The chipset 18 is located on the motherboard 12 and spaced by a distance 110 of 1.5 cm from the package substrate 16. The chipset 18 includes a substrate with a microelectronic die on a central portion of an upper surface thereof with an integrated circuit 112 formed within the die, as is commonly understood in the art. Electrical connectors 114 extend outwards and downwards away from sides 116 of the chipset and are connected to the integrated circuit 112 within the chipset.Referring now to FIGS. 1, 2, 3, and 5, the strip 20 of flex tape extends from beneath the signal contacts 96 located on the microprocessor 14 outwards across the top surface 26 of the package substrate 16 and over the outer edge 30 of the package substrate 16 to the integrated circuit 112 of the chipset 18. The strip 20 is suspended above the motherboard 12. The flex tape strip 20 has a length 118 of 1.75 cm, a width 120 of 1.9 cm, and a thickness 122 of 10 microns. The thickness of the flex tape may range, for example, between 5 and 30 microns. 
The flex tape strip 20 has a substrate made of a flexible material such as Mylar and a plurality of signal conductors 124, or conductive strips, on the substrate interconnecting the signal contacts 96 of the microprocessor 14 to the electrical connectors 114 of the chipset 18. Although only one chipset 18 is shown in this embodiment, it should be understood that more chipsets, or other types of microelectronic dies, may be placed on the motherboard and electrically connected to the microprocessor in the same manner.Referring again to FIG. 2 and FIG. 3, a plurality of conductive pins 126 extend downward from the bottom surface 28 of the package substrate 16 and are inserted into the sockets located on the motherboard, as is commonly understood in the art. The pins 126 include power pins 128 connected to the power conductors 38 and ground pins 130 connected to the ground conductors 42. The pins 126 have diameters 132 of approximately 25 microns. As shown schematically in FIG. 3, the power pins 128 are connected to a first electric terminal 134 of a computer through the sockets of the motherboard, which supplies power to the electronic assembly 10. The ground pins 130 are connected to a second electric terminal 136 of a computer through the sockets of the motherboard 12, which supplies a ground for the electronic assembly 10. The computer has a memory for storing a set of instructions and a processor connected to the memory for executing the instructions, as is commonly understood.In use, an electric current is supplied by the computer to the integrated circuit 84 in the microprocessor 14 through the power pins 128, the power conductors 38, and the power contacts 92. Electric signals, such as IO signals, are then sent from the integrated circuit 84 in the microprocessor 14 through the signal contacts 96, the signal conductors 124, the electrical connectors 114 of the chipset, and into the integrated circuit 112 of the chipset 18. Then other electric signals are sent back to the microprocessor 14 via the signal conductors 124 in the flex tape. The electric signals are sent directly to and from the chipset 18 across the upper surface 26 of the package substrate 16 and above the motherboard 12 without having to travel back through the interior of the package substrate 16 and through the motherboard 12. Thus, all of the pins 126 can be used for either power or ground and need not be sacrificed for providing electrical signals.One advantage of this system is that the first voltage droop is reduced by the use of the metal core and the second and third voltage droops are reduced primarily by the use of the signal conductors because the overall inductance of the electronic assembly is reduced. Therefore, the number of decoupling capacitors can be reduced, which lowers the cost of the assembly. A further advantage is that the signal integrity of the signals sent to and from the memory chips is improved because a more direct pathway is provided for the signals when compared to a conventional circuit board. A further advantage is that, when compared to a conventional circuit board, the number of power and ground vias is doubled; therefore, the resistance is decreased by 50% and the power delivered to the microprocessor is increased. 
A further advantage is that the thickness of the substrate can be reduced without a brace or any additional support needed for the substrate because of the mechanical strength added to the substrate by the use of the metal core.While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative and not restrictive of the current invention, and that this invention is not restricted to the specific constructions and arrangements shown and described since modifications may occur to those ordinarily skilled in the art. |
An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to receive a current access request for a storage media associated with a stream, identify a hint in the current access request which indicates one or more stream characteristics for future access requests from the stream, and handle the current access request based on the indicated one or more stream characteristics for future access requests from the stream. Other embodiments are disclosed and claimed. |
1.An electronic device, which includes:One or more substrates; andLogic coupled to one or more substrates, the logic used to:Receive the current access request for the storage medium associated with the stream,Identifies a prompt in the current access request, the prompt indicating one or more stream characteristics of future access requests from the stream, andThe current access request is processed based on the indicated one or more stream characteristics from future access requests from the stream.2.The device of claim 1, wherein the logic is further used to:One or more of the delay and bandwidth requirements for the flow is determined based on the prompt.3.The device according to any one of claims 1 to 2, wherein the logic is further used to:The priority of the stream is determined based on the prompt.4.The device according to any one of claims 1 to 2, wherein the logic is further used to:The granularity of the access request for the stream is determined based on the prompt.5.The device according to any one of claims 1 to 2, wherein the logic is further used to:The current access request is scheduled based on the indicated one or more flow characteristics from future access requests from the flow.6.The device of claim 5, wherein the logic is further used to:Combine the current access request with one or more of the future access requests from the stream.7.The device according to any one of claims 1 to 2, wherein the storage medium includes a solid state drive.8.An electronic system including:Storage medium; andA controller communicatively coupled to the storage medium, the controller including the following logic for:Receive the current access request for the storage medium associated with the stream,Identifies a prompt in the current access request, the prompt indicating one or more stream characteristics of future access requests from the stream, andThe current access request is processed based on the indicated one or more stream characteristics from future access requests from the stream.9.The system of claim 8, wherein the logic is further used to:One or more of the delay and bandwidth requirements for the flow is determined based on the prompt.10.The system according to any one of claims 8 to 9, wherein the logic is further used to:The priority of the stream is determined based on the prompt.11.The system according to any one of claims 8 to 9, wherein the logic is further used to:The granularity of the access request for the stream is determined based on the prompt.12.The system according to any one of claims 8 to 9, wherein the logic is further used to:The current access request is scheduled based on the indicated one or more flow characteristics from future access requests from the flow.13.The system of claim 12, wherein the logic is further used to:Combine the current access request with one or more of the future access requests from the stream.14.The system according to any one of claims 8 to 9, wherein the storage medium includes a solid state drive.15.A method for controlling a storage device, which includes:Receive the current access request for the storage medium associated with the stream;Identifying a prompt in the current access request, the prompt indicating one or more stream characteristics of future access requests from the stream; andThe current access request is processed based on the indicated one or more stream characteristics from future access requests from the stream.16.The method according to claim 15, further comprising:One or more of the delay and 
bandwidth requirements for the flow is determined based on the prompt.17.The method according to any one of claims 15 to 16, further comprising:The priority of the stream is determined based on the prompt.18.The method according to any one of claims 15 to 16, further comprising:The granularity of the access request for the stream is determined based on the prompt.19.The method according to any one of claims 15 to 16, further comprising:The current access request is scheduled based on the indicated one or more flow characteristics from future access requests from the flow.20.The method of claim 19, further comprising:Combine the current access request with one or more of the future access requests from the stream.21.A controller device, which includes:A component for receiving the current access request for the storage medium associated with the stream;Means for identifying a prompt in the current access request, the prompt indicating one or more stream characteristics of future access requests from the stream; andA component for processing the current access request based on the indicated one or more stream characteristics from the future access request from the stream.22.The device according to claim 21, further comprising:A component for determining one or more of the delay and bandwidth requirement for the flow based on the prompt.23.The device according to any one of claims 21 to 22, further comprising:Means for determining the priority of the stream based on the prompt.24.The device according to any one of claims 21 to 22, further comprising:A component for determining the granularity of the access request for the stream based on the prompt.25.The device according to any one of claims 21 to 22, further comprising:A means for scheduling the current access request based on the indicated one or more flow characteristics from the future access request from the flow. |
Adaptive storage scheduler for SSDBackground techniqueA solid state drive (SSD) can have a variety of specifications, including performance specifications, heat dissipation specifications, and reliability/durability specifications. Performance specifications include standards such as input/output operations per second (IOPS), throughput/bandwidth, and latency. The Non-Volatile Memory (NVM) Express (NVMe) specification (nvmexpress.org) describes various features related to the utilization of streams for accessing storage devices.Description of the drawingsThe material described herein is illustrated in the drawings as an example and not as a limitation. For conciseness and clarity of description, the elements illustrated in each figure are not necessarily drawn to scale. For example, the size of some elements may be exaggerated relative to other elements for clarity. In addition, where deemed appropriate, reference numerals have been repeated in each figure to indicate corresponding or similar elements. In each figure:Fig. 1 is a block diagram of an example of an electronic system according to an embodiment;Fig. 2 is a block diagram of an example of an electronic device according to an embodiment;3A to 3C are flowcharts of an example of a method for controlling a storage device according to an embodiment;Figure 4 is a block diagram of an example of a distributed computing environment according to an embodiment;Fig. 5 is a block diagram of an example of a computing system according to an embodiment;Fig. 6 is a block diagram of another example of a computing system according to an embodiment; andFIG. 7 is a block diagram of an example of a solid state drive (SSD) device according to an embodiment.Detailed waysOne or more embodiments or implementations will now be described with reference to the disclosed drawings. Although specific configurations and arrangements are discussed, it should be understood that this was done for illustrative purposes only. Those skilled in the relevant art will realize that other configurations and arrangements may be adopted without departing from the spirit and scope of the description. It will be apparent to those skilled in the relevant art that the techniques and/or arrangements described herein may also be adopted in a variety of other systems and applications in addition to those described herein.Although the following description sets forth various implementations that can appear, for example, in architectures such as system-on-chip (SoC) architectures, the implementations of the techniques and/or arrangements described herein are not limited to specific architectures and/or computing systems , And can be implemented by any architecture and/or computing system for similar purposes. For example, various architectures using, for example, multiple integrated circuit (IC) chips and/or packages, and/or various computing devices and/or consumer electronics (CE) devices (such as set-top boxes, smart phones, etc.) can implement this document The techniques and/or arrangements described in. In addition, although the following description may elaborate on many specific details (such as the logical implementation of system components, types and relationships, logical division/integration options, etc.), the claimed subject matter can be practiced without such specific details. . 
In other instances, some materials such as, for example, control structures and full software instruction sequences may not be shown in detail so as not to obscure the materials disclosed herein.The materials disclosed herein can be implemented by hardware, firmware, software, or any combination thereof. The materials disclosed herein can also be implemented as instructions stored on a machine-readable medium, and the instructions can be read and executed by one or more processors. A machine-readable medium may include any medium and/or mechanism for storing or transmitting information in a form readable by a machine (eg, computing device). For example, machine-readable media may include read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustic, or other forms of propagated signals (for example, Carrier wave, infrared signal, digital signal, etc.) and others.References in the specification to "one implementation," "an implementation," "an example implementation," etc. indicate that the described implementation may include specific features, structures, or characteristics, but each embodiment may not necessarily include the specific features , Structure or characteristics. Furthermore, such phrases do not necessarily refer to the same implementation. In addition, when a specific feature, structure, or characteristic is described in conjunction with an embodiment, it is considered that it is within the knowledge of those skilled in the art to implement such a feature, structure, or characteristic in combination with other implementations (whether or not explicitly described in this document). Within range.The various embodiments described herein may include a memory component and/or an interface to the memory component. Such memory components may include volatile and/or non-volatile (NV) memory. The volatile memory may be a storage medium that requires power to maintain the state of the data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic RAM (DRAM) or static RAM (SRAM). One specific type of DRAM that can be used in memory modules is synchronous dynamic RAM (SDRAM). In certain embodiments, the DRAM of the memory component may comply with standards promulgated by the Joint Electronic Equipment Engineering Council (JEDEC), such as JESD79F for double data rate (DDR) SDRAM, JESD79-2F for DDR2 SDRAM, and JESD79-2F for DDR3 JESD79-3F for SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 ( These standards are available from jedec.org). This standard (and similar standards) may be referred to as a DDR-based standard, and the communication interface of a storage device that implements this standard may be referred to as a DDR-based interface.NV memory (NVM) may be a storage medium that does not require power to maintain the state of data stored by the medium. In one embodiment, the memory device may include a block addressable memory device, such as a memory device based on NAND or NOR technology. Memory devices may also include future generations of non-volatile devices, such as three-dimensional (3D) cross-point memory devices, or other byte-addressable, write-in-place non-volatile memory devices. 
In one embodiment, the memory device may be or may include a memory device using the following: chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single-level or multi-level phase change memory (PCM) , Resistive memory, nanowire memory, ferroelectric transistor RAM (FeTRAM), antiferroelectric memory, magnetoresistive RAM (MRAM) memory incorporating memristor technology, including metal oxide matrix, oxygen vacancy matrix and conductive bridge RAM (CB-RAM) resistive memory, or spin transfer torque (STT)-MRAM, devices based on spintronic magnetic junction memory, devices based on magnetic tunnel junction (MTJ), based on DW (domain wall) and SOT (Spin orbit transfer) devices, thyristor-based memory devices, or any combination of the above, or other memory. The memory device can refer to the die itself and/or to the packaged memory product. In certain embodiments, the memory component with non-volatile memory can comply with one or more standards promulgated by JEDEC, such as JESD218, JESD219, JESD220-1, JESD223B, JESD223-1 or other suitable standards (referenced here The JEDEC standard is available from jedec.org).Referring to FIG. 1, an embodiment of an electronic system 10 may include a storage medium 12 and a controller 11 communicatively coupled to the storage medium 12. The controller 11 may include logic 13 for: receiving a current access request for the storage medium 12 associated with the stream; identifying a prompt in the current access request, the prompt indicating one or more future access requests from the stream Flow characteristics; and processing the current access request based on the indicated one or more flow characteristics of future access requests from the flow. For example, the logic 13 may be configured to: determine one or more of the delay and bandwidth requirements for the flow based on the prompt; determine the priority of the flow based on the prompt; and/or determine the priority for the flow based on the prompt The granularity of the flow's access request. In some embodiments, the logic 13 may be further configured to schedule the current access request based on the indicated one or more flow characteristics of future access requests from the flow. For example, the logic 13 may also be configured to combine the current access request with one or more of the future access requests from the stream. In any embodiment herein, the storage medium 12 may include a permanent storage medium, such as a solid state drive (SSD).The embodiments of each of the aforementioned controller 11, storage medium 12, logic 13, and other system components may be implemented by hardware, software, or any suitable combination thereof. For example, the hardware implementation may include: configurable logic (such as, for example, programmable logic array (PLA), field programmable gate array (FPGA), complex programmable logic device (CPLD)), or using circuit technology (such as, for example, Application-specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology) fixed-function logic hardware, or any combination thereof. Embodiments of the controller 11 may include a general-purpose controller, a special-purpose controller, a storage controller, a memory controller, a microcontroller, a general-purpose processor, a special-purpose processor, a central processing unit (CPU), an execution unit, and the like. 
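As a purely illustrative sketch of the behavior attributed to logic 13 above (deriving latency and bandwidth requirements, a priority, and an access granularity for a stream from a hint carried in a request, and deciding how to service the request), the following Python fragment uses invented field names such as latency_class and granularity_bytes; the disclosure does not mandate any particular hint encoding.

from dataclasses import dataclass

@dataclass
class StreamHint:
    latency_class: str       # e.g., "low" or "relaxed" (illustrative encoding)
    bandwidth_mbps: int      # expected bandwidth of future requests in the stream
    priority: int            # relative priority of the stream
    granularity_bytes: int   # typical size of future requests in the stream

@dataclass
class AccessRequest:
    stream_id: int
    payload: bytes
    hint: StreamHint

def handle_request(req: AccessRequest, media_write_unit: int = 512) -> str:
    """Decide how to service a hinted access request (behavioral sketch only)."""
    if req.hint.latency_class == "low" or req.hint.priority > 0:
        return "dispatch-immediately"     # latency-sensitive stream: go straight to media
    if req.hint.granularity_bytes < media_write_unit:
        return "buffer-and-merge"         # small writes expected: coalesce with future requests
    return "schedule-normally"            # relaxed stream already at media granularity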
In some embodiments, the storage medium 12, the logic 13, and/or other system memory may be located in various components including the controller 11 (for example, on the same die), or collocated with the various components.Alternatively or additionally, all or part of these components may be implemented in one or more modules as a set of logical instructions to be executed by a processor or a computing device stored in a machine-readable or computer-readable storage medium , The storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc. For example, the computer program code used to perform the operations of these components can be written in any combination of one or more operating system (OS) suitable/appropriate programming languages, including object-oriented programming languages such as PYTHON , PERL, JAVA, SMALLTALK, C++, C#, etc.), and conventional procedural programming languages (such as the "C" programming language or similar programming languages). For example, the storage medium 12, other storage mediums, or other system memories may store a set of instructions that, when executed by the controller 11, cause the system 10 to implement one or more components, features, or aspects of the system 10 (for example, logic 13, It receives the current access request for the storage medium 12 associated with the stream, identifies the prompt in the current access request, and processes the current access request based on the prompt for the stream, etc.).Turning now to FIG. 2, an embodiment of the electronic device 15 may include one or more substrates 16, and logic 17 coupled to the one or more substrates 16. The logic 17 may be configured to: receive a current access request for a storage medium associated with the stream; identify a prompt in the current access request that indicates one or more stream characteristics of future access requests from the stream; and The flow's future access request indicates one or more flow characteristics to process the current access request. For example, the logic 17 may be configured to: determine one or more of the delay and bandwidth requirements for the flow based on the prompt; determine the priority of the flow based on the prompt; and/or determine the priority for the flow based on the prompt The granularity of the flow's access request. In some embodiments, the logic 17 may be further configured to schedule the current access request based on the indicated one or more flow characteristics of future access requests from the flow. For example, the logic 17 may also be configured to combine the current access request with one or more of the future access requests from the stream. In any of the embodiments herein, the storage medium may include an SSD.Embodiments of logic 17 may be implemented with systems, apparatuses, computers, devices, etc. (for example, such as those described herein). More specifically, the hardware implementation of the logic 17 may include: configurable logic (such as, for example, PLA, FPGA, CPLD), or in the form of fixed-function logic hardware using circuit technology (such as, for example, ASIC, CMOS or TTL technology) , Or any combination thereof. Alternatively or additionally, the logic 17 may be implemented in one or more modules as stored in a machine-readable or computer-readable storage medium (such as RAM, ROM, PROM, firmware, flash memory, etc.). A set of logical instructions executed by a processor or computing device. 
For example, computer program code to carry out the operations of these components may be written in any combination of one or more OS applicable/appropriate programming languages, including an object-oriented programming language such as PYTHON, PERL, JAVA, SMALLTALK, C++, C#, or the like, and a conventional procedural programming language, such as the "C" programming language or similar programming languages.
For example, the logic 17 may be implemented on a semiconductor apparatus, which may include the one or more substrates 16, with the logic 17 coupled to the one or more substrates 16. In some embodiments, the logic 17 may be at least partly implemented in one or more of configurable logic and fixed-function hardware logic on the semiconductor substrate(s) (e.g., silicon, sapphire, gallium arsenide, etc.). For example, the logic 17 may include a transistor array and/or other integrated circuit components coupled to the substrate(s) 16 with transistor channel regions that are positioned within the substrate(s) 16. The interface between the logic 17 and the substrate(s) 16 may not be an abrupt junction. The logic 17 may also be considered to include an epitaxial layer that is grown on an initial wafer of the substrate(s) 16.
Turning now to FIGS. 3A to 3C, an embodiment of a method 20 of controlling a storage device may include: at block 21, receiving a current access request for a storage medium associated with a stream; at block 22, identifying a prompt in the current access request, the prompt indicating one or more stream characteristics of future access requests from the stream; and at block 23, processing the current access request based on the indicated one or more stream characteristics of the future access requests from the stream. For example, the method 20 may include: at block 24, determining one or more of a latency and a bandwidth requirement for the stream based on the prompt; at block 25, determining a priority of the stream based on the prompt; and/or, at block 26, determining a granularity of access requests for the stream based on the prompt. Some embodiments of the method 20 may further include, at block 27, scheduling the current access request based on the indicated one or more stream characteristics of future access requests from the stream. For example, the method 20 may also include, at block 28, combining the current access request with one or more of the future access requests from the stream. In any of the embodiments herein, at block 29, the storage medium may include an SSD.
Embodiments of the method 20 may be implemented in a system, apparatus, computer, device, etc. (for example, such as those described herein). More specifically, hardware implementations of the method 20 may include configurable logic (such as, for example, PLA, FPGA, CPLD), fixed-function logic hardware using circuit technology (such as, for example, ASIC, CMOS, or TTL technology), or any combination thereof. Alternatively or additionally, the method 20 may be implemented in one or more modules as a set of logic instructions stored in a machine-readable or computer-readable storage medium (such as RAM, ROM, PROM, firmware, flash memory, etc.) to be executed by a processor or computing device.
For example, computer program code to carry out the operations of these components may be written in any combination of one or more OS applicable/appropriate programming languages, including an object-oriented programming language such as PYTHON, PERL, JAVA, SMALLTALK, C++, C#, or the like, and a conventional procedural programming language, such as the "C" programming language or similar programming languages. For example, the method 20 may be implemented on a computer-readable medium as described in connection with Examples 22 to 28 below. Embodiments or portions of the method 20 may be implemented in firmware, applications (for example, through an application programming interface (API)), or driver software running on an operating system (OS). Additionally, logic instructions may include assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, state-setting data, configuration data for integrated circuits, state information that personalizes electronic circuitry, and/or other structural components that are native to the hardware (for example, a host processor, central processing unit/CPU, microcontroller, etc.).
Some embodiments may advantageously provide adaptive storage scheduling technology for emerging workloads with relaxed latency requirements. By way of background and not limitation, data centers may provide storage services across a variety of use cases, including, for example, edge cloud services, Internet of Things (IoT) applications, manufacturing, manned aviation, unmanned air transportation, autonomous and assisted vehicles, health monitoring, intelligent monitoring, and so on. Some embodiments may advantageously provide technology for efficient management, movement, and sharing of data among many different devices, many of which are themselves part of a larger group of devices.
Some conventional memory and storage systems may be less efficient when reading from or writing to the media at different granularities and/or service level agreement (SLA) requirements, because they lack sufficient information from the originating software entity (for example, a service or a device) about the expected data stream. According to some embodiments, several writes from a device or service arriving at the storage device can be merged before being written to the medium, depending on the granularity, latency, and bandwidth requirements indicated for the stream after the message is sent from the service or device. For example, if a first device is sending continuous 128B writes to the storage medium every 1 ms with a low-priority latency requirement, and the storage medium writes a 512B payload to the medium, then the storage device can hold four writes in a buffer over 4 ms and write them to the medium once the buffer is full. In another example, if a second device is sending 64B writes with a high latency requirement, each 64B write request to the storage device can go directly to the medium. In some conventional memory and storage systems, several read requests for smaller data sizes from the same entity (for example, a service or a device) generate independent small storage requests, which may lead to poor resource utilization. According to some embodiments, if a service is accessed with a granularity of 64 bytes per 0.5 ms under low (i.e., relaxed) latency requirements, multiple reads can instead be combined in a single data packet from the storage device to the platform.
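As a concrete illustration of the write-coalescing example above (128B writes arriving every 1 ms being accumulated into a 512B media payload), the following C sketch shows one plausible buffering scheme. The names, sizes, and stub media_write() function are assumptions for illustration; a real device would also flush the buffer when a per-request latency SLA is about to expire, which is omitted here for brevity.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define MEDIA_PAYLOAD 512u    /* payload size written to the medium        */
#define HOST_WRITE    128u    /* host write size; assumed to divide evenly */

struct stream_buffer {
    uint8_t  data[MEDIA_PAYLOAD];
    uint32_t fill;            /* bytes accumulated so far */
};

static void media_write(const uint8_t *buf, uint32_t len) {
    (void)buf;
    printf("media write of %u bytes\n", (unsigned)len);  /* datapath stand-in */
}

/* Accumulate a relaxed-SLA host write; flush once a full payload is ready. */
void coalesce_write(struct stream_buffer *sb, const uint8_t *src, uint32_t len)
{
    memcpy(sb->data + sb->fill, src, len);   /* assumes len divides MEDIA_PAYLOAD */
    sb->fill += len;
    if (sb->fill >= MEDIA_PAYLOAD) {
        media_write(sb->data, sb->fill);
        sb->fill = 0;
    }
}

int main(void) {
    struct stream_buffer sb = { .fill = 0 };
    uint8_t chunk[HOST_WRITE] = { 0 };
    for (int ms = 0; ms < 8; ms++)           /* eight 128B writes -> two flushes */
        coalesce_write(&sb, chunk, HOST_WRITE);
    return 0;
}
```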
Some embodiments may provide a more scalable, layered, and power-efficient architecture (for example, by providing efficient interoperability with accelerators in edge and IoT deployments). Some embodiments may allow different priorities and SLAs to be honored in an agile manner, so that whether a given request is processed as part of a batch or dispatched immediately may be based on a real-time control loop in which the determination can be readily decided, designated, and implemented end to end. Some embodiments may provide efficient data storage solutions for data payloads of different granularities (i.e., 64 bytes, 4KB, 1MB, etc.). To implement one or more of the foregoing, some embodiments may provide technology that allows an entity to provide a prompt to the storage device, and the storage device may use the prompt to implement a more efficient data return path from the storage device to the entity, and/or to implement an adaptive and more efficient write policy to the medium based on latency, bandwidth, SLA, and other stream characteristic prompts (for example, granularity).
In some embodiments, a storage device or memory device may include technology to use service or device prompts, related to the characteristics of a data stream, at the storage/memory device (for example, a locally attached platform storage device, a just a bunch of disks (JBOD) enclosure, etc.) to realize intelligent request processing. For example, the prompts may be used to provide more efficient reading and writing of data from/to the storage medium while satisfying the various SLAs of the applications.
Referring to FIG. 4, a distributed computing environment 40 may include edge devices 41 (for example, device 1 through device N, where N>1) that transmit requests through an access point of a radio access network (RAN) 42 to a local breakout and/or edge service execution platform 43 (which may include, for example, on-premise equipment, a micro cloud, etc.). The respective edge devices 41 may have different characteristics and/or requirements. For example, device 1 (for example, a car) may have the following characteristics: latency SLA = high; low bandwidth (LBW) SLA = medium; service type = WR block 64B. Device 2 (for example, a mobile phone) may have the following characteristics: latency SLA = medium; LBW SLA = low; service type = WR block 128B. Other edge devices 41 may have other characteristics. According to some embodiments, the edge devices 41 may provide prompts about these and/or other characteristics, associated with a stream identifier (e.g., a stream ID), to the local breakout and/or edge service execution platform 43 through the RAN 42. For example, a data center may include a storage device 44 configured to merge write payloads into a media buffer based on the stream and SLA prompts, and to merge reads to be returned to the platform into the media buffer based on the stream and SLA prompts. For example, the storage device 44 may include stream merging logic 44a, and SLA and scheduling logic 44b, which are configured to merge the respective read and write payloads based on the stream prompts.
In some embodiments, a network interface 45 (e.g., a network interface card (NIC), a 5G card, a cellular V2X card, etc.) may be configured to receive the prompt (e.g., nominally formatted as prompt(stream ID, priority={BW, Lat, ...}, granularity)) and to use a communication channel 46a (for example, a compute express link (CXL) interconnect, PCIE, etc.) that is configured to propagate the stream prompts (for example, SLA, message granularity, etc.) from the network interface 45 to the platform 43a. Another communication channel 46b may be configured to propagate the stream prompts from the platform 43a to the storage device 44. In addition to the edge devices 41, other devices or services may also make access requests to the storage device 44 through the platform 43a. For example, a service 47 may have the following characteristics: latency SLA = low; LBW SLA = low; service type = RD block 64B.
According to some embodiments, the edge devices 41 or the service(s) 47 may provide two types of prompts (for example, the device/service may be in the same edge platform, or the request may be issued from outside the platform itself). The first type of prompt may indicate how quickly the payload from a particular stream needs to be written to the storage device 44 (for example, the total stream bandwidth requirement and the maximum latency per individual request or per group(s) of requests). Processing a particular stream will depend on the amount of data that needs to be stored to the medium during a given period of time, the injection rate for that stream, and the latency SLA associated with each individual request. The second type of prompt may indicate how quickly a read request from a particular stream needs to be returned to the client. In this case, similar to writes, processing a particular stream may depend on how much data needs to be returned to the client and on the SLA associated with each individual request. For both types of prompts, a less restrictive latency SLA per request provides more options for the stream merging logic 44a of the storage device 44 to merge different requests (for example, from the same stream) going to the device 44, or to merge requests going from the device 44 to the platform 43a (for example, or to the network in the case of a JBOD).
Advantageously, some embodiments of a storage device/memory device may include logic to utilize either or both types of prompts to schedule and combine read and write requests on a per-stream basis, in order to improve or maximize the bandwidth utilization of the link to the platform and of the link to the specific storage device. In some embodiments, this logic may be implemented in a storage device or in a memory controller (e.g., which may be locally attached, network attached, pooled, etc.). Some embodiments may be particularly beneficial for dynamic integration of multiple devices (including devices in a distributed system), and for flexible and prioritized processing of requests at CPUs, GPUs, accelerators, and so on.
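The nominal prompt format sketched above, prompt(stream ID, priority={BW, Lat, ...}, granularity), could be carried end to end as a small fixed-size record. The following C sketch is one hypothetical encoding, covering the two prompt types just described; the field sizes and names are assumptions and are not a format defined by this disclosure, by CXL, or by PCIe.

```c
#include <stdint.h>

/* Hypothetical encoding of prompt(stream ID, priority = {BW, Lat, ...},
 * granularity) as it might be forwarded from the network interface 45 over
 * channel 46a to the platform 43a, then over channel 46b to device 44. */
enum hint_kind {
    HINT_WRITE_URGENCY = 0,  /* first type: how quickly writes must reach the medium   */
    HINT_READ_RETURN   = 1   /* second type: how quickly reads must return to the client */
};

struct stream_prompt {
    uint32_t stream_id;
    uint8_t  kind;            /* enum hint_kind */
    uint8_t  reserved[3];
    uint32_t bandwidth_kbps;  /* total stream bandwidth requirement (BW)         */
    uint32_t max_latency_us;  /* maximum latency per request or request group    */
    uint32_t granularity;     /* expected payload size of requests in the stream */
};

/* A less restrictive latency SLA leaves more room to merge requests. */
static inline int prompt_allows_merging(const struct stream_prompt *p,
                                        uint32_t merge_window_us)
{
    return p->max_latency_us > merge_window_us;
}
```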
Referring to FIG. 5, an embodiment of a computing system 50 may include a storage device 51 communicatively coupled to a CPU 52 (e.g., via PCIE), which is also communicatively coupled to a NIC 53 and a discrete device target 54 (e.g., an accelerator). The storage device 51 may include: ingress logic 51a, an interface 51b, a pending request scheduler 51c, egress logic 51d, merge logic 51e, a pending request queue 51f, and per-stream merge logic 51g, coupled as shown. The CPU 52 may include PCIE logic 52a and an on-die interconnect (ODI) 52f, coupled as shown. Each PCIE logic 52a may include: an interface 52b, a to-peer send queue 52c, a to-device merge queue 52d, and a scheduler 52e. The NIC 53 may include: ingress logic 53a, an interface 53b, a pending request scheduler 53c, a pending request queue 53d, and egress logic 53e.
According to some embodiments, each discrete intermediate device in the computing system 50 (e.g., such as the NIC 53) may be configured with logic that allows a sender (e.g., such as a networked client) to have the granularity and priority of the sender's requests honored at the intermediate device and also at subsequent receivers. The specific implementation may differ for different discrete devices, but embodiments of a discrete intermediate device may include technology to provide the following capabilities, visible from the sender's perspective: an ingress queue, a pending request queue, and a scheduler.
A request submitted by a sender or sending device, which the sender wants routed to one or more devices at other peers, may be received into the ingress queue. According to some embodiments, the ingress logic may interface with logic that handles incoming requests according to the priority of the request (for example, based on a multi-metric vector specified by latency, BW, etc.) and the desired granularity for sending to the target device (for example, to improve or maximize performance, PCIE link utilization, etc.). For example, a submitted request may be entered into the pending request queue, and the pending request queue may be sorted by priority. In some embodiments, the priorities of these requests may be recalculated over time. For example, a request with a low priority may generally have its priority increase over time. In some embodiments, the priority of a request may be calculated with a formula. For example, the formula may consider various sub-priorities for different performance parameters, power, efficiency parameters, and so on. In one example, a low-latency category (e.g., a speed parameter) may be given a weight of 80%, and a bandwidth parameter may be given a weight of 20% (e.g., to reflect the need to minimize queuing time to achieve a fast response). The pending requests may be scheduled by a scheduler (for example, the pending request scheduler 53c for the pending request queue 53d of the NIC 53). The scheduler may generally pick up the first pending request (e.g., when possible) and process the request. For example, processing the request may cause the request to be sent to the target (e.g., the CPU 52 in this example) via an exit point (e.g., the egress logic 53e for the NIC 53).
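To make the weighted-priority idea above concrete, the following C sketch computes a request priority as a weighted sum of sub-priorities (the 80%/20% latency/bandwidth split from the example) plus a simple aging term so that low-priority requests rise over time. The weights, the scaling of the aging term, and the field names are assumptions for illustration, not values taken from the disclosure.

```c
#include <stdint.h>

struct pending_request {
    uint32_t latency_score;    /* higher = more latency sensitive (0..100)   */
    uint32_t bandwidth_score;  /* higher = more bandwidth sensitive (0..100) */
    uint64_t enqueue_time_us;  /* when the request entered the pending queue */
};

/* Weighted priority with aging: the scheduler services the highest value first. */
uint32_t request_priority(const struct pending_request *r, uint64_t now_us)
{
    uint32_t base = (80u * r->latency_score + 20u * r->bandwidth_score) / 100u;
    uint32_t aged = (uint32_t)((now_us - r->enqueue_time_us) / 1000u); /* +1 per ms queued */
    return base + aged;
}
```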
At the CPU 52 (or, for example, at another target device), the PCIE logic 52a may be configured with stream prompt logic for its role as both a transmitter and a receiver. When operating as a transmitter, the PCIE logic 52a may be configured with technology to have the interface 52b enqueue requests with the specified priority, granularity, etc., based on the stream prompts in the requests; to have the to-peer send queue 52c store the pending requests; and to have the scheduler 52e select which pending requests on the queue 52c are sent to the target PCIE logic (for example, the PCIE logic that manages the target device). For example, when the scheduler 52e selects a request, the scheduler 52e may send a message header with the priority, granularity, etc., to the PCIE logic block that manages the target device. The scheduler 52e may then start processing the request by sending one or more ODI packets to the target PCIE logic block until all of the original payload has been transmitted. Advantageously, multiple requests from queues targeting different PCIE logic blocks (for example, different devices or CPUs) may be processed simultaneously. Various policies or heuristics may control how requests are spread among the different targets (such as saturation of the ODI toward a target, alternating latency-sensitive requests and bandwidth-sensitive requests, and so on). In the earlier example, the logic could decide to alternate messages from transactions 1 and 2 onto the ODI in different 64-byte batches (for example, or at the ODI channel bandwidth).
When operating as a receiver (for example, when processing a request received from peer PCIE logic 52a), the PCIE logic 52a may be configured with technology to have the to-device merge queue 52d store in-flight requests from the peer PCIE logic blocks that map to specific device transactions. For example, the queue 52d may store all upcoming payloads from the ODI and the peer PCIE logic. The queue 52d may further be configured to combine and send these payloads. For example, the queue 52d may include logic to map payloads that arrive in the queue 52d at the ODI granularity into payloads of equal or higher granularity, depending on, for example, the granularity specified through the interface 52b and supported by the link, and/or on the priority of the different pending requests. Where a discrete device is the target, the discrete device target may process the incoming messages as usual, according to the normal operation of the device.
In some embodiments, the ingress logic 51a may provide technology for an ingress queue into the storage device 51. The platform may submit requests, to be routed by the platform to the storage medium of the storage device, to the ingress logic 51a. The interface 51b may include logic to process the incoming requests in the ingress queue according to a first interface and a second interface for the incoming requests, where the first interface is used to specify one or more characteristics associated with a particular stream (for example, the characteristics may include latency (per request), bandwidth (for the entire stream), the SLA associated with the particular stream (for read and/or write streams), a prompt about the expected size of the payloads to be written to the storage device, etc.), and the second interface is used to submit a read and/or write request to the storage medium. For example, the read and write interface may be extended so that a request can specify the stream ID, an optional latency SLA for the request (which may, for example, default to the priority from the ingress logic 51a), and, in the case of a write request, the payload.
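One way to picture the two interfaces just described, one that registers per-stream characteristics and one that submits reads and writes tagged with a stream ID, is the following hypothetical C interface sketch (prototypes only). The function names, signatures, and field layout are illustrative assumptions and are not defined by this disclosure.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical first interface: register the characteristics of a stream
 * (per-request latency, whole-stream bandwidth, read/write SLA, expected
 * payload size) so the device can save them, e.g., keyed by stream ID. */
struct stream_characteristics {
    uint32_t latency_us;        /* per-request latency requirement          */
    uint32_t bandwidth_kbps;    /* bandwidth for the entire stream          */
    uint32_t expected_payload;  /* expected size of payloads to be written  */
    uint8_t  read_sla;          /* SLA class for reads                      */
    uint8_t  write_sla;         /* SLA class for writes                     */
};

int register_stream(uint32_t stream_id, const struct stream_characteristics *c);

/* Hypothetical second interface: submit a read or write that carries the
 * stream ID and an optional per-request latency SLA override (0 = default). */
int submit_write(uint32_t stream_id, uint64_t lba,
                 const void *payload, size_t len, uint32_t latency_sla_us);
int submit_read(uint32_t stream_id, uint64_t lba,
                void *buf, size_t len, uint32_t latency_sla_us);
```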
In some embodiments, the merge logic 51e may include technology to support a content addressable memory (CAM) structure per stream (for example, up to the maximum number of streams supported in the logic) that holds all write requests that have not yet been sent to the medium. Each CAM may include the address of the request to be written to the memory or storage device, the cycle at which the request arrived, and the payload.
In some embodiments, the pending request scheduler 51c may include logic to process each new request. For example, when a write request arrives at the storage device 51, the latency SLA for the particular stream may be checked. For example, various portions of the logic of the storage device 51 may utilize the stream characteristics from a prompt, saved in association with the stream ID, to create or add entries/values in a table 51h. Those and other portions may then look up information from the table 51h to identify, based on the stream ID of a stream, what prompts/information are available for that stream. If the latency SLA value for the particular stream in the table 51h indicates that the request should be submitted to the medium as soon as possible, the request goes to the non-delayed request logic of the device. If the latency SLA value for the particular stream in the table 51h indicates that the request can be delayed, the request may be stored in the per-stream merge logic 51g. The logic of the scheduler 51c may traverse all of the pending requests in the per-stream merge logic 51g every cycle and select requests according to when each request needs to be written to the medium. The scheduler 51c may prioritize requests that can be combined based on the size of the bus to the medium. In each cycle, the logic of the scheduler 51c may select the requests that are finally submitted to the medium. The final selection may be based on the priority of the stream and on the SLA established for the stream. The logic of the scheduler 51c may merge the selected payloads from the CAM and submit them to the medium.
The egress logic 51d may include logic to implement a process similar to that of the scheduler 51c, but for data to be returned to the network or to the platform. The egress logic 51d may cooperate with the merge logic 51e (which may, for example, be implemented similarly with CAMs) for the data to be returned to the platform. For example, each CAM may include the address of the request to be written to the memory or storage device, the cycle at which the request arrived, the payload, and scheduling logic to merge the requests on the CAM.
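The following compact C sketch approximates the write path described for the scheduler 51c: look up the stream's saved prompt (a table-51h-like structure), send urgent writes straight to the medium, park relaxed writes in a per-stream accumulator standing in for the per-stream CAM, and flush once enough payload has accumulated to fill the bus to the medium. All names, sizes, and the urgency threshold are assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_STREAMS 16u    /* assumed table capacity            */
#define MEDIA_BUS   512u   /* assumed bus width to the medium   */
#define URGENT_US   100u   /* assumed "submit immediately" SLA  */

struct stream_entry {              /* one row of a table-51h-like structure */
    bool     valid;
    uint32_t latency_sla_us;
    uint32_t bytes_pending;        /* payload parked for this stream */
};

static struct stream_entry table51h[MAX_STREAMS];

static void submit_to_medium(uint32_t stream_id, uint32_t bytes) {
    (void)stream_id; (void)bytes;  /* stand-in for the media datapath */
}

void schedule_write(uint32_t stream_id, uint32_t bytes)
{
    struct stream_entry *e = &table51h[stream_id % MAX_STREAMS];
    if (!e->valid || e->latency_sla_us < URGENT_US) {
        submit_to_medium(stream_id, bytes);      /* non-delayed request path */
        return;
    }
    e->bytes_pending += bytes;                   /* park in per-stream merge  */
    if (e->bytes_pending >= MEDIA_BUS) {         /* enough to fill the bus    */
        submit_to_medium(stream_id, e->bytes_pending);
        e->bytes_pending = 0;
    }
}
```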
The technology discussed herein may be utilized in various computing systems (for example, including a non-mobile computing device such as a desktop, workstation, server, rack system, etc., a mobile computing device such as a smartphone, tablet, ultra-mobile personal computer (UMPC), laptop computer, ULTRABOOK computing device, smart watch, smart glasses, smart bracelet, etc., and/or a client/edge device such as an Internet of Things (IoT) device (e.g., a sensor, a camera, etc.)).
Turning now to FIG. 6, an embodiment of a computing system 100 may include one or more processors 102-1 through 102-N (generally referred to herein as "processor(s) 102"). The processors 102 may communicate via an interconnect or bus 104. Each processor 102 may include various components, some of which are only discussed with reference to processor 102-1 for clarity. Accordingly, each of the remaining processors 102-2 through 102-N may include the same or similar components as those discussed with reference to the processor 102-1.
In some embodiments, the processor 102-1 may include one or more processor cores 106-1 through 106-M (referred to herein as "core(s) 106," or more generally as "core 106"), a cache 108 (which may be a shared cache or a private cache in various embodiments), and/or a router 110. The processor cores 106 may be implemented on a single integrated circuit (IC) chip. Moreover, the chip may include one or more shared and/or private caches (such as the cache 108), buses or interconnections (such as a bus or interconnection 112), logic 170, memory controllers, or other components.
In some embodiments, the router 110 may be used to communicate between various components of the processor 102-1 and/or the system 100. Moreover, the processor 102-1 may include more than one router 110. Furthermore, the multitude of routers 110 may be in communication to enable data routing between various components inside or outside of the processor 102-1.
The cache 108 may store data (e.g., including instructions) utilized by one or more components of the processor 102-1, such as the cores 106. For example, the cache 108 may locally cache data stored in a memory 114 for faster access by the components of the processor 102. As shown in FIG. 6, the memory 114 may be in communication with the processors 102 via the interconnect 104. In some embodiments, the cache 108 (which may be shared) may have various levels; for example, the cache 108 may be a mid-level cache and/or a last-level cache (LLC). Also, each of the cores 106 may include a level 1 (L1) cache (116-1) (generally referred to herein as "L1 cache 116"). Various components of the processor 102-1 may communicate with the cache 108 directly, through a bus (for example, the bus 112), and/or through a memory controller or hub.
As shown in FIG. 6, the memory 114 may be coupled to other components of the system 100 through a memory controller 120. The memory 114 may include volatile memory and may be interchangeably referred to as main memory or system memory. Even though the memory controller 120 is shown as being coupled between the interconnect 104 and the memory 114, the memory controller 120 may be located elsewhere in the system 100. For example, in some embodiments, the memory controller 120, or a portion thereof, may be provided within one of the processors 102.
The system 100 may communicate with other devices/systems/networks via a network interface 128 (for example, which is in communication with a computer network and/or the cloud 129 via a wired or wireless interface). For example, the network interface 128 may include an antenna (not shown) to communicate wirelessly with the network/cloud 129 (for example, via an Institute of Electrical and Electronics Engineers (IEEE) 802.11 interface (including IEEE 802.11a/b/g/n/ac, etc.), a cellular interface, 3G, 4G, LTE, Bluetooth, etc.).
The system 100 may also include a storage device, such as an SSD device 130 coupled to the interconnect 104 via SSD controller logic 125. Hence, the logic 125 may control access by various components of the system 100 to the SSD device 130.
Furthermore, even though the logic 125 is shown in FIG. 6 as directly coupled to the interconnect 104, the logic 125 may alternatively communicate with one or more other components of the system 100 via a storage bus/interconnect (such as a SATA (Serial Advanced Technology Attachment) bus, a peripheral component interconnect (PCI) (or PCI Express (PCIe)) interface, NVM Express (NVMe), etc.), for example where the storage bus is coupled to the interconnect 104 via some other logic such as a bus bridge, chipset, etc. Additionally, in various embodiments, the logic 125 may be incorporated into memory controller logic (such as those discussed with reference to FIG. 7) or provided on the same integrated circuit (IC) device (e.g., provided on the same circuit board device as the SSD device 130, or provided in the same housing as the SSD device 130).
Furthermore, the logic 125 and/or the SSD device 130 may be coupled to one or more sensors (not shown) to receive information (for example, in the form of one or more bits or signals) indicating the status of, or values detected by, the one or more sensors. These sensor(s) may be provided proximate to components of the system 100 (or other computing systems discussed herein), including the cores 106, the interconnects 104 or 112, components outside of the processor 102, the SSD device 130, the SSD bus, the SATA bus, the logic 125, the logic 160, the logic 170, etc., to sense various factors (such as temperature, operating frequency, operating voltage, power consumption, and/or inter-core communication activity, etc.) that affect the power/thermal behavior of the system/platform.
FIG. 7 illustrates a block diagram of various components of the SSD device 130, according to an embodiment. As illustrated in FIG. 7, the logic 160 may be located in various locations, such as inside the SSD device 130 or the controller 382 (or inside the memory controller 120 or the memory 114), and may include techniques similar to those discussed in connection with FIG. 6. The SSD device 130 includes a controller 382 (which in turn includes one or more processor cores or processors 384 and memory controller logic 386), a cache 138, RAM 388, a firmware storage device 390, and one or more memory devices 392-1 to 392-N (collectively, memory 392, which may include 3D crosspoint or other types of non-volatile memory). The memory 392 is coupled to the memory controller logic 386 via one or more memory channels or buses. Also, the SSD device 130 communicates with the logic 125 via an interface (such as a SATA, SAS, PCIe, NVMe, etc., interface). The processors 384 and/or the controller 382 may compress/decompress data written to or read from the memory devices 392-1 to 392-N.
As illustrated in FIGS. 6 and 7, the SSD device 130 may include the logic 160, which may be in the same housing as the SSD device 130 and/or fully integrated on a printed circuit board (PCB) of the SSD device 130. The system 100 may include additional logic 170 outside of the SSD device 130. One or more of the features/aspects/operations discussed with reference to FIGS. 1-5 may be performed by one or more of the components of FIGS. 6 and/or 7. Also, one or more of the features/aspects/operations of FIGS. 1-5 may be programmed into the firmware 390. Further, the SSD controller logic 125 may also include the logic 160.
Advantageously, the logic 160 and/or the logic 170 may include technology to implement one or more aspects of the system 10 (FIG. 1), the device 15 (FIG. 2), the method 20 (FIGS. 3A to 3C), the environment 40 (FIG. 4), the system 50 (FIG. 5), and/or any of the features discussed herein. For example, the logic 170 may include technology to implement the host device/computer system/agent or discrete target device aspects of the various embodiments described herein, while the logic 160 may include technology to implement the storage/memory device aspects of the various embodiments described herein.
In particular, the logic 160 may be configured to: receive a current access request for the memory devices 392 associated with a stream; identify a prompt in the current access request, the prompt indicating one or more stream characteristics of future access requests from the stream; and process the current access request based on the indicated one or more stream characteristics of the future access requests from the stream. For example, the logic 160 may be configured to: determine one or more of a latency and a bandwidth requirement for the stream based on the prompt; determine a priority of the stream based on the prompt; and/or determine a granularity of access requests for the stream based on the prompt. In some embodiments, the logic 160 may be further configured to schedule the current access request based on the indicated one or more stream characteristics of future access requests from the stream. For example, the logic 160 may also be configured to combine the current access request with one or more of the future access requests from the stream.
In other embodiments, the SSD device 130 may be replaced with any suitable storage/memory technology/media. In some embodiments, the logic 160/170 may be coupled to one or more substrates (e.g., silicon, sapphire, gallium arsenide, a printed circuit board (PCB), etc.), and may include transistor channel regions that are positioned within the one or more substrates. In other embodiments, the SSD device 130 may include two or more types of storage media. For example, the bulk of the storage may be NAND, and the SSD device 130 may further include some faster, smaller-granularity accessible (e.g., byte-addressable) NVM, such as INTEL 3DXP media. The SSD device 130 may alternatively or additionally include persistent volatile memory (for example, battery- or capacitor-backed DRAM or SRAM). For example, the SSD device 130 may include power loss imminent (PLI) technology with energy storage capacitors. The energy storage capacitors may provide enough energy (power) to complete any in-progress commands and to make sure that any data in the DRAM/SRAM is committed to the non-volatile NAND media. The capacitors may act as backup batteries for the persistent volatile memory. As shown in FIG.
6, features or aspects of the logic 160 and/or the logic 170 may be distributed throughout the system 100, and/or juxtaposed/integrated with various components of the system 100.Additional notes and examplesExample 1 includes an electronic device that includes one or more substrates, and logic coupled to the one or more substrates, the logic is used to: receive a current access request for a storage medium associated with the stream; identify the current access A prompt in the request, the prompt indicating one or more flow characteristics of a future access request from the flow; and processing the current access request based on the indicated one or more flow characteristics of the future access request from the flow .Example 2 includes the apparatus of claim 1, wherein the logic is further to determine one or more of a delay and a bandwidth requirement for the flow based on the prompt.Example 3 includes the device of any one of claims 1 to 2, wherein the logic is further to: determine the priority of the flow based on the prompt.Example 4 includes the device of any one of claims 1 to 3, wherein the logic is further to: determine the granularity of the access request for the stream based on the prompt.Example 5 includes the apparatus of any one of claims 1 to 4, wherein the logic is further to: schedule current access requests based on the indicated one or more flow characteristics from future access requests from the flow.Example 6 includes the apparatus of claim 5, wherein the logic is further to: combine one or more of the current access request with future access requests from the stream.Example 7 includes the device of any one of claims 1 to 6, wherein the storage medium includes a solid state drive.Example 8 includes an electronic system that includes a storage medium and a controller communicatively coupled to the storage medium, the controller including logic to: receive current access to the storage medium associated with the stream Request; identifying a prompt in the current access request, the prompt indicating one or more stream characteristics of future access requests from the stream; and one or more stream characteristics based on the indicated future access request from the stream To handle the current access request.Example 9 includes the system of claim 8, wherein the logic is further to determine one or more of a delay and a bandwidth requirement for the flow based on the prompt.Example 10 includes the system of any one of claims 8 to 9, wherein the logic is further to: determine the priority of the flow based on the prompt.Example 11 includes the system of any one of claims 8 to 10, wherein the logic is further to: determine the granularity of the access request for the flow based on the prompt.Example 12 includes the system of any one of claims 8 to 11, wherein the logic is further to: schedule current access requests based on the indicated one or more flow characteristics from future access requests from the flow.Example 13 includes the system of claim 12, wherein the logic is further to: combine a current access request with one or more of future access requests from the stream.Example 14 includes the system of any one of claims 8 to 13, wherein the storage medium includes a solid state drive.Example 15 includes a method for controlling a storage device, including: receiving a current access request for a storage medium associated with a stream; identifying a prompt in the current access request, the prompt indicating a future access request from the stream And 
processing the current access request based on the indicated one or more flow characteristics of future access requests from the stream.Example 16 includes the method of claim 15, further comprising: determining one or more of a delay and a bandwidth requirement for the flow based on the prompt.Example 17 includes the method of any one of claims 15 to 16, further comprising: determining the priority of the flow based on the prompt.Example 18 includes the method of any one of claims 15 to 17, further comprising: determining the granularity of the access request for the stream based on the prompt.Example 19 includes the method of any one of claims 15 to 18, further comprising: scheduling a current access request based on the indicated one or more flow characteristics from future access requests from the flow.Example 20 includes the method of claim 19, further comprising: combining a current access request with one or more of future access requests from the stream.Example 21 includes the method of any one of claims 15 to 20, wherein the storage medium comprises a solid state drive.Example 22 includes at least one non-transitory type of machine-readable medium that includes a plurality of instructions that, in response to being executed on a computing device, cause the computing device to: receive current data for the storage medium associated with the stream Access request; identifying a prompt in the current access request, the prompt indicating one or more stream characteristics of future access requests from the stream; and one or more streams based on the indicated future access request from the stream Features to handle the current access request.Example 23 includes the at least one non-transitory machine-readable medium of claim 22, which includes a plurality of further instructions that, in response to being executed on a computing device, cause the computing device to: based on the prompt Determine one or more of the delay and bandwidth requirements for the flow.Example 24 includes the at least one non-transitory machine-readable medium of any one of claims 22 to 23, which includes a plurality of further instructions that cause the computing device to be executed in response to being executed on the computing device : Determine the priority of the stream based on the prompt.Example 25 includes at least one non-transitory machine-readable medium of any one of claims 22 to 24, which includes a plurality of further instructions that cause the computing device to be executed in response to being executed on the computing device : Determine the granularity of the access request for the stream based on the prompt.Example 26 includes at least one non-transitory machine-readable medium of any one of claims 22 to 25, which includes a plurality of further instructions that cause the computing device to be executed in response to being executed on the computing device : Schedule the current access request based on the indicated one or more flow characteristics from future access requests from the flow.Example 27 includes the at least one non-transitory machine-readable medium of claim 26, which includes a plurality of further instructions that, in response to being executed on a computing device, cause the computing device to: compare the current access request with One or more of the future access requests from the stream are combined.Example 28 includes the at least one non-transitory type of machine-readable medium according to any one of claims 22 to 27, wherein the storage medium includes a 
permanent storage medium.Example 29 includes a controller device including: means for receiving a current access request for a storage medium associated with a stream; means for identifying a prompt in the current access request, the prompt indicating from the stream One or more flow characteristics of the future access request; and means for processing the current access request based on the indicated one or more flow characteristics of the future access request from the flow.Example 30 includes the apparatus of claim 29, further comprising: means for determining one or more of a delay and a bandwidth requirement for the flow based on the prompt.Example 31 includes the apparatus of any one of claims 29 to 30, further comprising: means for determining the priority of the flow based on the prompt.Example 32 includes the apparatus of any one of claims 29 to 31, further comprising: means for determining the granularity of the access request for the stream based on the prompt.Example 33 includes the apparatus of any one of claims 29 to 32, further comprising: means for scheduling current access requests based on the indicated one or more flow characteristics of future access requests from the flow.Example 34 includes the apparatus of claim 33, further comprising: means for combining the current access request with one or more of the future access requests from the stream.Example 35 includes the device of any one of claims 29 to 34, wherein the storage medium comprises a permanent storage medium.The term "coupled" can be used herein to refer to any type of direct or indirect relationship between the discussed components, and can be applied to electrical, mechanical, fluid, optical, electromagnetic, electromechanical, or other connections. In addition, unless otherwise indicated, the terms "first", "second", etc. may be used herein only to facilitate discussion, and do not carry any specific time or chronological importance.As used in this application and the claims, a list of items connected by the term "one or more of" can mean any combination of the listed items. For example, the phrase "one or more of A, B, and C" and the phrase "one or more of A, B, or C" can both mean A; B; C; A and B; A and C ; B and C; or A, B and C. The various components of the system described herein may be implemented by software, firmware, and/or hardware and/or any combination thereof. For example, the various components of the systems or devices discussed herein may be provided at least in part by hardware such as computing SoCs that can be found in computing systems such as, for example, smart phones. Those skilled in the art may realize that the system described herein may include additional components that have not been depicted in the corresponding drawings. For example, the system discussed herein may include additional components that have not been depicted for clarity, such as a bitstream multiplexer or demultiplexer module, and so on.Although the implementation of the example process discussed herein may include undertaking all operations shown in the order illustrated, the present disclosure is not limited in this respect, and in various examples, the implementation of the example process herein Only a subset of the operations shown, operations performed in a different order than that illustrated, or additional operations may be included.In addition, any one or more operations discussed herein can be undertaken in response to instructions provided by one or more computer program products. 
Such a program product may include a signal-bearing medium that provides instructions that, when executed by, for example, a processor, can provide the functions described herein. The computer program product can be provided in any form of one or more machine-readable media. Thus, for example, a processor including one or more graphics processing units or processor core(s) can respond to program codes and/or instructions or instructions communicated to the processor by one or more machine-readable media. The instruction set assumes one or more blocks of the example process in this article. Generally speaking, a machine-readable medium can convey software in the form of program code and/or instructions or a set of instructions, and the program code and/or instructions or a set of instructions can make any of the devices and/or systems described herein At least part of the operation as discussed herein is implemented, and/or any part of the device, system, or any other module or component discussed herein.As used in any implementation described herein, the term "module" refers to any combination of software logic, firmware logic, hardware logic, and/or circuits configured to provide the functions described herein. Software can be embodied as a software package, code, and/or instruction set or instruction, and the "hardware" used in any implementation described herein can, for example, include hard-wired circuits, singly or in any combination, Programmable circuits, state machine circuits, fixed function circuits, execution unit circuits, and/or firmware that stores instructions executed by programmable circuits. These modules can be collectively or individually embodied as circuits that form part of a larger system, such as integrated circuits (ICs), system-on-chips (SoC), and so on.Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (for example, transistors, resistors, capacitors, inductors, etc.), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD) , Digital Signal Processor (DSP), Field Programmable Gate Array (FPGA), logic gates, registers, semiconductor devices, chips, microchips, chipsets, etc. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software Interface, application programming interface (API), instruction set, calculation code, computer code, code segment, computer code segment, word, value, symbol, or any combination thereof. Determining whether an embodiment uses hardware elements and/or software elements to implement can vary depending on any number of factors, such as desired calculation rate, power level, heat resistance, processing cycle budget, input data rate, output data rate, memory resources , Data bus speed and other design and performance constraints.One or more aspects of at least one embodiment can be implemented by representative instructions stored on a machine-readable medium representing various logics in a processor, and the machine-readable medium, when read by a machine, causes the machine to produce To implement the logic of the techniques described in this article. 
This representation, called an IP core, can be stored on a tangible machine-readable medium and supplied to various customers or manufacturing facilities to be loaded into the manufacturing machine that actually makes the logic or processor.Although certain features set forth herein have been described with reference to various implementations, this description is not intended to be understood in a restrictive sense. Therefore, various modifications of the implementations described herein and other implementations obvious to those skilled in the art to which the present disclosure relates are considered to be within the spirit and scope of the present disclosure.It will be appreciated that the embodiment is not limited to the embodiment thus described, but can be practiced through modification and alteration without departing from the scope of the appended claims. For example, the above-described embodiments may include specific combinations of features. However, the above-mentioned embodiments are not limited in this respect, and in various implementations, the above-mentioned embodiments may include: assuming only a subset of these features, assuming different orders of these features, assuming different combinations of these features, and/ Or assume additional features in addition to those explicitly listed. Therefore, the scope of the embodiments should be determined with reference to the appended claims and the full scope of equivalents entitled by such claims. |
A damascene process includes the deposition of a first layer of insulation over a substrate and the etching of a first hole in the first layer of insulation. The first hole is filled with a metal. A second layer of insulation is deposited over the first layer of insulation, and a second hole is etched in the second layer of insulation and over the first hole. An interface layer is provided over the metal and within the second hole. The interface layer is exposed to a nitrogen/hydrogen plasma to passivate the interface layer and reduce an ability of the interface layer to associate with oxygen.
What is claimed is: 1. A damascene process, comprising:depositing a first layer of insulation over a substrate; etching a first hole in said first layer of insulation; filling said first hole with a metal; depositing a second layer of insulation over said first layer of insulation; etching a second hole in said second layer of insulation and over said first hole; providing an interface layer over said metal and within said second hole; and exposing said interface layer to a substance selected from a group consisting of diborane, phosphine, carbon-silicon compounds, HCL, boron trichloride, and combinations thereof to passivate the interface layer and reduce an ability of the interface layer to associate with oxygen. 2. The damascene process of claim 1 wherein the carbon-silicon compounds are selected from a group consisting of methylsilane, hexamethyldisilane, hexamethyldisilazane, and combinations thereof. |
RELATED APPLICATIONThis application is a divisional of U.S. application Ser. No. 09/200,253, filed Nov. 25, 1998, now U.S. Pat. No. 6,303,972.TECHNICAL FIELDThe present invention relates generally to a method of protecting against a conductive layer incorporating oxygen and a device including that layer. More specifically, the present invention relates to an in situ treatment of tungsten nitride.BACKGROUND OF THE INVENTIONThere is a constant need in the semiconductor industry to increase the number of dies that can be produced per silicon wafer. This need, in turn, encourages the formation of smaller die. Accordingly, it would be beneficial to be able to form smaller structures and devices on each die without losing performance. For example, as capacitors are designed to take an ever decreasing amount of die space, those skilled in the relevant art have sought new materials with which to maintain or even increase capacitance despite the smaller size.One such material is tantalum pentoxide (Ta2O5), which can be used as the dielectric in the capacitor. Oftentimes, an electrically conductive layer, such as one made of hemispherical silicon grain (HSG), underlies the tantalum pentoxide and serves as the capacitor's bottom conductive plate. With other dielectrics, it is preferable to have a layer of polycrystalline silicon (polysilicon) deposited over the dielectric to serve as the capacitor's top conductive plate. If polysilicon is deposited directly onto tantalum pentoxide, however, several problems will occur. First, silicon may diffuse into the tantalum pentoxide, thus degrading it. Second, oxygen will migrate from the tantalum pentoxide, resulting in a capacitor that leaks charge too easily. Further, the oxygen migrates to the polysilicon, creating a layer of non-conductive oxide, which decreases the capacitance. This can also be a problem when using barium strontium titanate ((Ba, Sr)TiO3, or BST) as the dielectric.In order to avoid these problems, it is known to deposit a top plate comprising two conductive layers. Polysilicon serves as the upper layer of the plate, with a non-polysilicon conductive material interfacing between the tantalum pentoxide and polysilicon. One such material often used is tungsten nitride (WNX, wherein X is a number greater than zero). However, other problems arise with this process. Specifically, by the end of the capacitor formation process, a layer of non-conductive oxide often forms between the two conductive layers of the top plate. For ease of explanation, this non-conductive oxide will be assumed to be silicon dioxide (SiO2), although other non-conductive oxides, either alone or in combination, may be present.Without limiting the current invention, it is theorized that the tungsten nitride is exposed to an ambient containing oxygen. The tungsten nitride adsorbs this oxygen due to bonds located on the grain boundaries of the tungsten nitride surface. Once the polysilicon layer is deposited, the device is then exposed to a thermal process. For example, the capacitor may be blanketed with an insulator, such as borophosphosilicate glass (BPSG). The BPSG layer may not be planar, especially if it is used to fill a trench in which the capacitor is constructed. Heat is applied to the die to cause the BPSG to reflow and thereby planarize. 
The heat can cause the oxygen at the tungsten nitride surface to diffuse into the polysilicon, wherein the oxygen and silicon react to form silicon dioxide.Regardless of the exact manner in which the silicon dioxide layer is formed, the result is that the HSG/Ta2O5/WNX/SiO2/polysilicon layers form a pair of capacitors coupled in series, wherein the HSG/Ta2O5/WNX layers serve as one capacitor and the WNX/SiO2/polysilicon layers serve as the second capacitor in the series. This pair of capacitors has less capacitance combined than the single HSG/Ta2O5/WNX/polysilicon capacitor that was intended to be formed.Other problems can occur with the association of WNX and Ta2O5. For example, it is possible for the WNX to serve as the bottom plate of a capacitor, underlying the Ta2O5 dielectric. In that case, the deposition of the Ta2O5 or a subsequent reoxidation of that layer may cause the WNX layer to incorporate oxygen, thereby reducing capacitance.It should be further noted that capacitor formation is not the only circumstance in which such problems can occur. There are many situations in which an in-process multi-layer conductive structure is exposed to oxygen and is subjected to conditions that encourage oxidation. Another example can be seen in the formation of metal lines. A layer of tungsten nitride, or perhaps tantalum nitride, may serve as an interface between the conductive material of a via and the metal line. If the interface is exposed to an ambient containing oxygen, then a thermal process involving the alloying or flowing of the metal in the metal line could cause a similar problem with oxidation, thereby hindering electrical contact.As a result, there is a specific need in the art to prevent or at least decrease the degradation of capacitance in capacitors and of electrical communication in metal lines. There is also a more general need to prevent or at least protect against or minimize the migration of oxygen in relation to a conductive layer of a semiconductor device.SUMMARY OF THE INVENTIONAccordingly, the current invention provides a method for protecting a conductive layer from oxygen. At least one exemplary embodiment concerns preventing or at least limiting a first conductive layer from incorporating oxygen beneath the layer's surface. Other exemplary embodiments address methods of limiting the first conductive layer's ability to adsorb oxygen. In doing so, such embodiments can help prevent the diffusion of oxygen into a second conductive layer, thereby protecting against oxidation between conductive layers. One such method serving as an exemplary embodiment involves exposing one of the conductive layers to an N2/H2 plasma before another conductive layer is provided thereon. In a preferred embodiment, this step is performed in situ relative to the environment or ambient atmosphere in which the one conductive layer was provided.Other exemplary embodiments include the use of other nitrogen-containing plasmas, as well as the use of nitrogen-containing gases that are not in plasma form. Still other exemplary embodiments use gases that do not contain nitrogen.Further, alternate embodiments protect against oxidation between conductive layers with a step performed ex situ relative to the environment or ambient atmosphere in which the one conductive layer was provided. 
In one specific exemplary embodiment of this type, silane gas is flowed over the one conductive layer.In preferred exemplary embodiments, at least one of the processes described above is performed on a conductive material that has the ability to adsorb or otherwise associate with oxygen. In a more specific embodiment, this material is a non-polysilicon material. Still more specific exemplary embodiments perform one of the processes on tungsten nitride or on tantalum nitride. In an even more specific exemplary embodiment, a tungsten nitride layer is treated before providing a polysilicon layer thereover.In yet another exemplary embodiment, a treatment such as the ones described above occurs in the context of capacitor formation and, more specifically, occurs in between depositing two conductive layers serving as the capacitor's top plate. In another exemplary embodiment, the treatment occurs between depositing the bottom plate and the dielectric of a capacitor. In yet another exemplary embodiment involves treating a conductive layer as part of the formation of a conductive line.In preferred embodiments, the method completely prevents the formation of the oxidation layer, although other exemplary embodiments allow for the restriction of the oxidation layer. In some embodiments, this oxidation layer is less than 10 angstroms thick. These methods also apply to embodiments concerning limiting a first conductive layer from incorporating oxygen beneath the layer's surface. In addition, the current invention also includes apparatus embodiments exhibiting these characteristics.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 depicts an in-process device as known in the prior art.FIG. 2 depicts an in-process device having undergone an additional step known in the prior art.FIG. 3 depicts an in-process device having undergone yet more steps known in the prior art.FIG. 4 depicts one exemplary embodiment of the current invention.FIG. 5 depicts a second exemplary embodiment of the current invention.FIG. 6 depicts an in-process device as known in the prior art.FIG. 7 depicts another in-process device as known in the prior art.FIG. 8 depicts the in-process device in FIG. 7 having undergone an additional step known in the prior art.FIG. 9 depicts a third exemplary embodiment of the current invention.FIG. 10 depicts a fourth exemplary embodiment of the current invention.DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTFIG. 1 depicts an "in-process" device 20-one that is in the process of being constructed-having undergone processes known in the art. First, a substrate 22 has been provided. In the current application, the term "substrate" or "semiconductor substrate" will be understood to mean any construction comprising semiconductor material, including but not limited to bulk semiconductive materials such as a semiconductor wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). Further, the term "substrate" also refers to any supporting structure including, but not limited to, the semiconductive substrates described above. Over the substrate 22, a first conductive layer 24 is provided. It is assumed for purposes of explanation only that the in-process device is a capacitor in the process of being built. Accordingly, the first conductive layer 24 serves as one of the capacitor's conductive plates 25 (see FIG. 2) and may be made of HSG. Returning to FIG. 
1, a dielectric 26 is provided which, in this case, is tantalum pentoxide. Subsequently, a second conductive layer is provided, which is intended to serve as part of the other conductive plate for the capacitor. Because the dielectric 26 is tantalum pentoxide, the second conductive layer should not be polysilicon. Rather, in this case, the second conductive layer is assumed to be a tungsten nitride layer 28. Once the tungsten nitride layer 28 is provided, however, there may be a tendency for oxygen to be adsorbed onto the surface of that layer 28.

Further, this adsorption may occur before a third conductive layer is provided. This layer can be a polysilicon layer 30 illustrated in FIG. 2. Ideally, the tungsten nitride layer 28 and the polysilicon layer 30 define the other conductive plate 32.

However, if the third conductive layer is oxidizable, then further process steps may cause other results. For example, as seen in FIG. 3, a subsequent thermal process may cause a reaction between the polysilicon layer 30 and the oxygen that had been adsorbed onto the surface of the tungsten nitride layer 28. In building a capacitor, this thermal process can be the reflowing of a BPSG layer 34 that is deposited over the polysilicon layer 30. The heat may cause the formation of a silicon dioxide layer 36 between the tungsten nitride layer 28 and the polysilicon layer 30, essentially creating two capacitors 38 and 40 connected in series and having less combined capacitance than the one capacitor originally intended.

One preferred exemplary embodiment of the current invention is a method for protecting against the formation of the silicon dioxide layer 36 during the formation of the capacitor. Once the prior art steps depicted in FIG. 1 are carried out, this exemplary embodiment has the tungsten nitride layer 28 exposed in situ to an N2 and H2 plasma. The term in situ indicates that the plasma process takes place in the same chamber, or at least within the same general atmosphere, as the process used to provide the tungsten nitride layer. At the very least, the term in situ indicates that the plasma process takes place before exposing the in-process device 20 to the atmosphere associated with providing the polysilicon layer 30. Exemplary process parameters include a temperature ranging from about 150 to about 600 degrees Celsius; gas flows including H2 at about 50 to about 2000 sccm, N2 at about 5 to about 1000 sccm, and Ar at about 200 to about 2000 sccm; a radio frequency (RF) power ranging from about 50 to about 1000 W; a pressure ranging from about 1 millitorr to about 10 torr; and a process time ranging from about 10 seconds to about 240 seconds. One of ordinary skill in the art, however, can appreciate that these parameters can be altered to achieve the same or a similar process.

Without limiting the current invention, it is theorized that this treatment stuffs the tungsten nitride grain boundaries with nitrogen or otherwise passivates the layer, thereby making the bonds at the grain boundaries less active. As a result, oxygen will be less likely to be adsorbed or otherwise become associated with the tungsten nitride layer, if at all. For example, without this treatment, a silicon dioxide layer 36 about 10 to 40 angstroms thick will form between the tungsten nitride layer 28 and the polysilicon layer 30 (see FIG. 3). The exemplary process described above can result in a silicon dioxide layer 36 that is less than 10 angstroms thick, as seen in FIG. 4, and is preferably nonexistent, as illustrated in FIG.
5.

Moreover, the current invention is not limited to the process described above. There are other methods of providing nitrogen to the tungsten nitride that are within the scope of this invention. For example, another such plasma treatment involves the use of ammonia (NH3) in place of the nitrogen and hydrogen. In using ammonia for the plasma, parameters such as the ones previously described can be used, except that it is preferred to have a flow rate of ammonia ranging from about 5 sccm to about 1000 sccm and a process time of up to 500 seconds. Yet another embodiment includes a plasma treatment using N2 without H2. In that case, the exemplary process parameters are generally the same as those used with the N2/H2 plasma except that the flow rate of N2 is 50-2000 sccm.

Alternatively, ultraviolet light could be provided in place of RF energy. For example, in using N2 and H2 or in using NH3, the process parameters would be similar to the ones described above for those gases, except the RF energy would be replaced with UV light at a power ranging from 50 W to 3 kW.

Further, the current invention also includes within its scope other methods of providing nitrogen without using electromagnetic energy to affect the gas. One such exemplary embodiment still involves introducing ammonia gas into the process chamber at the same flow rate and time as mentioned in the previous ammonia example, but at a pressure ranging from about 50 millitorr to about 1 atmosphere (760 torr).

In addition, the current invention is not limited to providing nitrogen to the tungsten nitride. Other gases may provide a reducer, a passivator material, or some non-oxygen stuffing agent to the tungsten nitride surface, or otherwise cause the tungsten nitride to associate with an oxygen-free material. A plasma treatment using H2 without N2 serves as one such embodiment. Exemplary parameters include a temperature ranging from about 150 to about 600 degrees Celsius; gas flows including H2 at about 50 to about 2000 sccm and Ar at about 200 to about 2000 sccm; an RF power ranging from about 50 to about 1000 W; a pressure ranging from about 1 millitorr to about 10 torr; and a process time ranging from about 10 seconds to about 240 seconds.

Still other gases include diborane (B2H6); phosphine (PH3); carbon-silicon compounds such as methylsilane (CH3SiH3) and hexamethyldisilane ((CH3)3Si-Si(CH3)3); and hexamethyldisilazane (HMDS). Additional alternate embodiments of the current invention use hydrazine (N2H4), monomethylhydrazine, carbon tetrafluoride (CF4), CHF3, HCl, and boron trichloride (BCl3), which are also useful in passivating dielectrics, as addressed in copending application 09/114,847, now issued as U.S. Pat. No. 6,201,276 B1. Also included are mixtures of any of the gases or types of gases described above. Exemplary non-plasma process parameters using these other gases include a flow rate of about 2 sccm to about 400 sccm for these gases; a flow rate of about 50 sccm to about 100 sccm for an inert carrier gas such as He or Ar; a temperature ranging from about 150 to about 600 degrees Celsius; a pressure ranging from about 50 millitorr to about 1 atmosphere (760 torr); and a process time ranging from about 50 to about 500 seconds. Again, one skilled in the art is aware that these parameters can be altered to achieve the same or a similar process.

It is preferred that at least one of the processes described above occur between providing the tungsten nitride layer 28 and providing the polysilicon layer 30.
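This ordering preference can be summarized schematically. The following is a minimal sketch, assuming hypothetical step names that are not taken from the disclosure, of treating the tungsten nitride surface after it is provided and before the overlying layer is provided:

```python
# A minimal sketch of the preferred ordering, using hypothetical step names.
# The surface treatment follows the tungsten nitride deposition and precedes
# the overlying polysilicon (or, in the bottom-plate case, the Ta2O5 dielectric).

def build_top_plate(treatment="N2/H2 plasma"):
    steps = [
        "provide tungsten nitride layer 28",
        f"treat surface in situ ({treatment}) before any exposure to oxygen",
        "provide polysilicon layer 30",  # no interposing silicon dioxide 36 should now form
    ]
    return steps

for step in build_top_plate():
    print(step)
```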
It is more preferable that one of the inventive processes be carried out in a reducing atmosphere or at least before the tungsten nitride layer 28 is exposed to oxygen. Though such exposure is undesirable in many circumstances, it may be unavoidable. For example, the tungsten nitride layer 28 may be exposed to the cleanroom air at some point during processing. Thus, it is even more preferable to treat the tungsten nitride layer 28 in situ relative to the environment or ambient atmosphere used to provide the tungsten nitride layer 28. It is still more preferable to cover the treated tungsten nitride layer 28 before the in-process device 20 is exposed, even unintentionally, to oxygen. This is preferable because any exposure may allow at least some oxygen to associate with the tungsten nitride layer 28, even after one of the inventive treatments disclosed herein. Nevertheless, it is not necessary under the current invention to discourage oxygen adsorption before exposing the in-process device to the atmosphere associated with providing the polysilicon layer 30. If the in-process capacitor 20 is removed from the environment used to provide the tungsten nitride layer 28 and one of the inventive processes described has not been performed, then another option within the scope of the current invention is to expose the tungsten nitride layer 28 to a reducing atmosphere before providing the polysilicon layer 30. This can be done by flowing silane gas (SiH4) into the environment of the in-process device 20. Process parameters include a silane flow ranging from 50 to 1,000 sccm, a pressure of 10 torr to 1 atmosphere, a temperature ranging from 300 to 700 degrees Celsius, and a process time ranging from 10 to 300 seconds. Moreover, this silane treatment, if chosen, is not limited to ex situ situations. Silane gas may be used in place of or in combination with the in situ treatments described herein. Accordingly, any combination of the individual processes covered by the current invention is also within its scope.

As mentioned in the background section, oxygen diffusing away from the tungsten nitride is not the only concern when using that layer along with tantalum pentoxide. As seen in FIG. 6, a tungsten nitride layer 128 is deposited over the substrate 122. A dielectric layer 126, assumed to be tantalum pentoxide, is deposited over the tungsten nitride layer 128. Assuming the in-process device of FIG. 6 represents the early stage of a capacitor, the tungsten nitride layer 128 will serve as the bottom plate rather than part of the top plate as depicted in previous figures. The process of depositing the tantalum pentoxide dielectric layer 126 may cause the tungsten nitride layer 128 to incorporate oxygen. In addition, further processing, such as a reoxidation of the tantalum pentoxide dielectric layer 126, may cause the tungsten nitride layer 128 to incorporate still more oxygen. This incorporation of oxygen will reduce the capacitance of the finished device. Under these circumstances, a preferred embodiment of the current invention calls for exposing the tungsten nitride layer 128 to an N2/H2 plasma before depositing the tantalum pentoxide dielectric layer 126. This plasma is created under the parameters already disclosed above.
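For convenience, the exemplary parameter ranges disclosed above can be gathered into a single reference structure. The sketch below is a hypothetical summary only; the dictionary, key names, and helper function are not part of the disclosure, and ranges are written as approximate (min, max) tuples with pressures in torr:

```python
# Hypothetical summary of the exemplary parameter ranges disclosed above.
# Each value is an approximate (min, max) range; pressures are in torr.
TREATMENT_RECIPES = {
    "N2/H2 plasma (in situ preferred)": {
        "temperature_C": (150, 600),
        "H2_sccm": (50, 2000),
        "N2_sccm": (5, 1000),
        "Ar_sccm": (200, 2000),
        "rf_power_W": (50, 1000),
        "pressure_torr": (0.001, 10),
        "time_s": (10, 240),
    },
    "NH3 plasma": {
        "temperature_C": (150, 600),
        "NH3_sccm": (5, 1000),
        "Ar_sccm": (200, 2000),
        "rf_power_W": (50, 1000),
        "pressure_torr": (0.001, 10),
        "time_s": (10, 500),   # "up to 500 seconds"; lower bound assumed
    },
    "SiH4 reducing ambient (ex situ option)": {
        "temperature_C": (300, 700),
        "SiH4_sccm": (50, 1000),
        "pressure_torr": (10, 760),
        "time_s": (10, 300),
    },
}

def within_disclosed_ranges(recipe, setting):
    """Check a proposed process setting against a recipe's disclosed ranges."""
    ranges = TREATMENT_RECIPES[recipe]
    return all(ranges[k][0] <= v <= ranges[k][1] for k, v in setting.items())

print(within_disclosed_ranges("NH3 plasma", {"temperature_C": 400, "NH3_sccm": 100}))
```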
Although using an N2 and H2 plasma is preferred, the alternatives presented earlier (such as a non-plasma process, the use of another nitrogen-containing gas, or the use of a nitrogen-free gas) may also be used under these circumstances, and such alternatives fall within the scope of the invention. Further, it is not required to use tungsten nitride and tantalum pentoxide as the two layers, as embodiments of the current invention will work on other conductive layers and dielectric layers as well.

Thus, embodiments of the current invention protect against a conductive layer associating with oxygen in at least two circumstances. First, where a dielectric is deposited over a conductive layer, the disclosed methods help prevent oxygen from being incorporated within the conductive layer. Second, when a second conductive layer is deposited over the initial conductive layer, the disclosed methods inhibit oxygen from being incorporated by the second conductive layer and forming an oxide.

It should be further noted that embodiments of the current invention are not limited to the circumstances related to the formation of capacitors. As further mentioned in the background section, a similar risk of oxidation between two conductive materials can occur during the formation of metal lines in a semiconductor device. As seen in FIG. 7, insulation 42 has been deposited over the substrate 22 and subsequently etched to define a via 44. The via is filled with a conductive material, such as polysilicon, tungsten, copper, or aluminum. In this configuration, the conductive material may be referred to as a "plug" 46. The plug 46 will allow electrical communication between the underlying substrate 22, which may be doped to serve as part of a transistor, and the overlying line material 48. The line material 48 may be copper or some other conductive material, including an alloy. The line material 48 is often deposited within a container 50, also defined by etching insulation 42. (One skilled in the art can appreciate that different layers of insulation may define the via 44 and the container 50.)

As a part of this process, it may also be preferred to include an interposing layer 52 between the line material 48 and the plug 46. For purposes of explaining the current invention, it is assumed that the interposing layer 52 comprises tungsten nitride. This interposing layer 52 may enhance electrical contact between the line material 48 and the plug 46, promote adhesion of the line material 48 within the container 50, prevent or slow the diffusion of material across its boundaries, or serve some other purpose.

Regardless of the intended or inherent purpose, this interposing layer may adsorb oxygen after it is formed. Moreover, there may be thermal processes involved with or occurring subsequent to providing the line material 48. Such a thermal process could be used to deposit, flow, or alloy the line material 48. As a result of this or any other thermal process, it is believed that the oxygen adsorbed by the tungsten nitride interposing layer 52 will react with the line material 48, thereby forming an oxide layer 54 between the interposing layer 52 and the line material 48 (FIG. 8). This oxide layer 54, being an insulator, will hinder electrical communication between the line material 48 and the plug 46. Accordingly, the exemplary methods described above may be used to reduce the oxide layer 54 to a thickness of less than 10 angstroms and preferably down to 0 angstroms, as seen respectively in FIGS.
9 and 10.

One skilled in the art can appreciate that, although specific embodiments of this invention have been described for purposes of illustration, various modifications can be made without departing from the spirit and scope of the invention. For example, it is not necessary to use an exemplary treatment of the current invention on a tungsten nitride layer. The invention's embodiments will also be effective on tantalum nitride surfaces, as well as other surfaces that may adsorb or otherwise associate or interact with oxygen.

Further, it should be noted that the general process described above for providing a metal line could be considered a damascene process, wherein a hole in insulation is filled with metal. This type of process is contrasted with processes wherein a continuous layer of metal is etched to a desired configuration and then surrounded with insulation. More specifically, the metal line process described above is an example of a dual damascene process wherein a second insulator 43 (see FIG. 10) is formed over the first insulator 42 as part of forming the container 50, as will be understood by those skilled in the art. It follows, then, that the current invention may be applied in any type of damascene process. Moreover, one skilled in the art will now be able to appreciate that exemplary methods embodying the current invention apply to any situation involving the prevention, minimization, or change in a factor affecting the association of oxygen with a conductive layer. As a result, the current invention also includes within its scope devices that comprise two conductive layers and a minimal amount of oxide, if any, therebetween. Accordingly, the invention is not limited except as stated in the claims. |
A method of displaying graphical user interface objects is disclosed and may include displaying a GUI object menu on a display and displaying a wrinkled portion at at least one end of the GUI object menu. The wrinkle indicator may indicate that one or more GUI objects are available off screen at an edge of the display adjacent to the wrinkle indicator. |
CLAIMS What is claimed is: 1. A method of displaying graphical user interface objects, the method comprising: displaying a GUI object menu on a display; and displaying a wrinkled portion at at least one end of the GUI object menu. 2. The method of claim 1, wherein the wrinkle indicator indicates one or more GUI objects are available off screen at an edge of the display adjacent to the wrinkle indicator. 3. The method of claim 2, further comprising: determining whether the GUI object menu is moved. 4. The method of claim 3, further comprising: determining a direction of motion if the GUI object menu is moved, wherein the direction of motion comprises a first direction and a second direction opposite the first direction. 5. The method of claim 4, further comprising: determining whether one or more GUI objects are available off screen in the second direction, when the GUI object menu is moved in the first direction. 6. The method of claim 5, further comprising: displaying an unwrinkled portion on an end of the GUI object menu corresponding to the second direction, when one or more GUI objects are not available off screen in the second direction. 7. The method of claim 6, further comprising: displaying an unwrinkled, stretched portion on the end of the GUI object menu corresponding to the second direction, when an attempt to continue to move the GUI object menu in the first direction is made. 8. The method of claim 5, further comprising: designating a GUI object nearest an edge of the display corresponding to the first direction as an exiting GUI object, when one or more GUI objects are available off screen to the second direction. 9. The method of claim 8, further comprising: monitoring a location of the exiting GUI object. 10. The method of claim 9, further comprising: wrinkling the end of the GUI object menu corresponding to the first direction when the exiting GUI object is within a first predetermined distance of an edge of the display corresponding to the first direction. 11. The method of claim 10, further comprising: collapsing the exiting GUI object when the exiting GUI object is within a second predetermined distance of the side of the display corresponding to the first direction. 12. The method of claim 11, further comprising: unwrinkling the end of the GUI object menu corresponding to the second direction. 13. The method of claim 12, further comprising: designating a GUI object off screen in the second direction as an entering GUI object. 14. The method of claim 13, further comprising: expanding the entering GUI object into the GUI object menu. 15. A wireless device, comprising: means for displaying a GUI object menu on a display; and means for displaying a wrinkled portion at at least one end of the GUI object menu. 16. The wireless device of claim 15, wherein the wrinkle indicator indicates one or more GUI objects are available off screen at an edge of the display adjacent to the wrinkle indicator. 17. The wireless device of claim 16, further comprising: means for determining whether the GUI object menu is moved. 18. The wireless device of claim 17, further comprising: means for determining a direction of motion if the GUI object menu is moved, wherein the direction of motion comprises a first direction and a second direction opposite the first direction. 19. The wireless device of claim 18, further comprising: means for determining whether one or more GUI objects are available off screen in the second direction, when the GUI object menu is moved in the first direction. 20. 
The wireless device of claim 19, further comprising: means for displaying an unwrinkled portion on an end of the GUI object menu corresponding to the second direction, when one or more GUI objects are not available off screen in the second direction. 21. The wireless device of claim 20, further comprising: means for displaying an unwrinkled, stretched portion on the end of the GUI object menu corresponding to the second direction, when an attempt to continue to move the GUI object menu in the first direction is made. 22. The wireless device of claim 19, further comprising: means for designating a GUI object nearest an edge of the display corresponding to the first direction as an exiting GUI object, when one or more GUI objects are available off screen to the second direction. 23. The wireless device of claim 22, further comprising: means for monitoring a location of the exiting GUI object. 24. The wireless device of claim 23, further comprising: means for wrinkling the end of the GUI object menu corresponding to the first direction when the exiting GUI object is within a first predetermined distance of an edge of the display corresponding to the first direction. 25. The wireless device of claim 24, further comprising: means for collapsing the exiting GUI object when the exiting GUI object is within a second predetermined distance of the side of the display corresponding to the first direction. 26. The wireless device of claim 25, further comprising: means for unwrinkling the end of the GUI object menu corresponding to the second direction. 27. The wireless device of claim 26, further comprising: means for designating a GUI object off screen in the second direction as an entering GUI object. 28. The wireless device of claim 27, further comprising: means for expanding the entering GUI object into the GUI object menu. 29. A wireless device, comprising: a processor, wherein the processor is operable to: display a GUI object menu on a display; and display a wrinkled portion at at least one end of the GUI object menu. 30. The wireless device of claim 29, wherein the wrinkle indicator indicates one or more GUI objects are available off screen at an edge of the display adjacent to the wrinkle indicator. 31. The wireless device of claim 30, wherein the processor is further operable to: determine whether the GUI object menu is moved. 32. The wireless device of claim 31, wherein the processor is further operable to: determine a direction of motion if the GUI object menu is moved, wherein the direction of motion comprises a first direction and a second direction opposite the first direction. 33. The wireless device of claim 32, wherein the processor is further operable to: determine whether one or more GUI objects are available off screen in the second direction, when the GUI object menu is moved in the first direction. 34. The wireless device of claim 33, wherein the processor is further operable to: display an unwrinkled portion on an end of the GUI object menu corresponding to the second direction, when one or more GUI objects are not available off screen in the second direction. 35. The wireless device of claim 34, wherein the processor is further operable to: display an unwrinkled, stretched portion on the end of the GUI object menu corresponding to the second direction, when an attempt to continue to move the GUI object menu in the first direction is made. 36. 
The wireless device of claim 33, wherein the processor is further operable to: designate a GUI object nearest an edge of the display corresponding to the first direction as an exiting GUI object, when one or more GUI objects are available off screen to the second direction. 37. The wireless device of claim 36, wherein the processor is further operable to: monitor a location of the exiting GUI object. 38. The wireless device of claim 37, wherein the processor is further operable to: wrinkle the end of the GUI object menu corresponding to the first direction when the exiting GUI object is within a first predetermined distance of an edge of the display corresponding to the first direction. 39. The wireless device of claim 38, wherein the processor is further operable to: collapse the exiting GUI object when the exiting GUI object is within a second predetermined distance of the side of the display corresponding to the first direction. 40. The wireless device of claim 39, wherein the processor is further operable to: unwrinkle the end of the GUI object menu corresponding to the second direction. 41. The wireless device of claim 40, wherein the processor is further operable to: designate a GUI object off screen in the second direction as an entering GUI object. 42. The wireless device of claim 41, wherein the processor is further operable to: expand the entering GUI object into the GUI object menu. 43. A computer program product, comprising: at least one instruction for displaying a GUI object menu on a display; and at least one instruction for displaying a wrinkled portion at at least one end of the GUI object menu. 44. The computer program product of claim 43, wherein the wrinkle indicator indicates one or more GUI objects are available off screen at an edge of the display adjacent to the wrinkle indicator. 45. The computer program product of claim 44, further comprising: at least one instruction for determining whether the GUI object menu is moved. 46. The computer program product of claim 45, further comprising: at least one instruction for determining a direction of motion if the GUI object menu is moved, wherein the direction of motion comprises a first direction and a second direction opposite the first direction. 47. The computer program product of claim 46, further comprising: at least one instruction for determining whether one or more GUI objects are available off screen in the second direction, when the GUI object menu is moved in the first direction. 48. The computer program product of claim 47, further comprising: at least one instruction for displaying an unwrinkled portion on an end of the GUI object menu corresponding to the second direction, when one or more GUI objects are not available off screen in the second direction. 49. The computer program product of claim 48, further comprising: at least one instruction for displaying an unwrinkled, stretched portion on the end of the GUI object menu corresponding to the second direction, when an attempt to continue to move the GUI object menu in the first direction is made. 50. The computer program product of claim 47, further comprising: at least one instruction for designating a GUI object nearest an edge of the display corresponding to the first direction as an exiting GUI object, when one or more GUI objects are available off screen to the second direction. 51. The computer program product of claim 50, further comprising: at least one instruction for monitoring a location of the exiting GUI object. 52. 
The computer program product of claim 51, further comprising: at least one instruction for wrinkling the end of the GUI object menu corresponding to the first direction when the exiting GUI object is within a first predetermined distance of an edge of the display corresponding to the first direction. 53. The computer program product of claim 52, further comprising: at least one instruction for collapsing the exiting GUI object when the exiting GUI object is within a second predetermined distance of the side of the display corresponding to the first direction. 54. The computer program product of claim 53, further comprising: at least one instruction for unwrinkling the end of the GUI object menu corresponding to the second direction. 55. The computer program product of claim 54, further comprising: at least one instruction for designating a GUI object off screen in the second direction as an entering GUI object. 56. The computer program product of claim 55, further comprising: at least one instruction for expanding the entering GUI object into the GUI object menu. |
SYSTEM AND METHOD OF DISPLAYING GRAPHICAL USER INTERFACE OBJECTS RELATED APPLICATIONS [0001] The present application claims priority to and incorporates by reference U.S. Provisional Patent Application Serial Number 61/312,117, entitled SYSTEM AND METHOD OF DISPLAYING GRAPHICAL USER INTERFACE OBJECTS, filed on March 9, 2010. DESCRIPTION OF THE RELATED ART [0002] Portable computing devices (PCDs) are ubiquitous. These devices may include cellular telephones, portable digital assistants (PDAs), portable game consoles, palmtop computers, and other portable electronic devices. Many portable computing devices include a touch screen interface through which a user may interact with the device and input commands. Further, the touch screen interface may be used to display multiple items, e.g., application icons, thumbnails, tiles, or a combination thereof. Many displays include scrolling functionality as a way to navigate through the items and locate specific items. Oftentimes, the scrolling functionality may be cumbersome and difficult to use. [0003] Accordingly, what is needed is an improved method of displaying graphical user interface objects on a touchscreen user interface. BRIEF DESCRIPTION OF THE DRAWINGS [0004] In the figures, like reference numerals refer to like parts throughout the various views unless otherwise indicated. [0005] FIG. 1 is a front plan view of a first aspect of a portable computing device (PCD) in a closed position; [0006] FIG. 2 is a front plan view of the first aspect of a PCD in an open position; [0007] FIG. 3 is a block diagram of a second aspect of a PCD; [0008] FIG. 4 is a first portion of a flowchart illustrating a method of displaying graphical user interface objects; [0009] FIG. 5 is a second portion of a flowchart illustrating a method of displaying graphical user interface objects; [0010] FIG. 6 is a third portion of a flowchart illustrating a method of displaying graphical user interface objects; [0011] FIG. 7 is a fourth portion of a flowchart illustrating a method of displaying graphical user interface objects; [0012] FIG. 8 is a fifth portion of a flowchart illustrating a method of displaying graphical user interface objects; [0013] FIG. 9 is a sixth portion of a flowchart illustrating a method of displaying graphical user interface objects; [0014] FIG. 10 is a seventh portion of a flowchart illustrating a method of displaying graphical user interface objects; [0015] FIG. 11 is an eighth portion of a flowchart illustrating a method of displaying graphical user interface objects; [0016] FIG. 12 is a first plan view of a graphical user interface object menu; [0017] FIG. 13 is a second plan view of the graphical user interface object menu; [0018] FIG. 14 is a first detailed view of a graphical user interface object menu; [0019] FIG. 15 is a third plan view of the graphical user interface object menu; [0020] FIG. 16 is a second detailed view of a graphical user interface object menu; [0021] FIG. 17 is a fourth plan view of the graphical user interface object menu; [0022] FIG. 18 is a third detailed view of a graphical user interface object menu; [0023] FIG. 19 is a fifth plan view of the graphical user interface object menu; [0024] FIG. 20 is a fourth detailed view of a graphical user interface object menu; [0025] FIG. 21 is a sixth plan view of the graphical user interface object menu; [0026] FIG. 22 is a fifth detailed view of a graphical user interface object menu; [0027] FIG. 23 is a seventh plan view of the graphical user interface object menu; [0028] FIG. 
24 is an eighth plan view of the graphical user interface object menu; [0029] FIG. 25 is a ninth plan view of the graphical user interface object menu; [0030] FIG. 26 is a tenth plan view of the graphical user interface object menu; [0031] FIG. 27 is an eleventh plan view of the graphical user interface object menu; [0032] FIG. 28 is a twelfth plan view of the graphical user interface object menu; [0033] FIG. 29 is a thirteenth plan view of the graphical user interface object menu; and [0034] FIG. 30 is a fourteenth plan view of the graphical user interface object menu. DETAILED DESCRIPTION [0035] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. [0036] In this description, the term "application" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed. [0037] The term "content" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, "content" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed. [0038] As used in this description, the terms "component," "database," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal). [0039] Referring initially to FIG. 1 and FIG. 2, an exemplary portable computing device (PCD) is shown and is generally designated 100. As shown, the PCD 100 may include a housing 102. The housing 102 may include an upper housing portion 104 and a lower housing portion 106. FIG. 1 shows that the upper housing portion 104 may include a display 108. In a particular aspect, the display 108 may be a touch screen display. The upper housing portion 104 may also include a trackball input device 110. Further, as shown in FIG. 1, the upper housing portion 104 may include a power on button 112 and a power off button 114. As shown in FIG. 1, the upper housing portion 104 of the PCD 100 may include a plurality of indicator lights 116 and a speaker 118. 
Each indicator light 116 may be a light emitting diode (LED). [0040] In a particular aspect, as depicted in FIG. 2, the upper housing portion 104 is movable relative to the lower housing portion 106. Specifically, the upper housing portion 104 may be slidable relative to the lower housing portion 106. As shown in FIG. 2, the lower housing portion 106 may include a multi-button keyboard 120. In a particular aspect, the multi-button keyboard 120 may be a standard QWERTY keyboard. The multi-button keyboard 120 may be revealed when the upper housing portion 104 is moved relative to the lower housing portion 106. FIG. 2 further illustrates that the PCD 100 may include a reset button 122 on the lower housing portion 106. [0041] Referring to FIG. 3, an exemplary, non-limiting aspect of a portable computing device (PCD) is shown and is generally designated 320. As shown, the PCD 320 includes an on-chip system 322 that includes a digital signal processor 324 and an analog signal processor 326 that are coupled together. The on-chip system 322 may include more than two processors. For example, the on-chip system 322 may include four core processors and an ARM 11 processor, i.e., as described below in conjunction with FIG. 32. [0042] As illustrated in FIG. 3, a display controller 328 and a touch screen controller 330 are coupled to the digital signal processor 324. In turn, a touch screen display 332 external to the on-chip system 322 is coupled to the display controller 328 and the touch screen controller 330. [0043] FIG. 3 further indicates that a video encoder 334, e.g., a phase alternating line (PAL) encoder, a sequential couleur a memoire (SECAM) encoder, or a national television system(s) committee (NTSC) encoder, is coupled to the digital signal processor 324. Further, a video amplifier 336 is coupled to the video encoder 334 and the touch screen display 332. Also, a video port 338 is coupled to the video amplifier 336. As depicted in FIG. 3, a universal serial bus (USB) controller 340 is coupled to the digital signal processor 324. Also, a USB port 342 is coupled to the USB controller 340. A memory 344 and a subscriber identity module (SIM) card 346 may also be coupled to the digital signal processor 324. Further, as shown in FIG. 3, a digital camera 348 may be coupled to the digital signal processor 324. In an exemplary aspect, the digital camera 348 is a charge-coupled device (CCD) camera or a complementary metal-oxide semiconductor (CMOS) camera. [0044] As further illustrated in FIG. 3, a stereo audio CODEC 350 may be coupled to the analog signal processor 326. Moreover, an audio amplifier 352 may be coupled to the stereo audio CODEC 350. In an exemplary aspect, a first stereo speaker 354 and a second stereo speaker 356 are coupled to the audio amplifier 352. FIG. 3 shows that a microphone amplifier 358 may also be coupled to the stereo audio CODEC 350. Additionally, a microphone 360 may be coupled to the microphone amplifier 358. In a particular aspect, a frequency modulation (FM) radio tuner 362 may be coupled to the stereo audio CODEC 350. Also, an FM antenna 364 is coupled to the FM radio tuner 362. Further, stereo headphones 366 may be coupled to the stereo audio CODEC 350. [0045] FIG. 3 further indicates that a radio frequency (RF) transceiver 368 may be coupled to the analog signal processor 326. An RF switch 370 may be coupled to the RF transceiver 368 and an RF antenna 372. As shown in FIG. 3, a keypad 374 may be coupled to the analog signal processor 326. 
Also, a mono headset with a microphone 376 may be coupled to the analog signal processor 326. Further, a vibrator device 378 may be coupled to the analog signal processor 326. FIG. 3 also shows that a power supply 380 may be coupled to the on-chip system 322. In a particular aspect, the power supply 380 is a direct current (DC) power supply that provides power to the various components of the PCD 320 that require power. Further, in a particular aspect, the power supply is a rechargeable DC battery or a DC power supply that is derived from an alternating current (AC) to DC transformer that is connected to an AC power source. [0046] FIG. 3 further indicates that the PCD 320 may also include a network card 388 that may be used to access a data network, e.g., a local area network, a personal area network, or any other network. The network card 388 may be a Bluetooth network card, a WiFi network card, a personal area network (PAN) card, a personal area network ultra-low-power technology (PeANUT) network card, or any other network card well known in the art. Further, the network card 388 may be incorporated into a chip, i.e., the network card 388 may be a full solution in a chip, and may not be a separate network card 388. [0047] As depicted in FIG. 3, the touch screen display 332, the video port 338, the USB port 342, the camera 348, the first stereo speaker 354, the second stereo speaker 356, the microphone 360, the FM antenna 364, the stereo headphones 366, the RF switch 370, the RF antenna 372, the keypad 374, the mono headset 376, the vibrator 378, and the power supply 380 are external to the on-chip system 322. [0048] In a particular aspect, one or more of the method steps described herein may be stored in the memory 344 as computer program instructions. These instructions may be executed by a processor 324, 326 in order to perform the methods described herein. Further, the processors 324, 326, the memory 344, the display controller 328, the touch screen controller 330, or a combination thereof may serve as a means for executing one or more of the method steps described herein in order to display graphical user interface objects at the display/touch screen 332. [0049] Referring to FIG. 4 through FIG. 11, a method of displaying user interface objects at a display is shown and is generally designated 400. The method 400 may commence at block 402 of FIG. 4 with a do loop in which, when a user interface is displayed, the following steps may be performed, e.g., by a display controller. At block 404, the controller may display a GUI object menu. Next, at block 406, the controller may display a subset of a total number of GUI objects, N, in the GUI object menu. At block 408, the controller may display a wrinkle indicator, e.g., a wrinkled portion, at an end of the GUI object menu to indicate additional GUI objects available off screen. The wrinkle indicator may be at the left end of the GUI object menu, at the right end of the GUI object menu, or a combination thereof. From block 408, the method 400 may continue to block 409 and the controller may wait for an input from a user, e.g., movement of the GUI object menu to the left or to the right. In other words, the controller may monitor the GUI object menu in order to detect whether the GUI object menu is moved in either direction. [0050] Moving to decision 410, the controller may determine whether the menu moved, e.g., left or right, up or down, one corner to another corner, etc. 
For ease of discussion, the method 400 is only described in terms of left-to-right or right-to-left movement. It may be appreciated that by replacing left and right with up and down, or other paired-directional indicators, the method 400 may be applied to other types of linear movement in opposite directions. [0051] Returning to decision 410, if the menu is not moved, the method 400 may return to block 409 and the method 400 may continue as described herein. If the menu is moved, the method 400 may proceed to block 412 and the controller may determine a direction of motion. In a particular aspect, the direction of motion may be a first direction or a second direction. Further, the first direction of motion and the second direction of motion may be in opposite directions. For example, at decision 414, the controller may determine whether the direction of motion is to the left or to the right. If the direction of motion is to the right, i.e., a first direction, the method 400 may proceed directly to decision 802 of FIG. 8. [0052] On the other hand, if the direction of motion is to the left, a second direction, the method 400 may proceed to decision 416 and the controller may determine whether there are any GUI objects off screen to the right. If there are not any GUI objects off screen to the right, the method 400 may continue to block 418 and the controller may display an unwrinkled right end of the menu. Thereafter, at block 420, the controller may allow the menu to stretch from the right end of the menu. Then, the method 400 may return to block 409 and the method 400 may continue as described herein. [0053] Returning to decision 416, if there are GUI objects off screen to the right, the method 400 may move to block 502 of FIG. 5. At block 502 of FIG. 5, the controller may designate a GUI object that is nearest the left end of the menu as an exiting GUI object and the controller may designate a GUI object that is immediately off screen to the right as an entering GUI object. At block 504, the controller may monitor a location of the exiting GUI object. Moreover, at decision 506, the controller may determine whether an exiting GUI object is within a first distance, D1, of the left side, or left edge, of the display. If not, the method 400 may return to block 504 and the method 400 may continue as described herein. [0054] Conversely, if the exiting GUI object is within the first distance, D1, of the left side of the display, the method 400 may move to block 508 and the controller may begin wrinkling the left end of the GUI object menu. Next, at decision 510, the controller may determine whether there is continued movement of the GUI object menu. If not, the method 400 may move to block 512, and the controller may maintain current stationary display. Then, the method 400 may return to block 409 of FIG. 4 and the method 400 may continue as described herein. [0055] Returning to decision 510, if there is continued movement of the GUI object menu, the method 400 may continue to decision 514 and the controller may determine whether the movement of the GUI object menu is in the same direction as the previous motion. If not, the method 400 may move directly to decision 802 of FIG. 8 and the method 400 may continue as described herein. If the movement of the GUI object menu is in the same direction as the previous movement of the GUI object menu, the method 400 may move to block 516 and the controller may continue wrinkling the left end of the GUI object menu. 
Next, at decision 518, the controller may determine whether the exiting GUI object is within a second distance, D2, of the left side, or left edge, of the display. If the GUI object is not within the second distance of the left side of the display, the method 400 may return to block 516. Otherwise, the method 400 may move to block 602 of FIG. 6. [0056] At block 602 of FIG. 6, the controller may begin collapsing the exiting GUI object. At block 604, the controller may begin unwrinkling the right end of the GUI object menu. Also, at block 606, the controller may begin expanding an entering GUI object, i.e., a GUI object that is entering the GUI object menu, from off screen to the right. At decision 608, the controller may determine whether there is continued movement, or motion, of the GUI object menu. If not, the method 400 may move to block 610 and the controller may maintain current stationary display. Then, the method 400 may return to block 409 of FIG. 4 and the method 400 may proceed as described herein. [0057] Returning to decision 608, if there is continued movement of the GUI object menu, the method 400 may move to decision 612 and the controller may determine whether the movement is in the same direction as the previous movement of the GUI object menu. If not, the method 400 may move directly to decision 802 of FIG. 8 and the method 400 may continue as described herein. On the other hand, if the movement is in the same direction as the previous movement, or motion, the method 400 may continue to block 614 and the controller may continue collapsing the exiting GUI object. At block 616, the controller may continue unwrinkling the right end of the GUI object menu. Further, at block 618, the controller may continue expanding the entering GUI object. At decision 620, the controller may determine whether the exiting GUI object is completely off screen, i.e., it has moved off of the display to the left. If not, the method 400 may return to block 614 and the method 400 may continue as described herein. Otherwise, if the exiting GUI object is off screen, the method 400 may proceed to block 702 of FIG. 7. [0058] At block 702 of FIG. 7, the controller may display a wrinkle indicator, e.g., a wrinkled portion, at the left end of the GUI object menu to indicate additional GUI objects available off screen left. Further, at block 704, the controller may fully expand the entering GUI object. Moving to decision 706, the controller may determine whether there are more GUI objects available off screen to the right. If not, the method 400 may proceed to block 708 and the controller may display a non-wrinkle indicator, e.g., an unwrinkled portion, at the right end of the GUI object menu to indicate no additional GUI objects are available off screen to the right. Next, at decision 710, the controller may determine whether the device is powered off. If so, the method 400 may end. Conversely, if the device remains powered on, the method 400 may return to block 409 of FIG. 4. [0059] Returning to decision 706, if more GUI objects are available off screen to the right, the method 400 may move to block 712 and the controller may display a wrinkle indicator at the right end of the GUI object menu to indicate that there are additional GUI objects available off screen to the right. From block 712, the method 400 may continue to decision 710 and the method 400 may continue as described herein. [0060] Returning to decision 414 of FIG. 
4, if the direction of motion, or movement, of the GUI object menu is to the right, the method 400 may move to decision 802 of FIG. 8. [0061] At decision 802 of FIG. 8, the controller may determine whether there are any GUI objects off screen to the left. If there are not any GUI objects off screen to the left, the method 400 may continue to block 804 and the controller may display an unwrinkled left end of the menu. Thereafter, at block 806, the controller may allow the menu to stretch from the left end of the menu. Then, the method 400 may return to block 409 of FIG. 4, and the method 400 may continue as described herein. [0062] Returning to decision 802, if there are GUI objects off screen to the left, the method 400 may move to block 808. At block 808, the controller may designate a GUI object that is nearest the right end of the menu as an exiting GUI object and the controller may designate a GUI object that is immediately off screen to the left as an entering GUI object. At block 810, the controller may monitor a location of the exiting GUI object. Moreover, at decision 812, the controller may determine whether an exiting GUI object is within a first distance, D1, of the right side, or right edge, of the display. If not, the method 400 may return to block 810 and the method 400 may continue as described herein. [0063] Conversely, if the exiting GUI object is within the first distance, D1, of the right side of the display, the method 400 may move to block 902 of FIG. 9. At block 902 of FIG. 9, the controller may begin wrinkling the right end of the GUI object menu. Next, at decision 904, the controller may determine whether there is continued movement of the GUI object menu. If not, the method 400 may move to block 906, and the controller may maintain current stationary display. Then, the method 400 may return to block 409 of FIG. 4 and the method 400 may continue as described herein. [0064] Returning to decision 904, if there is continued movement of the GUI object menu, the method 400 may continue to decision 908 and the controller may determine whether the movement of the GUI object menu is in the same direction as the previous motion. If not, the method 400 may return to decision 416 of FIG. 4 and the method 400 may continue as described herein. If the movement of the GUI object menu is in the same direction as the previous movement of the GUI object menu, the method 400 may move to block 910 and the controller may continue wrinkling the right end of the GUI object menu. Next, at decision 912, the controller may determine whether the exiting GUI object is within a second distance, D2, of the right side, or right edge, of the display. If not, the method 400 may return to block 910 and continue as described herein. Otherwise, the method 400 may proceed to block 1002 of FIG. 10. [0065] At block 1002 of FIG. 10, the controller may begin collapsing the exiting GUI object. At block 1004, the controller may begin unwrinkling the left end of the GUI object menu. Also, at block 1006, the controller may begin expanding an entering GUI object, i.e., a GUI object that is entering the GUI object menu from off screen to the left. At decision 1008, the controller may determine whether there is continued movement, or motion, of the GUI object menu. If not, the method 400 may move to block 1010 and the controller may maintain current stationary display. Then, the method 400 may return to block 409 of FIG. 4 and the method 400 may proceed as described herein. 
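Because the remaining right-direction steps below mirror the left-direction steps above, the two distance thresholds can be captured in a single, direction-agnostic sketch. The snippet that follows is a hypothetical illustration only; the function, its arguments, and the pixel values are assumptions and do not correspond to the actual flowchart blocks:

```python
# Hypothetical sketch of the two-threshold behavior described above.
# D1 is the first predetermined distance from the display edge toward which
# the menu is being dragged; D2 is the closer, second predetermined distance.
D1 = 40  # pixels, illustrative value only
D2 = 15  # pixels, illustrative value only

def drag_actions(exiting_distance_px, objects_waiting_offscreen):
    """Return display actions for the current drag position.

    objects_waiting_offscreen: whether GUI objects are available off screen on
    the side opposite the direction of motion (where new objects come from).
    """
    if not objects_waiting_offscreen:
        # Nothing to scroll in: the trailing end stays unwrinkled and stretches.
        return ["unwrinkle trailing end", "stretch trailing end"]
    actions = []
    if exiting_distance_px <= D1:
        actions.append("wrinkle leading end")        # blocks 508/902
    if exiting_distance_px <= D2:
        actions += ["collapse exiting object",       # blocks 602/1002
                    "unwrinkle trailing end",        # blocks 604/1004
                    "expand entering object"]        # blocks 606/1006
    return actions

# Example: an exiting GUI object 10 px from the leading edge with more objects
# waiting off screen behind the motion.
print(drag_actions(10, objects_waiting_offscreen=True))
```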
[0066] Returning to decision 1008, if there is continued movement of the GUI object menu, the method 400 may move to decision 1012 and the controller may determine whether the movement is in the same direction as the previous movement of the GUI object menu. If not, the method 400 may move directly to decision 416 of FIG. 4 and the method 400 may continue as described herein. On the other hand, if the movement is in the same direction as the previous movement, or motion, the method 400 may continue to block 1014 and the controller may continue collapsing the exiting GUI object. At block 1016, the controller may continue unwrinkling the left end of the GUI object menu. Further, at block 1018, the controller may continue expanding the entering GUI object. Next, at decision 1020, the controller may determine whether the exiting GUI object is completely off screen, i.e., it has moved off of the display to the right. If not, the method 400 may return to block 1014 and the method 400 may continue as described herein. Otherwise, if the exiting GUI object is off screen, the method 400 may proceed to block 1102 of FIG. 11. [0067] At block 1102 of FIG. 11, the controller may display a wrinkle indicator at the right end of the GUI object menu to indicate additional GUI objects available off screen right. Further, at block 1104, the controller may fully expand the entering GUI object. Moving to decision 1106, the controller may determine whether there are more GUI objects available off screen to the left. If not, the method 400 may move to block 1108 and the controller may display a non-wrinkle indicator at the left end of the GUI object menu to indicate no additional GUI objects are available off screen to the left. Next, at decision 1110, the controller may determine whether the device is powered off. If so, the method 400 may end. Conversely, if the device remains powered on, the method 400 may return to block 409 of FIG. 4. [0068] Returning to decision 1106, if more GUI objects are available off screen to the left, the method 400 may move to block 1112 and the controller may display a wrinkle indicator at the left end of the GUI object menu to indicate that there are additional GUI objects available off screen to the left. From block 1112, the method 400 may continue to decision 1110 and the method 400 may continue as described herein. [0069] In a particular aspect, as described herein, the method may utilize a first predetermined distance from a left side of a display, a second predetermined distance from a left side of a display, a first predetermined distance from a right side of a display, and a second predetermined distance from a right side of a display in conjunction with an exiting GUI object in order to trigger the formation of a wrinkled portion and the collapsing of a GUI object. It may be appreciated that other distances to the sides of the display may be used to trigger other actions relating to the GUI object menu and the GUI objects. [0070] Referring now to FIG. 12, a graphical user interface (GUI) object menu is shown and is generally designated 1200. As shown, the GUI object menu 1200 may be displayed on a display 1202 of a portable computing device (PCD) 1204. The GUI object menu 1200 may include a left end 1206 and a right end 1208. Further, the GUI object menu 1200 may include one or more GUI objects 1210. In a particular aspect, each GUI object 1210 may be an icon, a thumbnail, or a combination thereof. 
Also, each GUI object 1210 may represent an application, a file, a web page, or a combination thereof. Further, the GUI object menu may include a single row of GUI objects 1210, as shown, or multiple rows of GUI objects 1210. [0071] As shown in FIG. 12, the left end 1206 of the GUI object menu 1200 may include an unwrinkled portion 1212 in order to indicate that there are not any GUI objects available off screen to the left of the display 1202. Further, the right end 1208 of the GUI object menu 1200 may include a wrinkled portion 1214 in order to indicate that there are GUI objects available off screen to the right of the display 1202. As further indicated in FIG. 12, the display 1202 may include a left side 1216 and a right side 1218. [0072] During operation, the GUI object menu 1200 may be moved to the left and right if GUI objects are available off screen on opposite sides of the attempted direction of motion. For example, if there are not any GUI objects available off screen to the left as indicated by the unwrinkled portion 1212 of the left end 1206 of the GUI object menu 1200 and a user attempts to move the GUI object menu 1200 to the right, the left end 1206 of the GUI object menu 1200 may stretch and return to a starting position as if the left end 1206 of the GUI object menu 1200 is elastic. [0073] On the other hand, if there are GUI objects available off screen to the right, as indicated by the wrinkled portion 1214 of the right end 1208 of the GUI object menu 1200, the GUI object menu may be moved to the left as indicated by arrow 1220 in FIG. 13. Further, as the GUI object menu is moved to the left, the GUI object 1210 that is closest to the left side of the display 1202 may be designated as an exiting GUI object 1222. As shown in FIG. 13, when the exiting GUI object 1222 is within a first predetermined distance, D1, of the left side 1216 of the display 1202, the left end 1206 of the GUI object menu 1200 may begin to form an initial wrinkled portion 1224. [0074] FIG. 14 illustrates the initial wrinkled portion 1224 in detail. As shown, the initial wrinkled portion 1224 may be a waveform having a generally triangular shape. Further, the initial wrinkled portion 1224 may have an amplitude, A, and a wavelength, WL. [0075] As the GUI object menu 1200 continues to move to the left, as indicated in FIG. 15, when the exiting GUI object 1222 is within a second predetermined distance, D2, of the left side 1216 of the display 1202, the left end 1206 of the GUI object menu 1200 may form a first intermediate wrinkled portion 1226 and the exiting GUI object 1222 may begin to collapse. Further, the right end 1208 of the GUI object menu 1200 may form an initial unwrinkled portion 1228 and an entering GUI object 1230 may begin to expand into the GUI object menu 1200. [0076] FIG. 16 illustrates the first intermediate wrinkled portion 1226 in detail. As shown, the intermediate wrinkled portion 1226 may be a waveform having a generally triangular shape. As shown in FIG. 16, the first intermediate wrinkled portion 1226 may have an amplitude, A, and a wavelength, WL. In a particular aspect, the amplitude of the first intermediate wrinkled portion 1226 may be substantially the same as the amplitude of the initial wrinkled portion 1224. Moreover, the wavelength of the first intermediate wrinkled portion 1226 may be less than the wavelength of the initial wrinkled portion 1224. [0077] FIG. 
17 illustrates that as the GUI object menu 1200 continues to move to the left, the exiting GUI object 1222 may continue to collapse and a second intermediate wrinkled portion 1232 may be formed on the left end 1206 of the GUI object menu 1200. Further, the entering GUI object 1230 may continue to expand and a first intermediate unwrinkled portion 1234 may be formed on the right end 1208 of the GUI object menu 1200. [0078] FIG. 18 depicts the second intermediate wrinkled portion 1232 in detail and indicates that the second intermediate wrinkled portion 1232 may be a waveform having a generally triangular shape. Moreover, FIG. 18 shows that the second intermediate wrinkled portion 1232 may have an amplitude, A, and a wavelength, WL. In a particular aspect, the amplitude of the second intermediate wrinkled portion 1232 may be substantially the same as the amplitude of the initial wrinkled portion 1224 and the amplitude of the first intermediate wrinkled portion 1226. The wavelength of the second intermediate wrinkled portion 1232 may be less than the wavelength of the first intermediate wrinkled portion 1226. [0079] Referring now to FIG. 19, as the GUI object menu 1200 continues to move to the left, the exiting GUI object 1222 may continue to collapse and a third intermediate wrinkled portion 1236 may be formed. The entering GUI object 1230 may continue to expand and a second intermediate unwrinkled portion 1238 may be formed on the right end 1208 of the GUI object menu 1200. [0080] FIG. 20 shows the third intermediate wrinkled portion 1236 in detail. FIG. 20 indicates that the third intermediate wrinkled portion 1236 may be a waveform having a generally triangular shape. Also, FIG. 20 indicates that the third intermediate wrinkled portion 1236 may have an amplitude, A, and a wavelength, WL. In a particular aspect, the amplitude of the third intermediate wrinkled portion 1236 may be substantially the same as the amplitude of the initial wrinkled portion 1224, the amplitude of the first intermediate wrinkled portion 1226, and the amplitude of the second intermediate wrinkled portion 1232. The wavelength of the third intermediate wrinkled portion 1236 may be less than the wavelength of the second intermediate wrinkled portion 1232. [0081] As the GUI object menu 1200 continues to move to the left, as indicated in FIG. 21, the exiting GUI object 1222 may collapse completely and disappear from the display 1202. Further, the left end 1206 of the GUI object menu 1200 may form a final wrinkled portion 1240. Also, the entering GUI object 1230 may fully expand into the GUI object menu 1200 and a third intermediate unwrinkled portion 1242 may be formed on the right end 1208 of the GUI object menu 1200. [0082] FIG. 22 shows the final wrinkled portion 1240 in detail. FIG. 22 indicates that the final wrinkled portion 1240 may be a waveform having a generally triangular shape. Also, FIG. 22 indicates that the final wrinkled portion 1240 may have an amplitude, A, and a wavelength, WL. In a particular aspect, the amplitude of the final wrinkled portion 1240 may be substantially the same as the amplitude of the initial wrinkled portion 1224, the amplitude of the first intermediate wrinkled portion 1226, the amplitude of the second intermediate wrinkled portion 1232, and the amplitude of the third intermediate wrinkled portion 1236. Further, the wavelength of the final wrinkled portion 1240 may be less than the wavelength of the third intermediate wrinkled portion 1236.
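The geometric relationship stated for FIGS. 14, 16, 18, 20, and 22, namely a roughly constant amplitude A and a wavelength WL that shrinks at each stage, may be sketched as follows. The linear interpolation and the numeric endpoint values are assumptions chosen only for illustration; the description requires only that each successive wrinkled portion have a smaller wavelength than the one before it.

```cpp
// Sketch of the wrinkle geometry of FIGS. 14-22: the triangular waveform keeps
// a roughly constant amplitude A while its wavelength WL shrinks at each stage
// (initial 1224, first 1226, second 1232, third 1236, final 1240). The linear
// interpolation and the endpoint values are assumptions for illustration only.
#include <cstdio>

struct Wrinkle {
    float amplitude;   // A  (substantially constant across stages)
    float wavelength;  // WL (decreases as the exiting object collapses)
};

// progress runs from 0.0 (initial wrinkled portion) to 1.0 (final wrinkled portion).
Wrinkle wrinkleAt(float progress, float amplitude = 12.0f,
                  float initialWL = 40.0f, float finalWL = 10.0f) {
    return { amplitude, initialWL + (finalWL - initialWL) * progress };
}

int main() {
    const char* stage[] = { "initial 1224", "first 1226", "second 1232",
                            "third 1236", "final 1240" };
    for (int i = 0; i < 5; ++i) {
        Wrinkle w = wrinkleAt(i / 4.0f);
        std::printf("%-12s  A = %.1f  WL = %.1f\n", stage[i], w.amplitude, w.wavelength);
    }
    return 0;
}
```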
[0083] In a particular aspect, as illustrated in FIG. 23, when the exiting GUI object (not shown in FIG. 23) completely exits the GUI object menu 1200 (and the display 1202), the entering GUI object 1230 completely enters the GUI object menu 1200 (and the display 1202), and there are remaining GUI objects off screen to the right of the display 1202, the left end 1206 of the GUI object menu 1200 may include a final wrinkled portion 144. Further, the right end 1208 of the GUI object menu 1200 may include a final wrinkled portion 146. The final wrinkled portions 144, 146 on each end 1206, 1208 of the GUI object menu 1200 may indicate that there are GUI objects off screen to the left of the display 1202 and to the right of the display 1202. [0084] FIG. 24 through FIG. 29 show that a user may continue to move the GUI object menu 1200 to the left in the same manner as described herein. As the user continues to move the GUI object menu 1200 to the left, exiting GUI objects may collapse, entering GUI objects may expand, and the various wrinkled portions and unwrinkled portions on the GUI object menu 1200 may be displayed. When there are no more GUI objects available to the right of the display 1202, the right end 1208 of the GUI object menu 1200 may display a final unwrinkled portion 1244, as shown in FIG. 29. The final unwrinkled portion 1244 may indicate that there are not any more GUI objects available to the right of the display 1202. If a user continues to attempt to move the GUI object menu 1200 to the left, even though there are not any GUI objects available to the right of the display, the right end 1208 of the GUI object menu 1200 may form an unwrinkled, stretched portion 1246 as illustrated in FIG. 30. When the user releases the GUI object menu 1200, the right end 1208 of the GUI object menu 1200 may return to the final unwrinkled portion 1244 shown in FIG. 29. It may be appreciated that the final unwrinkled portion 1244 is also an unstretched portion. [0085] In a particular embodiment, the initial wrinkled portion 1224 is similarly sized and shaped as the third intermediate unwrinkled portion 1242. The first intermediate wrinkled portion 1226 is similarly sized and shaped as the second intermediate unwrinkled portion 1238. Moreover, the second intermediate wrinkled portion 1232 is similarly sized and shaped as the first intermediate unwrinkled portion 1234. The third intermediate wrinkled portion 1236 is similarly sized and shaped as the initial unwrinkled portion 1228. The difference between the various wrinkled portions and the unwrinkled portions is the final shape, e.g., a final wrinkled portion or a final unwrinkled portion, to which the wrinkled portion or unwrinkled portion leads. [0086] FIG. 12 through FIG. 30 indicate the movement of the GUI object menu 1200 as a user swipes, or otherwise moves, the GUI object menu 1200 to the left. However, it may be appreciated that moving, or swiping, the GUI object menu 1200 to the right would cause the GUI object menu 1200 to move in the same manner, but in the opposite direction, e.g., starting at FIG. 29 and moving backwards through the FIGs to FIG. 12. [0087] It is to be understood that the method steps described herein need not necessarily be performed in the order as described. Further, words such as "thereafter," "then," "next," etc. are not intended to limit the order of the steps. These words are simply used to guide the reader through the description of the method steps.
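A minimal sketch of the per-end indicator behavior of FIGS. 12, 23, 29, and 30 is given below: an end is wrinkled when GUI objects remain off screen on that side, unwrinkled when none remain, and temporarily stretched when a drag attempts to pull in objects from a side that has none. The function and enumeration names are illustrative assumptions rather than elements of the described method.

```cpp
// Sketch of the per-end indicator logic of FIGS. 12, 23, 29, and 30: an end is
// wrinkled when GUI objects remain off screen on that side, unwrinkled when
// none remain, and temporarily stretched when the user keeps dragging so that
// new objects would have to enter from that (empty) side. Names are assumptions.
#include <iostream>

enum class EndState { Wrinkled, Unwrinkled, Stretched };

// dragPullsFromThisSide: true for the end on the side from which new objects
// would enter given the current drag (e.g., the right end when dragging left).
EndState endState(bool objectsOffScreenThisSide, bool dragPullsFromThisSide) {
    if (objectsOffScreenThisSide)
        return EndState::Wrinkled;                       // e.g., wrinkled portion 1214
    return dragPullsFromThisSide ? EndState::Stretched   // stretched portion 1246
                                 : EndState::Unwrinkled; // unwrinkled portion 1212 / 1244
}

const char* name(EndState s) {
    switch (s) {
        case EndState::Wrinkled:   return "wrinkled";
        case EndState::Unwrinkled: return "unwrinkled";
        default:                   return "stretched";
    }
}

int main() {
    // FIG. 12: nothing off screen left, objects off screen right, no drag.
    std::cout << "left end:  " << name(endState(false, false)) << "\n";
    std::cout << "right end: " << name(endState(true,  false)) << "\n";
    // FIG. 30: dragging left although nothing remains off screen to the right.
    std::cout << "right end: " << name(endState(false, true)) << "\n";
    return 0;
}
```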
Moreover, the methods described herein are described as executable on a portable computing device (PCD). The PCD may be a mobile telephone device, a portable digital assistant device, a smartbook computing device, a netbook computing device, a laptop computing device, a desktop computing device, or a combination thereof. [0088] The method disclosed herein provides one or more ways to display graphical user interface objects on a display. When a GUI object menu is displayed, either end of the GUI object menu may be wrinkled. The wrinkle may indicate that one or more GUI objects may be available off screen on the same side of the display as the wrinkled end of the GUI object menu. If the GUI object menu is wrinkled on both ends, GUI objects may be available off screen on each side of the display. If an end of the GUI object menu is unwrinkled, no GUI objects may be available off screen on the side of the display corresponding to the unwrinkled end of the GUI object menu. [0089] In a particular aspect, the GUI object menu may be moved in a first direction and a second direction that is opposite the first direction. In a particular aspect, the first direction may be left and the second direction may be right. In another aspect, the first direction may be right and the second direction may be left. In yet another aspect, the first direction may be up and the second direction may be down. In still another aspect, the first direction may be down and the second direction may be up. In another aspect, the first direction may be forward and the second direction may be backward. In another aspect, the first direction may be backward and the second direction may be forward. In yet another aspect, the first direction may be at an angle and the second direction may be one hundred and eighty degrees (180°) from that angle, e.g., the first direction may be at an angle equal to forty-five degrees (45°) and the second direction may be at an angle equal to two hundred twenty-five degrees (225°). [0090] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer program product such as a machine readable medium, i.e., a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [0091] Although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims. |
A method (100) of forming a shallow junction semiconductor device comprises forming source/drain extension regions with a non-amorphizing tail implant (105) which is annealed (spike/RTP) and an amorphizing implant which is re-grown epitaxially (SPER) (110). The non-amorphizing tail implant is generally annealed (106) before a doped amorphous layer for SPE is formed (107). SPE provides a high active dopant concentration in a shallow layer. The non-amorphizing tail implant (105) expands the source/drain extension region beyond the range dictated by the SPE-formed layer and keeps the depletion region of the P-N junction away from where end-of-range defects form during the SPE process. Thus, the SPE-formed layer primarily determines the conductivity of the junction while the tail implant determines the location of the depletion region. End-of-range defects form, but are not in a position to cause significant reverse bias leakage. |
CLAIMS 1. A method of forming a P-N junction within a semiconductor substrate, comprising: providing a crystalline semiconductor substrate having a first doping type; forming a gate stack over the substrate and patterning the stack to form gates; forming spacers adjacent the gate stacks; implanting a dopant of a second type to form deep source/drain implants; etching the spacers; implanting dopant of the second type to form source/drain extension region tail implants having a depth within the substrate; forming an amorphous semiconductor layer having the second doping type within and overlying the source/drain extension region tail implants; and performing a low temperature anneal to cause epitaxial growth within the amorphous layer, wherein the region of amorphization and epitaxial growth does not extend to the depth of the source/drain extension region tail implant. 2. The method of Claim 1, further comprising annealing the source/drain extension region tail implants prior to forming the amorphous semiconductor layer. 3. The method of Claim 1, wherein forming the amorphous semiconductor layer comprises an amorphizing ion implant. 4. The method of Claim 1, further comprising: forming a second set of spacers adjacent the gate stacks after forming the amorphous semiconductor layer; and forming a second, deeper, amorphous layer over the source/drain regions prior to performing the low temperature anneal. 5. A semiconductor device, comprising: a gate having a gate electrode, a channel region below the gate electrode, a source region and a drain region to either side of the channel, and source and drain extension regions bridging small gaps between the source and drain regions and the channel region, the source and drain extension regions comprising a conductive layer having active dopant concentrations of at least about 2.0x10<20> atoms/cm<3>; and end-of-range defects at the limit of the conductive layer; wherein the extension regions extend beyond the conductive layer and the end-of-range defects are displaced from depletion regions formed between the source/drain extension regions and the channel regions beneath the gates. 6. The semiconductor device of Claim 5, wherein the conductive layer has active dopant concentrations of at least about 2.5x10<20> atoms/cm<3>; wherein the channel is about 45 nm wide or less; and wherein the conductive layer is no more than about 200 A deep.
HIGHLY CONDUCTIVE SHALLOW JUNCTION FORMATION The invention relates generally to semiconductor device manufacturing and more particularly to methods of manufacturing devices with ultra-shallow junctions. BACKGROUND In the semiconductor industry, there is a continuing trend toward high device densities. To achieve these high device densities, small features on semiconductor wafers are required. These features include source regions, drain regions, and channel regions that relate to devices, such as field effect transistors (FETs). In the process of scaling down complementary metal oxide semiconductor (CMOS) devices, which are a type of FET, a vertical dimension must be reduced at the same time as horizontal dimensions are reduced. In particular, in order to avoid short channel effects, source and drain regions, or at least source/drain extension regions adjacent the channel, must be made extremely shallow with a corresponding increase in dopant density to avoid excessive resistance. The formation of ultra-shallow junctions, that is, junctions having source/drain regions no more than about 35 nm thick and with a dopant concentration not less than 5x10<19> atoms/cm<3>, is considered one of the significant challenges in manufacturing the next generation of CMOS devices. The usual approach to forming source/drain regions is ion implantation. In the conventional approach, following implantation, the substrate is typically annealed to repair the lattice damage and activate the dopants. Such conventional anneal processes result in a modest amount of diffusion. A process that limits diffusion and results in higher than equilibrium dopant activation is solid phase epitaxial re-growth (SPER). SPER involves re-crystallizing an amorphous doped region at a relatively low temperature, wherein the resulting dopant profile is close to the implanted profile, with little dopant diffusion occurring during the re-crystallization process. While SPER of amorphous doped regions is effective in forming shallow implants with high dopant concentrations, there are obstacles to its implementation. The main concern is that end-of-range defects form in the region where re-crystallization begins. These defects, which are thought to involve interstitial silicon atoms, increase reverse bias leakage (and thereby increase off-state current). Thus, there remains an unsatisfied need for effective methods of forming ultra-shallow junctions. SUMMARY The following presents a simplified summary in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. The primary purpose of this summary is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later. One aspect of the invention relates to a method of forming a shallow junction. The method comprises forming source/drain extension regions with both a non-amorphizing tail implant followed by spike annealing and an amorphizing implant (which results in a doped amorphous layer) followed by solid phase epitaxy re-growth (SPER). The non-amorphizing tail implant is generally annealed before the shallow doped amorphous layer is formed. The amorphous layer is subsequently re-grown epitaxially (SPER), resulting in a high active dopant concentration in the shallow epitaxially re-grown layer.
The non-amorphizing tail implant expands the source/drain extension region beyond the original amorphous/crystalline interface and thus keeps the depletion region of the P-N junction away from where end-of-range defects form during the SPER process. The epitaxially re-grown doped layer primarily determines the conductivity of the junction while the tail implant determines the location of the depletion region. End-of-range defects form, but are not in a position to cause significant reverse bias leakage since they are located or positioned away from the depletion region. Another aspect of the invention relates to a semiconductor device, comprising a gate having a channel, source and drain regions, and source and drain extension regions. The source and drain extension regions comprise a highly conductive epitaxially re-grown layer having dopant concentrations of at least about 2.0x10<20> atoms/cm<3>. There are end-of-range defects at the end of the epitaxially re-grown layer, but the extension regions extend beyond the location of the end-of-range defects, and therefore the defects are displaced from the depletion regions formed between the source/drain extension regions and the channel. BRIEF DESCRIPTION OF THE DRAWINGS Example embodiments of the invention are described by way of illustration, with reference to the accompanying drawings wherein: FIG. 1 is a flow chart of an exemplary process according to the invention; FIG. 2 is a flow chart of another exemplary process according to the invention; FIG. 3 is a plan view of a semiconductor substrate illustrating field oxide islands formed thereon; FIG. 4 is a cross section diagram illustrating the semiconductor substrate of FIG. 3 taken along line A-A' after forming gate stacks; FIG. 5 is a cross section diagram illustrating the semiconductor substrate of FIG. 4 after patterning the gate stack; FIG. 6 is a cross section diagram illustrating the semiconductor substrate of FIG. 5 after forming spacers and deep source/drain implants; FIG. 7 is a cross section diagram illustrating the semiconductor substrate of FIG. 6 after removing the spacers and forming tail implants for source/drain extension regions; FIG. 8 is a cross section diagram illustrating the semiconductor substrate of FIG. 7 after SPER; and FIG. 9 is a plot showing exemplary tail and SPE implants for a shallow junction according to one aspect of the invention. DETAILED DESCRIPTION OF THE EMBODIMENTS FIG. 1 illustrates an exemplary process 100 for forming a P-N junction within a semiconductor substrate according to one aspect of the invention. The process 100 includes gate formation 101, forming sidewall spacers 102, forming deep source/drain implants 103, and etching away the sidewall spacer 104. The process further includes forming a source/drain extension region tail implant 105, annealing to activate the dopants 106, forming a doped amorphous layer 107, forming low temperature spacers 108, forming a thicker doped amorphous layer for the source/drain regions 109, and annealing at low temperature to induce solid phase epitaxial re-growth (SPER) within the doped amorphous layers 110. The gates are formed over a semiconductor substrate. The semiconductor substrate includes a semiconductor crystal, typically silicon. Other examples of semiconductors include GaAs, SiGe and InP. In addition to a semiconductor crystal, the substrate may include various elements therein and/or layers thereon.
These can include metal layers, barrier layers, dielectric layers, device structures, active elements and passive elements including word lines, source regions, drain regions, bit lines, bases, emitters, collectors, conductive lines, conductive vias, etc. Act 101 is forming gates on the semiconductor substrate. Forming gates generally involves forming isolation regions, providing a threshold implant into the substrate for the gate channels, forming a gate oxide layer, forming a gate electrode layer, and lithographically patterning the gates from the resulting gate stack. Lithographically patterning the gates generally involves forming a resist coating, patterning the resist, using the patterned resist to pattern the gate stacks, and then removing the resist. Act 102 is forming sidewall spacers. Sidewall spacers are generally formed from silicon nitride or other dielectric materials. The silicon nitride is deposited and then etched to expose the source/drain regions except immediately adjacent the gates, whereby the sidewall spacers provide a mask for subsequently performed deep source/drain implants. Act 103 is performing a deep source/drain tail implant. Act 104 is etching to remove the sidewall spacers and expose the semiconductor surface adjacent the gates where the source/drain extension regions will be formed. Act 105 comprises a non-amorphizing tail implant for the source/drain extension regions. This is a relatively light and shallow implant. Although the implant is shallow, it changes the conductivity type of the substrate to a depth greater than the depth of a subsequently provided amorphous layer created during act 107. The non-amorphizing implant provides contact between the channel regions and the source/drain extension regions but is not so deep or heavy as to cause short channel effects. Preferably, the implant alters the conductivity type of the substrate to a depth of about 500 A or less, more preferably about 300 A or less, most preferably about 200 A or less. Act 106 is a spike anneal to activate the dopants associated with the deep source/drain tail implant and the source/drain extension region tail implant. The temperature is raised briefly, but not so long as to cause excessive diffusion of the dopants. Spike anneals can be carried out with peak temperatures up to about 1100 [deg.]C. Act 107 comprises forming a doped amorphous layer. Generally this comprises amorphizing a layer of the substrate by ion bombardment. The amorphizing ions can be the dopant ions; however, where the dopant ions are light, as in the case of boron, neutral ions can be used for amorphization prior to implanting the dopant. An amorphous layer in the range from about 10 to about 100 nm thick (or deep) can be formed by bombarding the surface with from about 1x10<13> to about 1.5x10<15> atoms/cm<2> or more at an energy from about 2 to about 100 keV. For example, an amorphous layer from about 15 to about 20 nm thick (or deep) in silicon can be produced using about 1x10<14> to about 2x10<14> atoms/cm<2> Ge at an energy of about 15 keV, or alternatively with about 4x10<13> to about 5x10<13> atoms/cm<2> In or Sb at an energy of about 25 keV. Where doping and amorphization are two separate steps, amorphization takes place first in order to prevent dopant channeling during implantation. Act 108 is forming low temperature spacers. The purpose of these spacers is to mask the source/drain extension regions while forming a deep doped amorphous layer for the source/drain regions during step 109.
Low temperatures are used because high temperatures are avoided during all steps following the formation of the doped amorphous layer for the source/drain extension regions. This is done to ensure that the epitaxial re-growth of the extensions is simultaneous or concurrent with that of the deep source/drain regions. If the extension regions re-grow during the spacer formation, then the subsequent thermal treatment to activate/re-grow the deep source/drain regions may cause deactivation as well as diffusion of the dopants in the extension regions. Act 109 comprises forming a doped amorphous layer in the source/drain regions. This layer is deeper than the implant of act 107; however, it is shallower than the implant of act 103. Act 110 comprises heating the substrate to cause solid phase epitaxial re-growth (SPER) in the doped amorphous layers (in both the extension and deep source/drain regions). Mild heating, such as in the temperature range from about 550 [deg.]C to about 700 [deg.]C for about 10 minutes to about an hour, generally brings about crystal re-growth. For example, a silicon crystal can generally be re-grown by maintaining it at a temperature of about 600 [deg.]C for about half an hour. Crystals grow from the intact portion of the substrate beneath the amorphized layer. Preferably, the dopants within the amorphous layer substantially maintain their as-implanted concentration profiles during the SPER process. SPER incorporates the dopants into the re-grown crystal structure in substitutional sites. The resulting active dopant concentrations can exceed about 2.0x10<20> atoms/cm<3>, and preferably exceed about 2.5x10<20> atoms/cm<3>. FIG. 9 is a plot showing the typical dopant concentration profiles resulting from the acts 105 through 107 of the process 100. The Y-axis is the dopant concentration in atoms/cm<3> and the X-axis is depth in Angstroms. The tail implant provided by act 105 and indicated by the line 122 with diamond-shaped points is deeper than the region formed by act 107 and indicated by the line 120 with square points. The region identified as amorphous is re-crystallized by act 110. End-of-range defects remain at the boundary of the amorphous region after crystallization; however, due to the tail implant, these defects advantageously are not at the boundary of the doped region, which is where the depletion region occurs. FIG. 2 illustrates another exemplary process 200 for forming a P-N junction within a semiconductor substrate according to one aspect of the invention. The process 200 includes gate formation 201, forming sidewall spacers 202, forming deep source/drain implants 203, and etching away the sidewall spacer 204. The process further includes forming a source/drain extension region tail implant 205, annealing to activate the dopants 206, forming a doped amorphous layer 207, and annealing at low temperature to induce solid phase epitaxial re-growth (SPER) within the doped amorphous layers 208. Act 201 is forming gates on the semiconductor substrate, which is illustrated in one example with device 400 in FIGS. 3-5. The device 400 includes a semiconductor substrate 401 and field oxide islands 403. The field oxide can comprise any suitable insulator, including for example silicon dioxide or tetraethyl orthosilicate (TEOS). The field oxide islands 403 can be formed by any suitable process, for example LOCOS (local oxidation of silicon) or STI (shallow trench isolation), and can be formed in any type of pattern.
In fact, in many instances the isolation is formed in rings or other patterns to surround various different active regions. Act 201 further includes providing a threshold implant to the semiconductor of the substrate. This implant provides a first conductivity type within a layer of the semiconductor adjacent a surface of the substrate. Act 201 also comprises providing a gate layer. Generally, gate layers are formed with silicon dioxide and are referred to as gate oxide layers. However, for very small devices, it is often desirable to use a material that has a lower electrical resistance than silicon dioxide and can be provided in greater thickness than an equivalent silicon dioxide layer. Such materials are referred to as high-K dielectrics and include, for example, silicates, aluminates, titanates, and metal oxides. Examples of silicate high-K dielectrics include silicates of Ta, Al, Ti, Zr, Y, La and Hf, including Zr and Hf doped silicon oxides and silicon oxynitrides. Examples of aluminate high-K dielectrics include transition metal aluminates, such as compounds of Zr and Hf. Examples of titanate high-K dielectrics include BaTiO3, SrTiO3, and PbZrTiO3. Examples of metal oxide high-K dielectrics include oxides of refractory metals, such as Zr and Hf, and oxides of lanthanide series metals, such as La, Lu, Eu, Pr, Nd, Gd, and Dy. Additional examples of metal oxide high-K dielectrics include Al2O3, TiO2, Ta2O5, Nb2O5 and Y2O3. The gate layer is formed by any suitable process including, for example, oxidation, spin coating, or CVD. In one embodiment, the layer is from about 1 nm to about 100 nm thick. In another embodiment, the layer is from about 3 nm to about 50 nm thick. In a further embodiment, the layer is from about 5 nm to about 30 nm thick. Act 201 still further includes forming a gate electrode layer over the gate oxide layer. The gate electrode layer is typically a poly layer. FIG. 4 illustrates a cross-section of the substrate 400, taken along the line A-A' of FIG. 3 after formation of a gate layer 405 and a poly layer 407. A poly layer is one containing either amorphous silicon or polysilicon. In one embodiment, the poly layer has a thickness of about 40 nm to about 120 nm. In another embodiment, the poly layer has a thickness of about 50 nm to about 1000 nm. In a further embodiment, the poly layer has a thickness of about 60 nm to about 90 nm. Act 201 also includes patterning the poly layer. The first step in patterning is generally forming a resist coating over the poly layer. Any suitable resist may be used. The resist is lithographically patterned and the pattern is transferred by etching the exposed portion of the underlying poly and gate layers. FIG. 5 illustrates the substrate 400 after patterning with resist coating 409. After patterning the gate stacks, the resist is stripped. The pattern includes gaps that have any suitable size or shape. In one embodiment, the pattern includes gaps having widths within the range from about 0.01 to about 10 [mu]m. In another embodiment, the pattern includes gaps having widths within the range from about 0.01 to about 1.0 [mu]m. In a further embodiment, the pattern includes gaps having widths within the range from about 0.01 to about 0.045 [mu]m. Act 202 is forming the sidewall spacers 419. This comprises depositing a spacer material and anisotropically etching the material. The spacer material remains only adjacent the gate stacks, as illustrated for the device 400 in FIG. 6. Act 203 comprises a source/drain implant.
FIG. 6 illustrates the device 400 provided with source/drain regions 421. The spacer material 419 creates a separation between the source/drain regions 421 and the gate stacks. Act 204 is etching to remove the sidewall spacers. After the sidewall spacers are removed, the source/drain extension region tail implants are formed by act 205. Act 206 is a spike anneal to activate the implants. The resulting structure is illustrated in FIG. 7. The source/drain regions 421 have expanded to include extension regions 423. Act 207 is an amorphizing implant for the source/drain extension regions. This provides a doped amorphous layer across the entirety of the source/drain regions including the source/drain extension regions. In this example, a deeper doped amorphous layer for the deep source/drain regions is not provided. Act 208 is an SPER anneal to re-crystallize the amorphous layer and form the shallow highly conductive region 425 illustrated in FIG. 8. If contacts are placed in the source and drain regions, the resistances between the contacts include a resistance across the channel, a resistance through source/drain extension regions beyond the shallow highly conductive region 425, a resistance through the shallow highly conductive region 425, and a resistance through the deeper part of the source/drain regions 421. The source/drain extension regions beyond the shallow highly conductive region have a small conductive cross-section and a relatively low conductivity, but are very short and therefore do not substantially increase the overall resistance. The shallow highly conductive regions 425 are the dominant conductive element of the source/drain extension regions and greatly reduce the resistivity of these regions relative to the tail implant alone. The deep source/drain implants provide a much larger cross-sectional area for conduction and can maintain low resistance over comparatively long distances. The invention is particularly useful for semiconductor devices that are not stable at high temperatures. Examples of such devices include devices using SiGe semiconductor crystals and devices that use high-K dielectrics. Highly conductive shallow junctions and source/drain regions can be formed with a minimum of high-temperature processing. Those skilled in the art to which the invention relates will appreciate that additions, deletions, substitutions and other modifications can be made in the described examples, without departing from the scope of the claimed invention. |
Embodiments include computing devices, apparatus, and methods implemented by the apparatus for implementing profile guided indirect jump checking on a computing device, including encountering an indirect jump location while implementing an indirect jump during execution of a program, identifying an indirect jump target of the indirect jump, determining whether the indirect jump location and the indirect jump target are associated in a profile guided indirect jump table, and determining whether the indirect jump location and the indirect jump target are associated in a compiler guided indirect jump table in response to determining that the indirect jump location and the indirect jump target are not associated in the profile guided indirect jump table.
CLAIMSWhat is claimed is:1. A method of implementing profile guided indirect jump checking on a computing device, comprising:identifying an indirect jump target of an indirect jump in response toencountering an indirect jump location while implementing the indirect jump during execution of a program;determining whether the indirect jump location and the indirect jump target are associated in a profile guided indirect jump table; anddetermining whether the indirect jump location and the indirect jump target are associated in a compiler guided indirect jump table in response to determining that the indirect jump location and the indirect jump target are not associated in the profile guided indirect jump table.2. The method of claim 1, further comprising continuing to execute the program in response to determining that the indirect jump location and the indirect jump target are associated in the profile guided indirect jump table.3. The method of claim 1, further comprising:continuing to execute the program with a warning in response to determining that the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table; andaborting the program in response to determining that the indirect jump location and the indirect jump target are not associated in the compiler guided indirect jump table.4. The method of claim 1, further comprising:determining whether the indirect jump location is associated with a high confidence level in response to determining that the indirect jump location and the indirect jump target are not associated in the profile guided indirect jump table; and aborting the program in response to determining that the indirect jump location is associated with a high confidence level.5. The method of claim 4, wherein determining whether the indirect jump location and the indirect jump target are associated in a compiler guided indirect jump table comprises determining whether the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table in response to determining that the indirect jump location is not associated with a high confidence level,the method further comprising:continuing to execute the program with a warning in response to determining that the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table; andaborting the program in response to determining that the indirect jump location and the indirect jump target are not associated in the compiler guided indirect jump table.6. The method of claim 4, wherein determining whether the indirect jump location is associated with a high confidence level comprises retrieving a confidence level associated with the indirect jump location in the profile guided indirect jump table.7. The method of claim 4, wherein determining whether the indirect jump location is associated with a high confidence level comprises identifying a confidence level designated for the profile guided indirect jump table.8. The method of claim 1, wherein the profile guided indirect jump table is one of a plurality of indirect jump tables each containing less than all of the indirect jump locations for the program.9. 
A computing device, comprising:a processing device configured to perform operations comprising:identifying an indirect jump target of an indirect jump in response to encountering an indirect jump location while implementing the indirect jump during execution of a program;determining whether the indirect jump location and the indirect jump target are associated in a profile guided indirect jump table; anddetermining whether the indirect jump location and the indirect jump target are associated in a compiler guided indirect jump table in response to determining that the indirect jump location and the indirect jump target are not associated in the profile guided indirect jump table.10. The computing device of claim 9, wherein the processing device is configured to perform operations further comprising continuing to execute the program in response to determining that the indirect jump location and the indirect jump target are associated in the profile guided indirect jump table.11. The computing device of claim 9, wherein the processing device is configured to perform operations further comprising:continuing to execute the program with a warning in response to determining that the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table; andaborting the program in response to determining that the indirect jump location and the indirect jump target are not associated in the compiler guided indirect jump table.12. The computing device of claim 9, wherein the processing device is configured to perform operations further comprising:determining whether the indirect jump location is associated with a high confidence level in response to determining that the indirect jump location and the indirect jump target are not associated in the profile guided indirect jump table; and aborting the program in response to determining that the indirect jump location is associated with a high confidence level.13. The computing device of claim 12, wherein:the processing device is configured to perform operations such that determining whether the indirect jump location and the indirect jump target are associated in a compiler guided indirect jump table comprises determining whether the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table in response to determining that the indirect jump location is not associated with a high confidence level;the processing device is configured to perform operations further comprising: continuing to execute the program with a warning in response to determining that the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table; andaborting the program in response to determining that the indirect jump location and the indirect jump target are not associated in the compiler guided indirect jump table.14. The computing device of claim 12, wherein the processing device is configured to perform operations such that determining whether the indirect jump location is associated with a high confidence level comprises retrieving a confidence level associated with the indirect jump location in the profile guided indirect jump table.15.
The computing device of claim 12, wherein the processing device is configured to perform operations such that determining whether the indirect jump location is associated with a high confidence level comprises identifying a confidence level designated for the profile guided indirect jump table.16. The computing device of claim 9, wherein the profile guided indirect jump table is one of a plurality of indirect jump tables each containing less than all of the indirect jump locations for the program.17. A computing device, comprising:means for identifying an indirect jump target of an indirect jump in response to encountering an indirect jump location while implementing the indirect jump during execution of a program;means for determining whether the indirect jump location and the indirect jump target are associated in a profile guided indirect jump table; andmeans for determining whether the indirect jump location and the indirect jump target are associated in a compiler guided indirect jump table in response to determining that the indirect jump location and the indirect jump target are not associated in the profile guided indirect jump table.18. The computing device of claim 17, further comprising means for continuing to execute the program in response to determining that the indirect jump location and the indirect jump target are associated in the profile guided indirect jump table.19. The computing device of claim 17, further comprising:means for continuing to execute the program with a warning in response to determining that the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table; andmeans for aborting the program in response to determining that the indirect jump location and the indirect jump target are not associated in the compiler guided indirect jump table.20. The computing device of claim 17, further comprising:means for determining whether the indirect jump location is associated with a high confidence level in response to determining that the indirect jump location and the indirect jump target are not associated in the profile guided indirect jump table; andmeans for aborting the program in response to determining that the indirect jump location is associated with a high confidence level.21. The computing device of claim 20, wherein means for determining whether the indirect jump location and the indirect jump target are associated in a compiler guided indirect jump table comprises means for determining whether the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table in response to determining that the indirect jump location is not associated with a high confidence level,the computing device further comprising:means for continuing to execute the program with a warning in response to determining that the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table; andmeans for aborting the program in response to determining that the indirect jump location and the indirect jump target are not associated in the compiler guided indirect jump table.22. The computing device of claim 20, wherein means for determining whether the indirect jump location is associated with a high confidence level comprises means for retrieving a confidence level associated with the indirect jump location in the profile guided indirect jump table.23.
The computing device of claim 20, wherein means for determining whether the indirect jump location is associated with a high confidence level comprises means for identifying a confidence level designated for the profile guided indirect jump table.24. A non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a computing device to perform operations comprising:identifying an indirect jump target of an indirect jump in response to encountering an indirect jump location while implementing the indirect jump during execution of a program;determining whether the indirect jump location and the indirect jump target are associated in a profile guided indirect jump table; anddetermining whether the indirect jump location and the indirect jump target are associated in a compiler guided indirect jump table in response to determining that the indirect jump location and the indirect jump target are not associated in the profile guided indirect jump table.25. The non-transitory processor-readable storage medium of claim 24, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising continuing to execute the program in response to determining that the indirect jump location and the indirect jump target are associated in the profile guided indirect jump table.26. The non-transitory processor-readable storage medium of claim 24, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising:continuing to execute the program with a warning in response to determining that the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table; andaborting the program in response to determining that the indirect jump location and the indirect jump target are not associated in the compiler guided indirect jump table.27. The non-transitory processor-readable storage medium of claim 24, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising:determining whether the indirect jump location is associated with a high confidence level in response to determining that the indirect jump location and the indirect jump target are not associated in the profile guided indirect jump table; and aborting the program in response to determining that the indirect jump location is associated with a high confidence level.28. 
The non-transitory processor-readable storage medium of claim 27, wherein: the stored processor-executable instructions are configured to cause the processor to perform operations such that determining whether the indirect jump location and the indirect jump target are associated in a compiler guided indirect jump table comprises determining whether the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table in response to determining that the indirect jump location is not associated with a high confidence level;the stored processor-executable instructions are configured to cause the processor to perform operations further comprising:continuing to execute the program with a warning in response to determining that the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table; andaborting the program in response to determining that the indirect jump location and the indirect jump target are not associated in the compiler guided indirect jump table.29. The non-transitory processor-readable storage medium of claim 27, wherein the stored processor-executable instructions are configured to cause the processor to perform operations such that determining whether the indirect jump location is associated with a high confidence level comprises retrieving a confidence level associated with the indirect jump location in the profile guided indirect jump table.30. The non-transitory processor-readable storage medium of claim 27, wherein the stored processor-executable instructions are configured to cause the processor to perform operations such that determining whether the indirect jump location is associated with a high confidence level comprises identifying a confidence level designated for the profile guided indirect jump table. |
TITLE Profile Guided Indirect Function Call Check for Control Flow Integrity BACKGROUND[0001] Control flow integrity aims to ensure the order in which individual statements, instructions, or function calls of a software program are executed or evaluated by a processor. A part of control flow integrity prevents calling of a modified pointer to indirect jump/branch targets, such as could occur from arbitrary modifications of function pointers, virtual function calls, and function returns. The prevention of arbitrary modification of indirect jump/branch targets uses static analysis (by a compiler or instrumentation) to build tables of the legitimate indirect jump/branch targets. At runtime, the tables are used to check whether an indirect jump/branch is to a valid target.[0002] Such control flow integrity implementations have been shown to be insecure. To minimize runtime overhead, some runtime checks of the tables of the legitimate indirect jump/branch targets are removed or weakened. The control flow integrity also depends on static analysis to determine the legitimate jump/branch targets, which can result in incomplete identification of all legitimate jump/branch targets for a program. Thus, the tables are too coarse-grain, missing legitimate jump/branch targets and resulting in false negatives. The tables are also susceptible to attacks that swap pointers in the same table (e.g., pointers to read and write functions). A dynamic approach, such as cryptographic control flow integrity, can help address the susceptibility to attacks. However, such dynamic solutions incur much higher overhead, typically a 30% increase or a two-times slowdown in program execution.SUMMARY[0003] Various disclosed embodiments may include apparatuses and methods for implementing profile guided indirect jump checking on a computing device. Various embodiments may include identifying an indirect jump target of an indirect jump in response to encountering an indirect jump location while implementing the indirect jump during execution of a program. Some embodiments may include determining whether the indirect jump location and the indirect jump target are associated in a profile guided indirect jump table.
Some embodiments may include determining whether the indirect jump location and the indirect jump target are associated in a compiler guided indirect jump table in response to determining that the indirect jump location and the indirect jump target are not associated in the profile guided indirect jump table.[0004] Some embodiments may include continuing to execute the program in response to determining that the indirect jump location and the indirect jump target are associated in the profile guided indirect jump table.[0005] Some embodiments may include continuing to execute the program with a warning in response to determining that the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table and aborting the program in response to determining that the indirect jump location and the indirect jump target are not associated in the compiler guided indirect jump table.[0006] Some embodiments may include determining whether the indirect jump location is associated with a high confidence level in response to determining that the indirect jump location and the indirect jump target are not associated in the profile guided indirect jump table and aborting the program in response to determining that the indirect jump location is associated with a high confidence level.[0007] In some embodiments, determining whether the indirect jump location and the indirect jump target are associated in a compiler guided indirect jump table may include determining whether the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table in response to determining that the indirect jump location is not associated with a high confidence level. Some embodiments may include continuing to execute the program with a warning in response to determining that the indirect jump location and the indirect jump target are associated in the compiler guided indirect jump table and aborting the program in response to determining that the indirect jump location and the indirect jump target are not associated in the compiler guided indirect jump table.[0008] In some embodiments, determining whether the indirect jump location is associated with a high confidence level may include retrieving a confidence level associated with the indirect jump location in the profile guided indirect jump table.[0009] In some embodiments, determining whether the indirect jump location is associated with a high confidence level may include identifying a confidence level designated for the profile guided indirect jump table.[0010] In some embodiments, the profile guided indirect jump table is one of a plurality of indirect jump tables each containing less than all of the indirect jump locations for the program.[0011] Various embodiments may include a computing device having a processing device configured for profile guided indirect jump checking. 
The processing device may be configured to perform operations of one or more of the embodiment methods summarized above.[0012] Various embodiments may include a computing device having means for performing functions of one or more of the embodiment methods summarized above.[0013] Various embodiments may include a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a computing device to perform operations of one or more of the embodiment methods summarized above.BRIEF DESCRIPTION OF THE DRAWINGS[0014] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate example embodiments of various embodiments, and together with the general description given above and the detailed description given below, serve to explain the features of the claims.[0015] FIG. 1 is a component block diagram illustrating a computing device suitable for implementing various embodiments.[0016] FIG. 2 is a component block diagram illustrating an example multicore processor suitable for implementing various embodiments.[0017] FIG. 3 is a block diagram illustrating an example indirect jump profiling system suitable for implementing various embodiments.[0018] FIG. 4 is a diagram illustrating an example compiler guided indirect jump table suitable for implementing various embodiments.[0019] FIGS. 5A-5C are tables illustrating example profile guided indirect jump tables suitable for implementing various embodiments.[0020] FIG. 6 is a process flow diagram illustrating a method for implementing indirect jump profiling according to various embodiments.[0021] FIG. 7 is a process flow diagram illustrating a method for implementing indirect jump profiling according to various embodiments.[0022] FIG. 8 is a process flow diagram illustrating a method for implementing profile guided indirect jump checking according to an embodiment.[0023] FIG. 9 is a component block diagram illustrating an example mobile computing device suitable for use with the various embodiments.[0024] FIG. 10 is a component block diagram illustrating an example mobile computing device suitable for use with the various embodiments.[0025] FIG. 11 is a component block diagram illustrating an example server suitable for use with the various embodiments. DETAILED DESCRIPTION[0026] The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the claims.[0027] Various embodiments may include methods, and systems and devices implementing such methods for improving control flow integrity security using smaller, more fine-grain, and rated tables of legitimate indirect jump/branch targets (profile guided indirect jump tables) used for profile guided indirect function call checks. 
The apparatus and methods of the various embodiments may include using profile data to identify legitimate indirect jump/branch targets ("jump targets") for identified indirect jump/branch locations ("jump locations"), using statistical analysis to rate the tables for each indirect jump location, and determining from the profile guided indirect jump tables whether to execute or abort an application.[0028] The terms "computing device" and "mobile computing device" are used interchangeably herein to refer to any one or all of cellular telephones, smartphones, personal or mobile multi-media players, personal data assistants (PDAs), laptop computers, tablet computers, convertible laptops/tablets (2-in-1 computers), smartbooks, ultrabooks, netbooks, palm-top computers, wireless electronic mail receivers, multimedia Internet enabled cellular telephones, mobile gaming consoles, wireless gaming controllers, and similar personal electronic devices that include a memory and a programmable processor. The term "computing device" may further refer to stationary computing devices including personal computers, desktop computers, all-in-one computers, workstations, super computers, mainframe computers, embedded computers, servers, home theater computers, and game consoles. [0029] The terms "jump" and "branch" refer to the control flow instructions that may direct execution of a program to an instruction at a designated address, either directly using the designated address or indirectly using a reference to a location storing the designated address. For clarity and brevity of explanation, the terms "jump" and "branch" are used interchangeably herein. Use of one of the terms "jump" and "branch" in place of the other is nonlimiting as the disclosures herein may apply equally to both jump instructions and branch instructions.[0030] Tables of legitimate indirect jump targets are generally large tables including the indirect jump locations associated with indirect jump targets for a program. Profiling data of offline analysis of a program may be used to generate multiple profile guided indirect jump tables of smaller size. For example, each profile guided indirect jump table may be created for as few as a single indirect jump location and its associated indirect jump targets. A profiler may be implemented to collect indirect jump target traces and frequencies of the indirect jump target traces. The profiler may use this information to profile a program with representative training inputs. For example, for indirect jumps identified to occur at indirect jump location W, the indirect jump target traces may show 10,000 indirect jumps to target T1, 50 indirect jumps to target T2, 9,500 indirect jumps to target T3, and 10 indirect jumps to target T4. For the same program, for indirect jumps identified to occur at indirect jump location Y, the indirect jump target traces may show 1,500 indirect jumps to target T7, 1,450 indirect jumps to target T8, and 1,500 indirect jumps to target T9. The profiling data used to build the profile guided indirect jump tables may be collected over numerous executions of the program.[0031] Using the profiling data alone to build the profile guided indirect jump tables may result in too many false positives (i.e., an instruction thought to be illegal that actually is correct) if not all of the indirect jump targets for the indirect jump locations are identified.
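As a non-limiting illustration of the trace aggregation described in paragraph [0030] above, the following Python sketch counts, for each indirect jump location, how often each indirect jump target was observed. The record format, function name, and data structures are hypothetical and are not taken from the disclosure; the counts simply reproduce the example frequencies given for indirect jump locations W and Y.

    from collections import Counter, defaultdict

    # Hypothetical trace record format: (indirect jump location, indirect jump target).
    trace = (
        [("W", "T1")] * 10000 + [("W", "T2")] * 50 +
        [("W", "T3")] * 9500 + [("W", "T4")] * 10 +
        [("Y", "T7")] * 1500 + [("Y", "T8")] * 1450 + [("Y", "T9")] * 1500
    )

    def profile_indirect_jumps(trace):
        """Aggregate raw trace records into per-location target frequency counts."""
        frequencies = defaultdict(Counter)
        for location, target in trace:
            frequencies[location][target] += 1
        return frequencies

    frequencies = profile_indirect_jumps(trace)
    # frequencies["W"]["T1"] == 10000
    # dict(frequencies["Y"]) == {"T7": 1500, "T8": 1450, "T9": 1500}

Per-location frequency counts of this kind are the raw material for the statistical analysis discussed next.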
To reduce such false positives, statistical analysis of the profiling data for each indirect jump location may be done to assign a confidence level to the profile guided indirect jump tables including the profiling data for the different indirect jump locations. Statistical analysis of the profiling data for each indirect jump location may be used to identify whether an identified indirect jump target is more likely than other identified indirect jump targets using various metrics, whether multiple indirect jump targets are significant indirect jump targets, and/or whether a tail of the indirect jump targets is long. For profiling data of a first indirect jump location having a dominant indirect jump target and/or a short or no tail, a first profile guided indirect jump table for the first indirect jump location may be assigned a high level of confidence. For profiling data of a second indirect jump location having multiple significant indirect jump targets and/or a long tail, a second profile guided indirect jump table for the second indirect jump location may be assigned a low level of confidence. The high and low levels of confidence may indicate the likelihood of whether an indirect jump target from an indirect jump location is a legitimate indirect jump target relative to the metric used to determine the confidence levels. For example, the metric may be set such that a high level of confidence indicates that the likelihood of a legitimate indirect jump target is greater than a 50% chance, and a low level of confidence indicates that the likelihood of a legitimate indirect jump target is less than a 50% chance.[0032] At runtime, the profile guided indirect jump tables may be used in conjunction with a coarse-grain, compiler guided indirect jump table (as described in the background) to determine whether an indirect jump target is legitimate. Upon encountering an indirect jump location in an executing program, a check of the profile guided indirect jump table for the indirect jump location may be executed to determine whether the indirect jump target for the indirect jump location is in the profile guided indirect jump table. In response to determining that the indirect jump target for the indirect jump location matches an indirect jump target in the profile guided indirect jump table for the indirect jump location, the processor may continue normal execution of the program, including the indirect jump. In response to determining that the indirect jump target for the indirect jump location does not match an indirect jump target in the profile guided indirect jump table for the indirect jump location, the processor may determine whether the profile guided indirect jump table is a high confidence (or low confidence) profile guided indirect jump table. In response to determining that the profile guided indirect jump table is a high confidence (or is not a low confidence) indirect jump table, the processor may abort execution of the program. In response to determining that the profile guided indirect jump table is not a high confidence (or is a low confidence) profile guided indirect jump table, the processor may execute a check of the compiler guided indirect jump table to determine whether the indirect jump target for the indirect jump location is in the compiler guided indirect jump table.
In response to determining that the indirect jump target for the indirect jump location is in the compiler guided indirect jump table, the processor may continue normal execution of the program, including the indirect jump, though with a warning. In response to determining that the indirect jump target for the indirect jump location is not in the compiler guided indirect jump table, the processor may abort execution of the program.[0033] FIG. 1 illustrates a system including a computing device 10 suitable for use with the various embodiments. The computing device 10 may include a system-on-chip (SoC) 12 with a processor 14, a memory 16, a communication interface 18, and a storage memory interface 20. The computing device 10 may further include a communication component 22, such as a wired or wireless modem, a storage memory 24, and an antenna 26 for establishing a wireless communication link. The processor 14 may include any of a variety of processing devices, for example a number of processor cores.[0034] The term "system-on-chip" (SoC) is used herein to refer to a set of interconnected electronic circuits typically, but not exclusively, including a processing device, a memory, and a communication interface. A processing device may include a variety of different types of processors 14 and processor cores, such as a general purpose processor, a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), an accelerated processing unit (APU), an auxiliary processor, a single-core processor, and a multicore processor. A processing device may further embody other hardware and hardware combinations, such as a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), other programmable logic device, discrete gate logic, transistor logic, performance monitoring hardware, watchdog hardware, and time references. Integrated circuits may be configured such that the components of the integrated circuit reside on a single piece of semiconductor material, such as silicon.
These memories 16 may be configured to temporarily hold a limited amount of data received from a data sensor or subsystem, data and/or processor-executable code instructions that are requested from non-volatile memory, loaded to the memories 16 from non-volatile memory in anticipation of future access based on a variety of factors, and/or intermediary processing data and/or processor-executable code instructions produced by the processor 14 and temporarily stored for future quick access without being stored in non-volatile memory.[0037] The memory 16 may be configured to store data and processor-executable code, at least temporarily, that is loaded to the memory 16 from another memory device, such as another memory 16 or storage memory 24, for access by one or more of the processors 14. The data or processor-executable code loaded to the memory 16 may be loaded in response to execution of a function by the processor 14. Loading the data or processor-executable code to the memory 16 in response to execution of a function may result from a memory access request to the memory 16 that is unsuccessful, or a "miss," because the requested data or processor-executable code is not located in the memory 16. In response to a miss, a memory access request to another memory 16 or storage memory 24 may be made to load the requested data or processor-executable code from the other memory 16 or storage memory 24 to the memory device 16. Loading the data or processor-executable code to the memory 16 in response to execution of a function may result from a memory access request to another memory 16 or storage memory 24, and the data or processor-executable code may be loaded to the memory 16 for later access.[0038] The storage memory interface 20 and the storage memory 24 may work in unison to allow the computing device 10 to store data and processor-executable code on a non-volatile storage medium. The storage memory 24 may be configured much like an embodiment of the memory 16 in which the storage memory 24 may store the data or processor-executable code for access by one or more of the processors 14. The storage memory 24, being non-volatile, may retain the information after the power of the computing device 10 has been shut off. When the power is turned back on and the computing device 10 reboots, the information stored on the storage memory 24 may be available to the computing device 10. The storage memory interface 20 may control access to the storage memory 24 and allow the processor 14 to read data from and write data to the storage memory 24. [0039] Some or all of the components of the computing device 10 may be arranged differently and/or combined while still serving the functions of the various embodiments. The computing device 10 may not be limited to one of each of the components, and multiple instances of each component may be included in various configurations of the computing device 10.[0040] FIG. 2 illustrates a multicore processor suitable for implementing an embodiment. The multicore processor 14 may include multiple processor types, including, for example, a central processing unit, a graphics processing unit, and/or a digital processing unit. The multicore processor 14 may also include a custom hardware accelerator which may include custom processing hardware and/or general purpose hardware configured to implement a specialized set of functions.[0041] The multicore processor may have a plurality of homogeneous or heterogeneous processor cores 200, 201, 202, 203.
A homogeneous multicore processor may include a plurality of homogeneous processor cores. The processor cores 200, 201, 202, 203 may be homogeneous in that the processor cores 200, 201, 202, 203 of the multicore processor 14 may be configured for the same purpose and have the same or similar performance characteristics. For example, the multicore processor 14 may be a general purpose processor, and the processor cores 200, 201, 202, 203 may be homogeneous general purpose processor cores. The multicore processor 14 may be a graphics processing unit or a digital signal processor, and the processor cores 200, 201, 202, 203 may be homogeneous graphics processor cores or digital signal processor cores, respectively. The multicore processor 14 may be a custom hardware accelerator with homogeneous processor cores 200, 201, 202, 203. For ease of reference, the terms "custom hardware accelerator," "processor," and "processor core" may be used interchangeably herein.[0042] A heterogeneous multicore processor may include a plurality of heterogeneous processor cores. The processor cores 200, 201, 202, 203 may be heterogeneous in that the processor cores 200, 201, 202, 203 of the multicore processor 14 may be configured for different purposes and/or have different performance characteristics. The heterogeneity of such heterogeneous processor cores may include different instruction set architectures, pipelines, operating frequencies, etc. An example of such heterogeneous processor cores may include what are known as "big.LITTLE" architectures in which slower, low-power processor cores may be coupled with more powerful and power-hungry processor cores. In similar embodiments, an SoC (for example, SoC 12 of FIG. 1) may include any number of homogeneous or heterogeneous multicore processors 14. In various embodiments, not all of the processor cores 200, 201, 202, 203 need to be heterogeneous processor cores, as a heterogeneous multicore processor may include any combination of processor cores 200, 201, 202, 203 including at least one heterogeneous processor core.[0043] Each of the processor cores 200, 201, 202, 203 of a multicore processor 14 may be designated a private cache 210, 212, 214, 216 that may be dedicated for read and/or write access by a designated processor core 200, 201, 202, 203. The private cache 210, 212, 214, 216 may store data and/or instructions, and make the stored data and/or instructions available to the processor cores 200, 201, 202, 203, to which the private cache 210, 212, 214, 216 is dedicated, for use in execution by the processor cores 200, 201, 202, 203. The private cache 210, 212, 214, 216 may include volatile memory as described herein with reference to memory 16 of FIG. 1.[0044] The multicore processor 14 may further include a shared cache 230 that may be configured for read and/or write access by the processor cores 200, 201, 202, 203. The shared cache 230 may store data and/or instructions, and make the stored data and/or instructions available to the processor cores 200, 201, 202, 203, for use in execution by the processor cores 200, 201, 202, 203. The shared cache 230 may also function as a buffer for data and/or instructions input to and/or output from the multicore processor 14. The shared cache 230 may include volatile memory as described herein with reference to memory 16 of FIG. 1. [0045] In the example illustrated in FIG.
2, the multicore processor 14 includes four processor cores 200, 201, 202, 203 (i.e., processor core 0, processor core 1, processor core 2, and processor core 3). In the example, each processor core 200, 201, 202, 203 is designated a respective private cache 210, 212, 214, 216 (i.e., processor core 0 and private cache 0, processor core 1 and private cache 1, processor core 2 and private cache 2, and processor core 3 and private cache 3). For ease of explanation, the examples herein may refer to the four processor cores 200, 201, 202, 203 and the four private caches 210, 212, 214, 216 illustrated in FIG. 2. However, the four processor cores 200, 201, 202, 203 and the four private caches 210, 212, 214, 216 illustrated in FIG. 2 and described herein are merely provided as an example and in no way are meant to limit the various embodiments to a four-core processor system with four designated private caches. The computing device 10, the SoC 12, or the multicore processor 14 may individually or in combination include fewer or more than the four processor cores 200, 201, 202, 203 and private caches 210, 212, 214, 216 illustrated and described herein.[0046] FIG. 3 illustrates an example embodiment of an indirect jump profiling system 300 configured to generate profile guided indirect jump tables 310a, 310b, 310c. The indirect jump profiling system 300 may provide input data 302 to an indirect jump profiler 304. The input data 302 may include data relating to traces of indirect jumps from indirect jump locations to indirect jump targets. The input data 302 may also include further trace data of the instructions executed at the indirect jump target and subsequent instructions executed as a result of the indirect jump, including types and/or numbers of instructions executed. The input data 302, including the trace data of the instructions executed, may indicate information regarding a tail of an indirect jump.[0047] In various embodiments, the input data 302 may be gathered during multiple offline testing runs of a program, and provided to the indirect jump profiler 304 in various manners, including individually, in batches, and/or as a whole, either over time or at once. In various embodiments, the input data 302 may be gathered during runtime executions of the program on a computing device (e.g., computing device 10 in FIG. 1), and provided to the indirect jump profiler 304 at, during, and/or after execution of the program. In various embodiments, the input data 302 gathered during runtime executions of the program may be used to build a profile guided indirect jump table 310a, 310b, 310c, and/or to update profile guided indirect jump tables 310a, 310b, 310c built using input data 302 gathered during offline program runs and/or during runtime program runs.[0048] The indirect jump profiler 304 may analyze the input data 302 to generate profiling results 306 that may identify indirect jump targets associated with indirect jump locations and frequencies of the indirect jump targets for the indirect jump locations. The indirect jump profiler 304 may associate an indirect jump location with its indirect jump target(s) and the frequency of the occurrence of the indirect jump target(s).[0049] The example illustrated in FIG. 3 includes four (4) indirect jump locations, W, X, Y, and Z. Each of the indirect jump locations may be associated with its indirect jump target(s) as identified by the indirect jump profiler 304 from the trace data of the input data 302. The example illustrated in FIG.
3 includes indirect jump location W associated with indirect jump targets T1, T2, T3, and T4; indirect jump location X associated with indirect jump targets T5 and T6; indirect jump location Y associated with indirect jump targets T7, T8, and T9; and indirect jump location Z associated with indirect jump targets T10 and T11.[0050] Each of the indirect jump targets may be associated with its frequency as an indirect jump target of an associated indirect jump location as identified by the indirect jump profiler 304 from the trace data of the input data 302. The example illustrated in FIG. 3 includes indirect jump target T1 as an indirect jump target of indirect jump location W 10,000 times, indirect jump target T2 50 times, indirect jump target T3 9,500 times, and indirect jump target T4 10 times; indirect jump target T5 as an indirect jump target of indirect jump location X 4,000 times and indirect jump target T6 100 times; indirect jump target T7 as an indirect jump target of indirect jump location Y 1,500 times, indirect jump target T8 1,450 times, and indirect jump target T9 1,500 times; and indirect jump target T10 as an indirect jump target of indirect jump location Z 3,500 times and indirect jump target T11 3,250 times.[0051] The indirect jump profiler 304 may also analyze the input data 302 to generate profiling results 306 that may identify lengths of tails of indirect jump targets associated with indirect jump locations. Each and/or the longest length of a tail of an indirect jump target associated with an indirect jump location may be associated with the indirect jump location as identified by the indirect jump profiler 304 from the trace data of the input data 302. The example illustrated in FIG. 3 includes the length of the longest tail associated with an indirect jump target associated with an indirect jump location. The example illustrated in FIG. 3 includes a longest tail with a length of 15 instructions associated with indirect jump location W; a longest tail with a length of 350 instructions associated with indirect jump location X; a longest tail with a length of 60 instructions associated with indirect jump location Y; and a longest tail with a length of 120 instructions associated with indirect jump location Z.[0052] The indirect jump profiler 304 may include a confidence analyzer 308 capable of analyzing the profiling results 306 for assigning a confidence level for an indirect jump target associated with an indirect jump location. In various embodiments, the confidence analyzer 308 may use various forms of mathematical analysis to determine whether profiling results 306 result in high or low confidence levels for an indirect jump target associated with an indirect jump location. The confidence analyzer 308 may determine a confidence level for individual pairings of an indirect jump target associated with an indirect jump location, and/or groups of pairings of multiple indirect jump targets associated with an indirect jump location. The confidence levels may be determined for the pairings based on individual analysis of the profiling results 306 for each pairing, and/or based on analysis of the profiling results 306 in relation to the profiling results 306 of other pairings for the same and/or other indirect jump locations. The confidence analyzer 308 may analyze the frequency of the indirect jump target(s) associated with the indirect jump location(s) to determine a confidence level.
For example, the confidence analyzer 308 may use absolute and/or relative thresholds and/or ratios, such as comparing a frequency value of a pairing against an absolute frequency value and/or a relative frequency value of an average and/or total frequency value of multiple pairings. In another example, the confidence analyzer 308 may use probabilities, such as likelihood of a pairing occurring with respect to another pairing(s). In another example, the confidence analyzer 308 may use predefined rules relating to the number of pairings and their frequencies in relation to each other.[0053] In general, the confidence analyzer 308 may assign a high confidence level for pairings of an indirect jump location and at least one indirect jump target, in response to the analysis of profiling results 306 determining that a minority of pairings for the indirect jump location is more likely to occur than a majority of pairings by at least a certain measure. Similarly, the determination may be that the majority of pairings for the indirect jump location is less likely to occur than the minority of pairings by at least a certain measure. Conversely, the confidence analyzer 308 may assign a low confidence level for pairings of an indirect jump location and at least one indirect jump target, in response to the analysis of profiling results 306 determining that the majority of pairings for the indirect jump location is more likely to occur than the minority of pairings by at least a certain measure. Similarly, the determination may be that the minority of pairings for the indirect jump location is less likely to occur than the majority of pairings by at least a certain measure. In these examples, majority and minority may also be replaced by equal numbers. Whether a majority/minority or equal numbers are used, and the relative sizes of the majority and minority, may depend on a total number of pairings of indirect jump targets and an indirect jump location. For example, a small number of pairings may use equal numbers or near equal numbers for the relative sizes of the majority and minority. As the number of pairings increases, the difference between the relative sizes of the majority and minority may become more pronounced.[0054] In the example illustrated in FIG. 3, the indirect jump targets associated with indirect jump location W may be assigned a high confidence level because the frequency of a small number of indirect jump targets indicates a greater likelihood of those indirect jump targets occurring than the rest of the indirect jump targets. The frequency of indirect jump target T1 is illustrated as 10,000 times and the frequency of indirect jump target T3 is illustrated as 9,500 times, while the frequency of indirect jump target T2 is illustrated as 50 times and the frequency of indirect jump target T4 is illustrated as 10 times. The likelihood of indirect jump target T1 and indirect jump target T3 is greater by a certain measure than the likelihood of indirect jump target T2 and indirect jump target T4. The indirect jump targets associated with indirect jump location X may be assigned a high confidence level for similar reasons.[0055] In the example illustrated in FIG. 3, the indirect jump targets associated with indirect jump location Y may be assigned a low confidence level because the frequency of a majority of the number of indirect jump targets indicates greater likelihood of those indirect jump targets occurring than the rest of the indirect jump targets.
The frequency of indirect jump target T7 is illustrated as 1,500 times, the frequency of indirect jump target T8 is illustrated as 1,450 times, and the frequency of indirect jump target T9 is illustrated as 1,500 times. No one of, or minority of, the indirect jump targets T7, T8, and T9 is more likely to occur than the remaining indirect jump targets by at least a certain measure. The indirect jump targets associated with indirect jump location Z may be assigned a low confidence level for similar reasons.[0056] In various embodiments, the confidence analyzer 308 may also use a length of a tail of at least one of the indirect jump targets associated with an indirect jump location in determining whether to assign a high or low confidence level. The confidence analyzer 308 may analyze the number of instruction executions following each indirect jump target associated with an indirect jump location to determine a confidence level. Based on analysis of the profiling results 306 for each pairing of the indirect jump targets associated with the indirect jump location, the confidence analyzer 308 may determine whether any of the pairings has a long tail.[0057] Determination of a long tail may be based on a comparison of the number of execution instructions to various absolute and/or relative metrics, including predetermined values or thresholds, calculated values, average values, total values, ratio values, and percentage values. A number of execution instructions following an indirect jump exceeding a designated metric may be determined to be a long tail for the indirect jump target associated with the indirect jump location.[0058] In the example illustrated in FIG. 3, the indirect jump targets associated with indirect jump location X may be assigned a low confidence level based on having a long tail regardless of the high confidence level that could have been assigned based on indirect jump target T5 being more likely to occur by a certain measure than indirect jump target T6. The longest tail of either indirect jump target T5 or indirect jump target T6 when associated with indirect jump location X is illustrated as 350 instructions. The metric for designating a tail as long may be such that 350 instructions is a long tail. The confidence analyzer 308 may assign a low confidence level to the indirect jump targets associated with indirect jump location X based on the determination of a long tail for the indirect jump targets when associated with indirect jump location X. The indirect jump targets associated with indirect jump location Z may be assigned a low confidence level for similar reasons.[0059] Conversely, in the example illustrated in FIG. 3, the longest tail for the indirect jump targets associated with indirect jump location W is 15 instructions, and the longest tail for the indirect jump targets associated with indirect jump location Y may be 60 instructions. In either of these examples, the metric for designating a tail as long may be such that 15 or 60 instructions is not a long tail (or is a short tail). As a result, the confidence level assigned based on analysis of the frequency of the indirect jump target(s) associated with the indirect jump locations may remain.[0060] The indirect jump profiler 304 may generate indirect jump tables 310a, 310b, 310c. The indirect jump tables 310a, 310b, 310c may be generated in various forms, as discussed further herein with reference to FIGS. 5A-5C.
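As one non-limiting way to picture the frequency and tail analysis just described, the Python sketch below rates each indirect jump location of the FIG. 3 example using two assumed metrics: the smallest group of most-frequent targets covering at least 90% of the observed jumps must be no more than half of the targets (a dominance check), and the longest observed tail must be under 100 instructions. The thresholds, names, and data layout are illustrative assumptions rather than values from the disclosure; with them, indirect jump location W rates high confidence while X, Y, and Z rate low confidence, matching the discussion above.

    def rate_location(target_counts, longest_tail, share_needed=0.90, tail_limit=100):
        """Assign a hypothetical confidence level to one indirect jump location.

        High confidence: at most half of the observed targets covers at least
        `share_needed` of all observed jumps, and the longest tail is shorter
        than `tail_limit` instructions. Otherwise low confidence.
        """
        total = sum(target_counts.values())
        covered, top_targets = 0, 0
        for count in sorted(target_counts.values(), reverse=True):
            covered += count
            top_targets += 1
            if covered / total >= share_needed:
                break
        dominant = top_targets <= len(target_counts) / 2
        return "high" if dominant and longest_tail < tail_limit else "low"

    # (target frequencies, longest observed tail) per indirect jump location, per FIG. 3.
    profiling_results = {
        "W": ({"T1": 10000, "T2": 50, "T3": 9500, "T4": 10}, 15),
        "X": ({"T5": 4000, "T6": 100}, 350),
        "Y": ({"T7": 1500, "T8": 1450, "T9": 1500}, 60),
        "Z": ({"T10": 3500, "T11": 3250}, 120),
    }

    rated_tables = {
        location: {"targets": set(counts), "confidence": rate_location(counts, tail)}
        for location, (counts, tail) in profiling_results.items()
    }
    # rated_tables["W"]["confidence"] == "high"; X, Y, and Z are "low".

A mapping of this shape, with one entry per indirect jump location and an attached confidence level, corresponds loosely to the combined table form of FIG. 5C discussed below.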
The indirect jump tables 310a, 310b, 310c may indicate an association of an indirect jump location and at least one indirect jump target with a confidence level for the associated indirect jump targets. In the example illustrated in FIG. 3, an indirect jump table 310a, 310b, 310c, 310d may be generated for each of the indirect jump locations W (indirect jump table 310a), X (indirect jump table 310b), Y (indirect jump table 310c), and Z (indirect jump table 310d). Corresponding to the profiling results 306, the indirect jump table 310a may indicate an association between the indirect jump location W and the indirect jump targets T1, T2, T3, and T4; the indirect jump table 310b may indicate an association between the indirect jump location X and the indirect jump targets T5 and T6; the indirect jump table 310c may indicate an association between the indirect jump location Y and the indirect jump targets T7, T8, and T9; and the indirect jump table 310d may indicate an association between the indirect jump location Z and the indirect jump targets T10 and T11. Corresponding to the analyses of the profiling results 306 by the confidence analyzer 308, the indirect jump table 310a may indicate a high confidence level, and the indirect jump tables 310b, 310c, 310d may indicate a low confidence level.[0061] FIG. 4 illustrates an example embodiment of a compiler guided indirect jump table 400. At compile time, a compiler (not shown) executed by a processor (e.g., processor 14 in FIGS. 1 and 2) may analyze a program code and generate the compiler guided indirect jump table 400. The compiler guided indirect jump table 400 may include a column for indirect jump locations 402 and a column for indirect jump targets 404. Each row (or entry) 406, 408, 410, 412, 414 of the compiler guided indirect jump table 400 may indicate an association of an indirect jump location and at least one indirect jump target. The compiler guided indirect jump table 400 may include as many rows 406, 408, 410, 412, 414 as indirect jump locations identified during compilation of the program code. The compiler may not identify all of the indirect jump targets for an indirect jump location, for example, because some of the indirect jump targets may be variable based on conditions during execution of the program code.[0062] FIGS. 5A-5C illustrate examples of profile guided indirect jump tables 500, 502a, 502b, 502c, 520, 522, 524 (e.g., profile guided indirect jump tables 310a, 310b, 310c, 310d in FIG. 3) suitable for use with various embodiments. These example tables continue from the example illustrated in FIG. 3, including the profiling results (e.g., profiling results 306 in FIG. 3) and the confidence levels.[0063] The profile guided indirect jump tables 500, 502a, 502b, 502c, 520, 522, 524 may include a column for indirect jump locations 402 and a column for indirect jump targets 404. Each row (or entry) 506, 508, 510, 512 of the profile guided indirect jump tables 500, 502a, 502b, 502c, 520, 522, 524 may indicate an association of an indirect jump location and at least one indirect jump target. The row 506 may indicate the associations of indirect jump location W with the jump targets T1, T2, T3, and T4. The row 508 may indicate the associations of indirect jump location X with the jump targets T5 and T6. The row 510 may indicate the associations of indirect jump location Y with the jump targets T7, T8, and T9.
The row 512 may indicate the associations of indirect jump location Z with the jump targets T10 and T11.[0064] In various embodiments, a confidence level may be assigned to a profile guided indirect jump table 500, 502a, 502b, 502c, 520, 522, such that each of the indirect jump locations in the profile guided indirect jump table 500, 502a, 502b, 502c, 520, 522 has the same confidence level. The confidence level of each profile guided indirect jump table 500, 502a, 502b, 502c, 520, 522 may be identified by metadata or by a storage location designated for profile guided indirect jump tables 500, 502a, 502b, 502c, 520, 522 having a designated confidence level. In various embodiments, the profile guided indirect jump tables 500, 502a, 502b, 502c, 520, 522, 524 may include a column for confidence levels 504. In various embodiments including the column for confidence levels 504, the rows 506, 508, 510, 512 may further indicate an association of a confidence level for an indirect jump location.[0065] FIG. 5A illustrates example embodiments of profile guided indirect jump tables 500, 502a, 502b, 502c each dedicated for a single indirect jump location. In the example illustrated in FIG. 5A, the profile guided indirect jump table 500 may be dedicated to indirect jump location W. In various embodiments, the profile guided indirect jump table 500 may be designated as having a high confidence level. In various embodiments, the profile guided indirect jump table 500 may include the column for confidence levels 504 indicating a high confidence level. Similarly, the profile guided indirect jump tables 502a, 502b, 502c may be dedicated to indirect jump locations X, Y, and Z, respectively. In various embodiments, the profile guided indirect jump tables 502a, 502b, 502c may be designated as having a low confidence level. In various embodiments, the profile guided indirect jump tables 502a, 502b, 502c may include the column for confidence levels 504 indicating a low confidence level.[0066] FIG. 5B illustrates example embodiments of profile guided indirect jump tables 520, 522. The profile guided indirect jump tables 520, 522 may be dedicated for a single confidence level. In the example illustrated in FIG. 5B, the profile guided indirect jump table 520 may be dedicated to indirect jump locations with high confidence levels. In various embodiments, the profile guided indirect jump table 520 may be designated as having a high confidence level. In various embodiments, the profile guided indirect jump table 520 may include the column for confidence levels 504 indicating a high confidence level for each row 506 in the profile guided indirect jump table 520. In the example illustrated in FIG. 5B, the profile guided indirect jump table 520 may include the row 506 for high confidence indirect jump location W. Similarly, the profile guided indirect jump table 522 may be dedicated to indirect jump locations with low confidence levels. In various embodiments, the profile guided indirect jump table 522 may be designated as having a low confidence level. In various embodiments, the profile guided indirect jump table 522 may include the column for confidence levels 504 indicating a low confidence level for each row 508, 510, 512 in the profile guided indirect jump table 522. In the example illustrated in FIG. 5B, the profile guided indirect jump table 522 may include the rows 508, 510, 512 for low confidence indirect jump locations X, Y, and Z.[0067] FIG.
5C illustrates an example embodiment of a profile guided indirect jump table 524. The profile guided indirect jump table 524 may include some or all of the indirect jump locations for a program analyzed by an indirect jump profiler (e.g., indirect jump profiler 304 in FIG. 3). In the example illustrated in FIG. 5C, the profile guided indirect jump table 524 may include the column for confidence levels 504 indicating a high or low confidence level for each row 506, 508, 510, 512 in the profile guided indirect jump table 524. In the example illustrated in FIG. 5C, the profile guided indirect jump table 524 may indicate a high confidence level for indirect jump location W in the row 506, and may indicate a low confidence level for indirect jump locations X in the row 508, Y in the row 510, and Z in the row 512.[0068] As noted herein, the examples illustrated in FIGS. 5A-5C continue the example illustrated in FIG. 3. FIGS. 3 and 5A-5C illustrate non-limiting examples of profile guided indirect jump tables. The examples illustrated and described herein, particularly with reference to those of and relating to FIGS. 3 and 5A-5C, are non-limiting. The profiling results may include any number of indirect jump locations associated with any number of indirect jump targets. The indirect jump targets may be associated with more than one indirect jump location. The frequencies, tail lengths, and certain measures and metrics for determining confidence levels may also be any number. The profile guided indirect jump tables and their rows may vary for various programs. An indirect jump profiling system (e.g., indirect jump profiling system 300 in FIG. 3) may generate any combination of profile guided indirect jump tables, such as any combination of the types of profile guided indirect jump tables in the examples illustrated in FIGS. 5A-5C.[0069] FIG. 6 illustrates a method 600 for implementing indirect jump profiling according to an embodiment. The method 600 may be implemented in a computing device in software executing in a processor (e.g., the processor 14 in FIGS. 1 and 2), in general purpose hardware, in dedicated hardware, or in a combination of a software-configured processor and dedicated hardware, such as a processor executing software within an indirect jump profiling system (e.g., indirect jump profiling system 300 in FIG. 3) that includes other individual components. In order to encompass the alternative configurations enabled in the various embodiments, the hardware implementing the method 600 is referred to herein as a "processing device."[0070] In block 602, the processing device may encounter an indirect jump in an executing program.[0071] In block 604, the processing device may trace the execution of the indirect jump to an indirect jump target. In some embodiments, the processing device may continue to trace the execution beyond the indirect jump target and trace the execution of subsequent program instructions.[0072] In block 606, the processing device may receive indirect jump input data. The indirect jump input data may include indirect jump input data gathered during multiple offline program runs and/or during a runtime program run, and may be received as individual data of a single program run, in batches of multiple program runs, and/or in a group of all of the program runs.
The indirect jump input data may include data from the program trace, including indirect jump locations, indirect jump targets, and executed instructions following the indirect jump targets.[0073] In block 608, the processing device may identify an indirect jump location. The processing device may select at least one indirect jump location from the indirect jump input data. [0074] In determination block 610, the processing device may determine whether an entry exists for the indirect jump location in a profile guided indirect jump table. The processing device may search various existing profile guided indirect jump tables to determine whether any entry may be found in any of the profile guided indirect jump tables. In various embodiments, determination block 610 may be optionally implemented for updating existing profile guided indirect jump tables. In various embodiments, determination block 610 may be optionally implemented for offline and/or runtime runs of the program.[0075] Following identification of the indirect jump location in block 608, or in response to determining that an entry does not exist for the indirect jump location in a profile guided indirect jump table (i.e., determination block 610 = "No"), the processing device may associate the selected indirect jump location and the indirect jump target in block 612. The processing device may identify which indirect jump targets to associate with an indirect jump location from the trace data of the indirect jump input data showing the instructions at the indirect jump target executed after the indirect jump from the indirect jump location.[0076] In block 614, the processing device may assign a confidence level for the indirect jump location, as described further herein with reference to FIGS. 3 and 7. Assigning a confidence level may be optionally implemented for an indirect jump profiling system and/or processing device using confidence levels.[0077] In block 616, the processing device may create a profile guided indirect jump table and/or profile guided indirect jump table entry for the indirect jump location. The creation of the profile guided indirect jump table and/or profile guided indirect jump table entry may include using the associated indirect jump location and the indirect jump target. In various embodiments, the profile guided indirect jump table may be created in a manner designating the profile guided indirect jump table with a confidence level associated with the indirect jump location. In various embodiments, the entry created in a profile guided indirect jump table may include a confidence level associated with the indirect jump location. In various embodiments, the entry created in a profile guided indirect jump table may be created in a table designated with a confidence level associated with the indirect jump location. In various embodiments, creating a profile guided indirect jump table may include creating an entry in the profile guided indirect jump table.[0078] Each of blocks 618-620 may be optional, as is determination block 610, for updating existing profile guided indirect jump tables. In response to determining that an entry does exist for the indirect jump location in a profile guided indirect jump table (i.e., determination block 610 = "Yes"), the processing device may retrieve indirect jump data for the selected indirect jump location in block 618.
In various embodiments, retrieving indirect jump data for the selected indirect jump location may include retrieving stored indirect jump input data for the selected indirect jump location from previous offline and/or runtime runs of the program. The indirect jump data may be retrieved from a memory (e.g., memory 16, 24 in FIG. 1).[0079] In block 620, the processing device may associate the selected indirect jump location and the indirect jump target in a manner similar to block 612.[0080] In block 622, the processing device may assign a confidence level for the indirect jump location, as described further herein with reference to FIGS. 3 and 7. Assigning a confidence level may be optionally implemented for an indirect jump profiling system and/or processing device using confidence levels.[0081] In block 624, the processing device may update a profile guided indirect jump table and/or profile guided indirect jump table entry for the indirect jump location. Updating a profile guided indirect jump table and/or profile guided indirect jump table entry may include editing the information of an entry in a profile guided indirect jump table, including associations of the indirect jump location with an indirect jump target and/or a confidence level. In various embodiments, updating a profile guided indirect jump table and/or profile guided indirect jump table entry may include deleting and/or adding an entry to at least one profile guided indirect jump table. In various embodiments, updating a profile guided indirect jump table may include editing a designation of a confidence level for the profile guided indirect jump table.[0082] In various embodiments, blocks 608-622 may be repeated and/or various implementations of the blocks may be run in parallel to profile and create profile guided indirect jump tables for all of the indirect jump input data.[0083] FIG. 7 illustrates a method 700 for implementing indirect jump profiling according to an embodiment. The method 700 may be implemented in a computing device in software executing in a processor (e.g., the processor 14 in FIGS. 1 and 2), in general purpose hardware, in dedicated hardware, or in a combination of a software-configured processor and dedicated hardware, such as a processor executing software within an indirect jump profiling system (e.g., indirect jump profiling system 300 in FIG. 3) that includes other individual components. In order to encompass the alternative configurations enabled in the various embodiments, the hardware implementing the method 700 is referred to herein as a "processing device." In various embodiments, the method 700 may include operations of blocks 614, 622 of the method 600.[0084] In block 702, the processing device may associate the selected indirect jump location and a length of a tail following an indirect jump to an indirect jump target. The processing device may determine the length of the tail from the trace data of the indirect jump input data showing the instructions executed after the indirect jump from the indirect jump location to the indirect jump target.
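Because the disclosure does not prescribe where a tail ends, the following Python sketch makes one simplifying assumption purely for illustration: a tail runs from an indirect jump until the next indirect jump or return appears in the trace. The record format and names are hypothetical.

    def longest_tails(trace):
        """Return, per indirect jump location, the longest run of instructions
        observed after any indirect jump taken from that location."""
        tails = {}
        current_location, run = None, 0
        for kind, value in trace:
            if kind == "insn" and current_location is not None:
                run += 1
            else:
                # An indirect jump or a return ends any open tail.
                if current_location is not None:
                    tails[current_location] = max(tails.get(current_location, 0), run)
                current_location = value[0] if kind == "jump" else None
                run = 0
        if current_location is not None:
            tails[current_location] = max(tails.get(current_location, 0), run)
        return tails

    # Hypothetical trace entries: ("jump", (location, target)), ("insn", None), ("ret", None).
    example = ([("jump", ("W", "T1"))] + [("insn", None)] * 15 + [("ret", None)] +
               [("jump", ("X", "T5"))] + [("insn", None)] * 350 + [("ret", None)])
    # longest_tails(example) == {"W": 15, "X": 350}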
Associating a length of a tail with an indirect jump location may be optionally implemented for an indirect jump profiling system and/or processing device using length of a tail for assigning confidence levels.[0085] In determination block 704, the processing device may determine whether a frequency of an indirect jump from an indirect jump location to an indirect jump target exceeds a threshold. As discussed herein with reference to FIG. 3, the threshold may be expressed in a variety of forms, including a threshold of at least one indirect jump target relative to other indirect jump targets. In various embodiments, further comparisons may be made to determine whether a certain number of indirect jump targets exceed the threshold and/or a comparison of the number of indirect jump targets that exceed the threshold to a number of indirect jump targets that do not exceed the threshold.[0086] In response to determining that a frequency of an indirect jump from an indirect jump location to an indirect jump target exceeds a threshold (i.e., determination block 704 = "Yes"), the processing device may determine whether a length of a tail of any indirect jump from an indirect jump location to an indirect jump target exceeds a threshold in determination block 706. As discussed herein with reference to FIG. 3, the threshold may be expressed in a variety of forms. Determining whether a length of a tail exceeds a threshold may be optionally implemented for an indirect jump profiling system and/or processing device using length of a tail for assigning confidence levels.[0087] In response to determining that a frequency of an indirect jump from an indirect jump location to an indirect jump target exceeds a threshold (i.e., determination block 704 = "Yes"), or in response to determining that a length of a tail of any indirect jump from an indirect jump location to an indirect jump target does not exceed a threshold (i.e., determination block 706 = "No"), the processing device may output a high confidence indicator for the indirect jump location in block 708.[0088] In response to determining that a frequency of an indirect jump from an indirect jump location to an indirect jump target does not exceed a threshold (i.e., determination block 704 = "No"), or in response to determining that a length of a tail of any indirect jump from an indirect jump location to an indirect jump target exceeds a threshold (i.e., determination block 706 = "Yes"), the processing device may output a low confidence indicator for the indirect jump location in block 710. [0089] The processor may then continue with the operations in blocks 616 or 624 of the method 600 as described with reference to FIG. 6.[0090] FIG. 8 illustrates a method 800 for implementing profile guided indirect jump checking according to an embodiment. The method 800 may be implemented in a computing device in software executing in a processor (e.g., the processor 14 in FIGS. 1 and 2), in general purpose hardware, in dedicated hardware, or in a combination of a software-configured processor and dedicated hardware, such as a processor executing software within an indirect jump profiling system (e.g., indirect jump profiling system 300 in FIG. 3) that includes other individual components. In order to encompass the alternative configurations enabled in the various embodiments, the hardware implementing the method 800 is referred to herein as a "processing device."[0091] In block 802, the processing device may load a compiler guided indirect jump table.
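Block 802 can be pictured as reading a mapping from each statically identified indirect jump location to the set of indirect jump targets the compiler identified, in the spirit of FIG. 4. The file format and loader in the Python sketch below are illustrative assumptions only; the disclosure does not specify how the table is stored or loaded.

    import json

    def load_compiler_guided_table(path):
        """Load a hypothetical compiler-emitted mapping of
        indirect jump location -> set of statically identified indirect jump targets."""
        with open(path) as table_file:
            raw = json.load(table_file)  # e.g., {"W": ["T1", "T3"], "X": ["T5"]}
        return {location: set(targets) for location, targets in raw.items()}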
As discussed herein, the compiler guided indirect jump table may be generated by a compiler run by a processing device for a program code. The compiler may identify the indirect jump locations and associated indirect jump targets available in the program code. But the compiler may not be able to identify all of the indirect jump locations and associated indirect jump targets that may result from execution of the code, as some of the indirect jump targets may be variable based on inputs to and execution of the program at runtime.[0092] In block 804, the processing device may encounter an indirect jump location during runtime of the program. In block 806, the processing device may identify an indirect jump target of the encountered indirect jump location.[0093] In determination block 808, the processing device may determine whether the encountered indirect jump location and the identified indirect jump target match an associated indirect jump location and indirect jump target in a profile guided indirect jump table. The processing device may locate a profile guided indirect jump table and entry having the encountered indirect jump location, and compare the identified indirect jump target with the associated indirect jump targets in the profile guided indirect jump table.[0094] In response to determining that the encountered indirect jump location and the identified indirect jump target match an associated indirect jump location and indirect jump target in a profile guided indirect jump table (i.e., determination block 808 = "Yes"), the processing device may continue execution of the program in block 816.[0095] In response to determining that the encountered indirect jump location and the identified indirect jump target do not match an associated indirect jump location and indirect jump target in a profile guided indirect jump table (i.e., determination block 808 = "No"), the processing device may determine whether the encountered indirect jump location is associated with a high confidence level in determination block 810. In various embodiments, the processing device may retrieve data indicating the confidence level associated with the encountered indirect jump location from the entry for the encountered indirect jump location in the profile guided indirect jump table. In various embodiments, the processing device may identify a designated confidence level of the profile guided indirect jump table having the entry for the encountered indirect jump. Determining whether the encountered indirect jump location is associated with a high confidence level may be optionally implemented for an indirect jump profiling system and/or processing device using confidence levels in checking indirect function calls.[0096] In response to determining that the encountered indirect jump location is not associated with a high confidence level (i.e., determination block 810 = "No"), or in response to determining that the encountered indirect jump location and the identified indirect jump target do not match an associated indirect jump location and indirect jump target in a profile guided indirect jump table (i.e., determination block 808 = "No") when determination block 810 is not performed, the processing device may determine whether the encountered indirect jump location and the identified indirect jump target match an associated indirect jump location and indirect jump target in the compiler guided indirect jump table in determination block 812.
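Taken together, determination blocks 808, 810, and 812 and blocks 814, 816, and 818 form the check sequence summarized earlier. The Python sketch below is one possible rendering of that sequence, assuming the combined, FIG. 5C-style table shape used in the earlier sketches; the handling of a location absent from every profile guided indirect jump table, and the warn and abort hooks, are assumptions for illustration only.

    def check_indirect_jump(location, target, profile_table, compiler_table, warn, abort):
        """Profile guided indirect jump check, roughly following blocks 808-818 of FIG. 8."""
        entry = profile_table.get(location)            # block 808: profile guided table lookup
        if entry is not None and target in entry["targets"]:
            return                                     # block 816: continue execution
        if entry is not None and entry["confidence"] == "high":
            abort()                                    # block 814: abort on a high confidence miss
            return
        if target in compiler_table.get(location, set()):
            warn()                                     # block 818: continue execution with a warning
            return
        abort()                                        # block 814: abort; target not in either table

In this rendering, a location with no profile guided entry simply falls through to the compiler guided check, which is one way of treating the optional determination block 810 described above.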
The processing device may locate an entry in the compiler guided indirect jump table having the encountered indirect jump location, and compare the identified indirect jump target with the associated indirect jump targets in the compiler guided indirect jump table.[0097] In response to determining that the encountered indirect jump location and the identified indirect jump target match an associated indirect jump location and indirect jump target in the compiler guided indirect jump table (i.e., determination block 812 = "Yes"), the processing device may continue execution of the program with a warning in block 818. In various embodiments, the warning may take various forms, including any combination of an audible, visible, and/or tactile warning to a user of a computing device running the program and/or a warning recorded in a log file stored locally on and/or remotely from the computing device running the program.[0098] In response to determining that the encountered indirect jump location is associated with a high confidence level (i.e., determination block 810 = "Yes"), or in response to determining that the encountered indirect jump location and the identified indirect jump target do not match an associated indirect jump location and indirect jump target in the compiler guided indirect jump table (i.e., determination block 812 = "No"), the processing device may abort the program in block 814.[0099] The various embodiments (including, but not limited to, embodiments described above with reference to FIGs. 1-8) may be implemented in a wide variety of computing systems including mobile computing devices, an example of which suitable for use with the various embodiments is illustrated in FIG. 9. The mobile computing device 900 may include a processor 902 coupled to a touchscreen controller 904 and an internal memory 906. The processor 902 may be one or more multicore integrated circuits designated for general or specific processing tasks. The internal memory 906 may be volatile or non-volatile memory, and may also be secure and/or encrypted memory, or unsecure and/or unencrypted memory, or any combination thereof. Examples of memory types that can be leveraged include but are not limited to DDR, LPDDR, GDDR, WIDEIO, RAM, SRAM, DRAM, P-RAM, R-RAM, M-RAM, STT-RAM, and embedded DRAM. The touchscreen controller 904 and the processor 902 may also be coupled to a touchscreen panel 912, such as a resistive-sensing touchscreen, capacitive-sensing touchscreen, infrared sensing touchscreen, etc. Additionally, the display of the computing device 900 need not have touch screen capability.[0100] The mobile computing device 900 may have one or more radio signal transceivers 908 (e.g., Peanut, Bluetooth, Zigbee, Wi-Fi, RF radio) and antennae 910, for sending and receiving communications, coupled to each other and/or to the processor 902. The transceivers 908 and antennae 910 may be used with the above-mentioned circuitry to implement the various wireless transmission protocol stacks and interfaces. The mobile computing device 900 may include a cellular network wireless modem chip 916 that enables communication via a cellular network and is coupled to the processor.[0101] The mobile computing device 900 may include a peripheral device connection interface 918 coupled to the processor 902.
The peripheral device connection interface 918 may be singularly configured to accept one type of connection, or may be configured to accept various types of physical and communication connections, common or proprietary, such as Universal Serial Bus (USB), Fire Wire, Thunderbolt, or PCIe. The peripheral device connection interface 918 may also be coupled to a similarly configured peripheral device connection port (not shown).[0102] The mobile computing device 900 may also include speakers 914 for providing audio outputs. The mobile computing device 900 may also include a housing 920, constructed of a plastic, metal, or a combination of materials, for containing all or some of the components described herein. The mobile computing device 900 may include a power source 922 coupled to the processor 902, such as a disposable or rechargeable battery. The rechargeable battery may also be coupled to the peripheral device connection port to receive a charging current from a source external to the mobile computing device 900. The mobile computing device 900 may also include a physical button 924 for receiving user inputs. The mobile computing device 900 may also include a power button 926 for turning the mobile computing device 900 on and off.[0103] The various embodiments (including, but not limited to, embodiments described above with reference to FIGs. 1-8) may be implemented in a wide variety of computing systems include a laptop computer 1000 an example of which is illustrated in FIG. 10. Many laptop computers include a touchpad touch surface 1017 that serves as the computer's pointing device, and thus may receive drag, scroll, and flick gestures similar to those implemented on computing devices equipped with a touch screen display and described above. A laptop computer 1000 will typically include a processor 1011 coupled to volatile memory 1012 and a large capacity nonvolatile memory, such as a disk drive 1013 of Flash memory. Additionally, the computer 1000 may have one or more antenna 1008 for sending and receiving electromagnetic radiation that may be connected to a wireless data link and/or cellular telephone transceiver 1016 coupled to the processor 1011. The computer 1000 may also include a floppy disc drive 1014 and a compact disc (CD) drive 1015 coupled to the processor 1011. In a notebook configuration, the computer housing includes the touchpad 1017, the keyboard 1018, and the display 1019 all coupled to the processor 1011. Other configurations of the computing device may include a computer mouse or trackball coupled to the processor (e.g., via a USB input) as are well known, which may also be used in conjunction with the various embodiments.[0104] The various embodiments (including, but not limited to, embodiments described above with reference to FIGs. 1-8) may also be implemented in fixed computing systems, such as any of a variety of commercially available servers. An example server 1100 is illustrated in FIG. 11. Such a server 1100 typically includes one or more multicore processor assemblies 1101 coupled to volatile memory 1102 and a large capacity nonvolatile memory, such as a disk drive 1104. As illustrated in FIG. 11, multicore processor assemblies 1101 may be added to the server 1100 by inserting them into the racks of the assembly. The server 1100 may also include a floppy disc drive, compact disc (CD) or digital versatile disc (DVD) disc drive 1106 coupled to the processor 1101. 
The server 1100 may also include network access ports 1103 coupled to the multicore processor assemblies 1101 for establishing network interface connections with a network 1105, such as a local area network coupled to other broadcast system computers and servers, the Internet, the public switched telephone network, and/or a cellular data network (e.g., CDMA, TDMA, GSM, PCS, 3G, 4G, LTE, or any other type of cellular data network).[0105] Computer program code or "program code" for execution on a programmable processor for carrying out operations of the various embodiments may be written in a high level programming language such as C, C++, C#, Smalltalk, Java, JavaScript, Visual Basic, a Structured Query Language (e.g., Transact-SQL), Perl, or in various other programming languages. Program code or programs stored on a computer readable storage medium as used in this application may refer to machine language code (such as object code) whose format is understandable by a processor.[0106] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art the order of operations in the foregoing embodiments may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the operations; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the" is not to be construed as limiting the element to the singular.[0107] The various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the various embodiments may beimplemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and designconstraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the claims.[0108] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a fieldprogrammable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. 
Alternatively, some operations or methods may be performed by circuitry that is specific to a given function.[0109] In one or more embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non- transitory computer-readable medium or a non-transitory processor-readable medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module that may reside on a non-transitory computer- readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.[0110] The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the claims. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments and implementations without departing from the scope of the claims. Thus, the present disclosure is not intended to be limited to the embodiments and implementations described herein, but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. |
The invention relates to a bipolar junction transistor with a biased structure between base and emitter regions. In described examples, a bipolar junction transistor (100) includes a substrate. An emitter region (114), a base region (112), and a collector region (110) are all formed in the substrate (120). A gate-type structure (102) is formed on the substrate between the base region (112) and the emitter region (114). A contact (130) is coupled to the gate-type structure (102), and the contact is adapted to be coupled to a source of a DC voltage (VCC). |
1.A bipolar junction transistor BJT, which includes:SubstrateAn emitter region formed in the substrate;A base region formed in the substrate;A collector region formed in the substrate;A gate type structure formed on the substrate between the base region and the emitter region; andA contact coupled to the gate type structure, the contact being adapted to be coupled to a source of DC voltage.2.The BJT according to claim 1, further comprising a connector coupled to the contact and a DC voltage source terminal, the connector being adapted to provide the DC voltage to the gate type structure through the contact .3.The BJT according to claim 2, wherein the DC voltage source terminal is adapted to provide a negative DC voltage.4.The BJT according to claim 2, wherein the DC voltage source terminal is adapted to provide a positive DC voltage.5.The BJT of claim 2, wherein the BJT is NPN BJT or PNP BJT.6.The BJT according to claim 1, further comprising a gate oxide layer above the surface of the substrate between the base region and the emitter region, and the gate type structure is formed on the surface of the substrate. Above the gate oxide layer.7.8. The BJT of claim 6, further comprising a metal layer formed over the gate type structure, the contact being coupled to the metal layer.8.The BJT according to claim 1, wherein the gate type structure comprises polysilicon material.9.The BJT according to claim 2, wherein the connection member electrically isolates the emitter region from the base region.10.A method of forming a transistor, the method comprising:Forming a collector region with a majority carrier of the first type in the semiconductor substrate;Forming a base region with a second type of majority carrier;Forming a gate type structure above the base region;Etching the gate-type structure to expose the emitter region of the base region and expose the base contact region of the base region, the base contact region surrounding the gate-type structure;Injecting a first dopant into the emitter region to form an emitter region with the majority carriers of the first type;Implanting a second dopant into the base contact region of the base region to form a base contact region with the second type of majority carriers;Forming contacts on or above the base contact area, the emitter contact area, the collector contact area and the gate type structure;A gate connection is formed that is coupled to the contact of the gate type structure and is adapted to be coupled to a source of DC voltage.11.The method of claim 10, wherein forming the gate type structure further comprises:Forming a gate oxide over the semiconductor substrate;A gate material is formed on the gate oxide, and the gate material and the gate oxide are etched to form the gate type structure surrounding the emitter region.12.The method of claim 11, wherein the gate material includes polysilicon.13.The method of claim 11, wherein the gate connector is isolated from the emitter region and the base region.14.The method of claim 10, wherein a DC voltage source terminal is coupled to the gate type structure through the gate connection.15.The method according to claim 14, wherein the DC voltage source terminal is adapted to provide a negative DC voltage or a positive DC voltage.16.The method of claim 10, wherein the first type is an N type and the second type is a P type.17.The method of claim 10, wherein the first type is a P-type and the second type is an N-type.18.The method of claim 10, wherein forming the contact further comprises:Depositing 
metal over the exposed portion of the semiconductor substrate and the gate type structure; andThe semiconductor is annealed to form a silicide on or over the exposed portion of the semiconductor substrate and the gate type structure.19.A method of forming an integrated circuit includes:Implanting a dopant having a first conductivity type into the semiconductor substrate to form a first doped region having the first conductivity type;Implanting dopants with a different second conductivity type into the first doped region to form a second doped region with the second conductivity type in the first doped region;Forming a polysilicon gate type structure between the region of the second doped region and the contact region of the first doped region over the first doped region; andA gate connector coupled to the polysilicon gate type structure is formed, wherein the gate connector is electrically isolated from the first doped region and the second doped region.20.The method of claim 19, wherein forming the polysilicon gate type structure further comprises:Forming a gate oxide over the semiconductor;Forming a polysilicon material on the gate oxide, the polysilicon material and the gate oxide are etched to form the polysilicon gate type structure surrounding the region of the second doped region; andA metal gate contact is formed on or over the polysilicon material, and the gate connector is coupled to the metal gate contact. |
Bipolar junction transistor with bias structure between base region and emitter regionRelated applicationThis application claims priority to U.S. Provisional Patent Application Serial No. 62/957880 entitled "BJT WITH BIASED POLY PLATE BETWEENEMITTER AND BASE REGIONS" filed on January 7, 2020, which is incorporated herein by reference in its entirety.Technical fieldThis specification relates to a bipolar junction transistor having a bias structure located between the base region and the emitter region.Background techniqueA bipolar junction transistor (BJT) uses two junctions between two semiconductor types (n-type and p-type), which are regions in a single-material crystal. BJT is used for signal amplification, switching in digital circuits (such as high-voltage switches), for radio frequency amplifiers, or for switching large currents. In such applications, bipolar junction transistors are expected to exhibit relatively high Hfe (high transistor β value) and linearity of collector current to base-emitter voltage (Vbe).Summary of the inventionIn the described example, a bipolar junction transistor includes a substrate. The emitter region, the base region, and the collector region are all formed in the substrate. The gate type structure is formed on the substrate and is located between the base region and the emitter region. The contact is coupled to the gate type structure, and the contact is adapted to be coupled to a source of DC voltage.Another described example relates to a method of forming a transistor. The method includes forming a collector region having a majority carrier of a first type in a semiconductor substrate. The method further includes forming a base region having a majority carrier of the second type and forming a gate type structure over the base region. The method further includes etching the gate type structure to expose the emitter region of the base region and expose the base contact region of the base region, the base contact region surrounding the gate type structure. The method also includes injecting a first dopant into the emitter region to form an emitter region having a majority carrier of the first type. The method further includes implanting a second dopant into the base contact region of the base region to form a base contact region having a second type of majority carrier. Contacts are formed on or above the base contact area, emitter contact area, collector contact area and the gate type structure. The gate connector is coupled to the contacts of the gate type structure and is adapted to be coupled to a source of DC voltage.Yet another described example provides a method of forming an integrated circuit. The method includes implanting a dopant having a first conductivity type into a semiconductor substrate to form a first doped region having a first conductivity type. The method further includes implanting dopants having a different second conductivity type into the first doping region to form a second doping region having the second conductivity type in the first doping region. The method further includes forming a polysilicon gate type structure between the region of the second doped region and the contact region of the first doped region over the first doped region. The method also includes forming a gate connector coupled to the polysilicon gate type structure, wherein the gate connector is electrically isolated from the first doped region and the second doped region.Description of the drawingsFig. 
1 is a cross-sectional view of an example of a bipolar junction transistor. FIG. 2 depicts an example of electron current vectors for the area under a gate structure coupled to a first DC voltage. FIG. 3 depicts an example of electron current vectors for the area under a gate structure coupled to a second DC voltage. FIG. 4 is a graph plotting the beta value (BETA) of a bipolar junction transistor as a function of collector current per area. FIG. 5 is a graph plotting the n-factor value of a bipolar junction transistor as a function of collector current per area. FIG. 6 is a cross-sectional view of a part of a bipolar junction transistor. FIG. 7 is a graph of donor doping under the gate structure of the bipolar junction transistor of FIG. 6. FIG. 8 is a graph plotting the β value of the bipolar junction transistor of FIG. 6 as a function of collector current per area for different DC bias values. FIG. 9 is a graph plotting the n-factor value of the bipolar junction transistor of FIG. 6 as a function of collector current per area for different DC bias values. FIG. 10 is a flowchart depicting an example method for making a bipolar junction transistor. FIGS. 11-21 are cross-sectional views of a transistor fabricated according to the method of FIG. 10.

Detailed description

The exemplary embodiments relate to a bipolar junction transistor (BJT). The BJT can exhibit an improved β value with respect to collector current (Ic) and improved ideality of Ic. The BJT has an emitter contact area and a base contact area separated from each other by a gate structure. For example, the gate structure is formed of polysilicon and is coupled to a terminal for applying a direct current (DC) bias voltage. The DC bias voltage applied to the gate structure reduces the lateral current flowing between the emitter region and the base region in the BJT. The gate structure can be formed without the need for a dedicated base mask, which would otherwise be used to add a high-dose, low-energy implant to the base region to increase the surface dopant concentration between the emitter contact area and the base contact area. As a result, the BJT described herein can be manufactured at lower cost than other approaches while exhibiting comparable or improved performance.

FIG. 1 depicts a cross-sectional view of a transistor 100 including a gate-type structure 102 between the emitter contact region 104 and the base contact region 106. The gate-type structure 102 is so called because the structure is formed over a gate oxide layer (not shown) in conjunction with forming gates in a CMOS process. As a further example, the gate-type structure 102 includes a polysilicon gate material that can be doped with N-type dopants or P-type dopants (for example, by implantation or as-deposited when forming the structure). The gate oxide electrically isolates the gate material from the base region 112. The gate-type structure 102 also surrounds the emitter region 114. In operation, the gate-type structure 102 may be coupled to a source of DC bias voltage (VDC) through one or more electrical connections 134. This is in contrast to other approaches that may couple the gate structure to the emitter. As described herein, using a separate connector, electrically isolated from the emitter region 114, to bias the gate-type structure 102 with a DC voltage reduces the lateral current flowing between the emitter contact region and the base contact region in the transistor.
As a result, the transistor 100 can exhibit an improved transistor β value (Hfe) with respect to the collector current (Ic) and improved ideality of Ic with respect to Vbe.

As a further example, the transistor is a BJT including a collector region 110, a base region 112, and an emitter region 114 formed in a substrate. The substrate is, for example, a semiconductor substrate or an epitaxial layer that can be grown or deposited on a semiconductor substrate. In some examples, the transistor 100 is a PNP transistor, where the collector region 110 and the emitter region 114 are P-type semiconductors and the base region 112 is an N-type semiconductor. For PNP transistors, the collector region 110 and the emitter region 114 can be fabricated by implanting acceptor dopants into the silicon semiconductor, and the base region 112 can be fabricated by implanting donor dopants into the silicon semiconductor. In other examples, the transistor is an NPN transistor, where the collector region 110 and the emitter region 114 are N-type semiconductors and the base region 112 is a P-type semiconductor. For NPN transistors, the collector region 110 and the emitter region 114 can be fabricated by implanting donor dopants into the silicon semiconductor, and the base region 112 can be fabricated by implanting acceptor dopants into the silicon semiconductor.

As a further example, the transistor is implemented as a BJT including the collector region 110. For example, dopants are implanted into the collector region 110 to form the well region 120, and dopants are implanted into the well region 120 to form the collector contact region 122. The collector contact region 122 may be formed by source-drain implantation or other methods. A shallow trench isolation (STI) region 124 may be formed between the collector contact region 122 and the base contact region 106 to provide electrical isolation.

A corresponding metal layer is provided over each contact area to form a collector contact 126, a base contact 128, a gate contact 130, and an emitter contact 132. The gate contact is electrically isolated from the emitter contact and the base contact, for example, by an insulating material (not shown) formed over the exposed portion of the transistor 100. Each of the contacts 126, 128, 130, and 132 may be coupled to a separate terminal of an IC chip that includes the transistor 100 and/or other circuitry integrated in the IC chip. In the example of FIG. 1, the gate connector 134 couples the gate contact 130 of the gate-type structure 102 to a source of DC voltage (schematically shown as VDC). For example, the source of the DC voltage is a terminal of the IC chip that implements the transistor 100. In one example, the DC voltage terminal may be coupled to an external DC voltage (VDC), which may exist on another IC chip or other external circuit. In another example, the DC voltage terminal may be coupled to a DC voltage (VDC) generated inside the IC chip that implements the transistor 100. For example, the DC voltage VDC may be provided by a voltage regulator, a battery, or other circuitry configured to provide the DC voltage to the DC voltage terminal 136. The DC voltage VDC may be a positive DC voltage or a negative DC voltage. The magnitude of the DC voltage at the DC voltage terminal 136 may vary according to the type and configuration of the transistor 100 and the desired performance characteristics (for example, the relationship between the transistor β value and Ic, and the ideality of Ic with respect to Vbe).
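For reference, the two figures of merit discussed throughout this description have the standard definitions β = Ic/Ib and Ic ≈ Is·exp(Vbe/(n·VT)), where n is the ideality (n-factor) and VT is the thermal voltage. The short routine below extracts both quantities from measured bias points; it only illustrates these standard relations and is not taken from the described embodiments.

```c
/* Standard BJT figures of merit referenced in the text (illustration only,
 * not specific to transistor 100): beta = Ic/Ib, and the ideality factor n
 * extracted from the slope of ln(Ic) versus Vbe. */
#include <math.h>
#include <stdio.h>

#define VT 0.02585 /* thermal voltage kT/q at roughly 300 K, in volts */

static double beta(double ic, double ib)
{
    return ic / ib; /* common-emitter current gain (Hfe) */
}

/* n-factor from two points on the Ic-Vbe characteristic, assuming
 * Ic ~ Is*exp(Vbe/(n*VT)):  n = (Vbe2 - Vbe1) / (VT * ln(Ic2/Ic1)) */
static double n_factor(double vbe1, double ic1, double vbe2, double ic2)
{
    return (vbe2 - vbe1) / (VT * log(ic2 / ic1));
}

int main(void)
{
    /* Illustrative numbers only. */
    printf("beta = %.1f\n", beta(1.0e-3, 5.0e-6));
    printf("n    = %.3f\n", n_factor(0.60, 1.0e-6, 0.66, 1.0e-5));
    return 0;
}
```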
As described herein, the minimum positive or negative magnitude of the DC voltage can be determined to operate the transistor within desired operating parameters (e.g., to reduce the lateral current in the transistor 100), thereby achieving the desired performance.For ease of explanation, FIG. 1 does not show the entire semiconductor substrate as part of the wafer, in which other devices may be integrated with the exemplary transistor 100. As an example, the collector region 110 may be fabricated in a well region formed in a substrate (such as a semiconductor substrate or an epitaxial layer), and there may be a shallow trench isolation (STI) region to separate the transistor 100 from other devices ( Not shown) isolated. The semiconductor material on which the exemplary transistor 100 is fabricated may be obtained from crystalline silicon grown from a seed crystal, or the semiconductor material may also include an epitaxial layer grown or deposited on a semiconductor substrate.2 and 3 are cross-sectional views of a portion of the transistor 200 that includes the base region 202 of the substrate below the gate-type structure 204 between the emitter contact region and the base contact region. For example, the transistor corresponds to the transistor 100 and the gate type structure 204 corresponds to the gate type structure 102 of FIG. 1. Each of FIGS. 2 and 3 further shows the vector of the electron current in the base region 202 of the transistor 200 with different bias voltages applied to the gate type structure 204 and having Vbe=0.6V and Vce=2.5V. .In the example of FIGS. 2 and 3, the gate type structure 204 includes a gate oxide 206 formed over the base region 202. In some examples, the gate oxide 206 may include silicon dioxide (SiO 2 ), such as a high-quality oxide thermally grown on the semiconductor on which the transistor 200 is fabricated. The gate type structure 204 also includes a gate material 208 formed over the gate oxide 206. The gate material 208 may include polysilicon, and may be doped with N-type or P-type dopants. For example, the polysilicon material can be formed by a complementary metal oxide semiconductor (CMOS) process. The CMOS process can be further used to form well regions in the base region and the collector region.The metal contact 210 is formed over the gate material and an additional oxide layer 212 may be formed over the gate type structure 204 and other exposed portions of the transistor 200. For example, the metal contact 210 may be a silicide, which is deposited and annealed to form the contact of the gate type structure 204. The connector 214 may be formed to couple the metal contact 210 to a source of DC voltage, which is 0V in the example of FIG. 2 and -1V in the example of FIG. 3.The combination of gate oxide 206, gate material 208, and contact 210 together define a gate structure 204 disposed between the emitter region and the base contact region. When the gate structure 204 is biased by an appropriate DC voltage source (VDC), the gate structure will increase the concentration of holes near the surface of the base region and provide additional potential for electrons injected laterally from the emitter region base. This results in a more vertical flow of electrons in the transistor 200 when the gate structure is biased to -1V as shown in FIG. 3 compared to the situation when biased to 0V shown in FIG.FIG. 
4 is a graph 400 including graphs 402, 404, 406, 408, 410, and 412 of the transistor β value (BETA) of an NPN bipolar junction transistor as a function of collector current (Ic) per area, for multiple different DC bias voltages (VDC) applied to the gate structure (the gate-type structure 102 of FIG. 1 or the gate-type structure 204 of FIGS. 2 and 3). In particular, the graph 402 shows the β value when VDC=0V, the graph 404 shows the β value when VDC=-0.5V, the graph 406 shows the β value when VDC=-1V, the graph 408 shows the β value when VDC=-2V, the graph 410 shows the β value when VDC=+0.15V, and the graph 412 shows the β value when VDC=+0.45V. Therefore, for NPN transistors, graphs 404, 406, and 408 show that the relationship between the β value and Ic becomes more ideal when a negative DC bias is applied to the gate structure.

FIG. 5 is a graph 500 including graphs 502, 504, 506, 508, 510, and 512 of n-factor values of an NPN BJT as a function of Ic per unit area (which represents the ideality of Ic). Similar to FIG. 4, this graph shows that when a negative DC bias is applied to the gate structure, as shown in graphs 504, 506, and 508, the Ic ideality factor exhibits better linearity than with the zero bias and positive DC biases shown in graphs 502, 510, and 512.

FIG. 6 is a cross-sectional view of a part of the transistor 600. In the example of FIG. 6, the transistor 600 is a high-gain PNP BJT including a dedicated N-type base region 602 formed in the P-type collector region 604. For example, the collector region 604 includes a P-type epitaxial (Pepi) layer into which N-type dopants are implanted to form the base region 602. The transistor 600 also includes an N-well region 606 surrounding the base contact region 608, and the N-well region 606 can be formed by implanting N-type surface dopants around the base contact region using a CMOS process. The emitter region 610 is formed (for example, by implanting P-type surface dopants) in the base region 602. The gate-type structure 612 is formed between the base contact region 608 and the emitter region 610. For example, the gate structure 612 includes a polysilicon layer formed on the gate oxide; the polysilicon may be doped (for example, with N-type or P-type dopants) or undoped, and may be formed by a CMOS process in conjunction with the formation of the gates of one or more field effect transistors. Metal contacts 614, 616, and 618 are formed over each of the emitter, base, and gate structures, respectively. An oxide layer 620 may be further formed over the metal and the exposed surface of the transistor 600. The oxide layer 620 (e.g., SiO2) electrically isolates the metal contacts 614, 616, and 618 and provides support for the connections to the metal contact 618 made in through holes formed through the oxide.

FIG. 7 is a graph 700 of vertical donor doping taken along the line 622 extending in the Y-axis direction under the gate structure 612 of the PNP transistor 600 of FIG. 6. As shown in FIG. 7, the doping is lower near the surface, increases through the N-type base region 602, and then decreases into the collector region 604. By using a separate contact coupled to the gate contact 618 and applying a bias voltage to the polysilicon gate structure 612, Hfe (the transistor β value) can be increased with respect to the collector current (Ic) per unit area, so that Hfe linearity and Ic ideality are improved compared with a BJT that does not use an independently biased gate structure.
FIG. 8 is a graph 800 including graphs 802, 804, 806, 808, 810, 812, and 814 of the β value of the PNP bipolar junction transistor 600 as a function of collector current (Ic) per area, for multiple different DC bias voltages (VDC) applied to the gate structure (e.g., the gate-type structure 102 of FIG. 1 or the gate structure 612 of the PNP bipolar junction transistor 600). In particular, the graph 802 shows the β value when VDC=0V, the graph 804 shows the β value when VDC=-0.7V, the graph 806 shows the β value when VDC=-0.8V, the graph 808 shows the β value when VDC=-0.9V, the graph 810 shows the β value when VDC=-1.5V, the graph 812 shows the β value when VDC=-2.0V, and the graph 814 shows the β value when VDC=-5.0V. Therefore, for the PNP transistor 600, the graphs 810, 812, and 814 show that when a sufficiently negative DC bias of -1.5V or less (more negative) is applied to the gate structure 612, the relationship between the β value and Ic becomes more ideal.

FIG. 9 is a graph 900 that includes graphs 902, 904, 906, 908, 910, 912, and 914 of n-factor values of the BJT 600 as a function of Ic per unit area (which represents the ideality of Ic). Similar to FIG. 8, this graph shows that when a negative DC bias voltage of -1.5V or less is applied to the gate structure 612, as shown in the graphs 910, 912, and 914, the Ic ideality factor shows better linearity than in the graphs 904, 906, and 908. In a case where Ic ideality is a desired transistor parameter but the transistor β value is not a concern, a 0V bias can be applied to the gate structure as shown in graph 902, which provides a reasonable ideality factor compared to a negative bias. However, as shown in FIG. 8, a 0V bias results in a lower transistor β value, which may be inappropriate in some applications. Therefore, the bias voltage can be set for a given transistor according to its particular application requirements and desired operating parameters.

In view of the structural and functional features described above, the example method will be better understood with reference to FIG. 10. FIG. 10 is a flowchart depicting an example method 1000 for making a transistor such as a BJT. The method 1000 can be used to fabricate any structure disclosed herein, including the structure 100 of FIG. 1, the structure 200 of FIGS. 2 and 3, or the structure 600 of FIG. 6. Although the example method of FIG. 10 is shown and described as being executed sequentially for purposes of explanation, the method is not limited to the illustrated order. By way of illustration, the method 1000 of FIG. 10 will be described with reference to FIGS. 11-21, which depict examples of the structure throughout the method 1000.

The method 1000 begins at 1002, where dopants are implanted to form the collector region. For example, as shown in FIG. 11, a dopant 1102 is implanted into a semiconductor substrate or epitaxial layer 1100 to form a collector region having majority carriers of a first type. Depending on the type of BJT being manufactured, the first type can be N-type or P-type. At 1004, as shown for example in FIG. 12, a dopant 1110 is implanted into the collector region 1104 to form a base region 1112 having majority carriers of a second type (P-type or N-type).

At 1006, a gate oxide is formed on the semiconductor. For example, as shown at 1114 in FIG. 13, the gate oxide may be a layer of SiO2 thermally grown by thermal oxidation of the silicon semiconductor substrate or epitaxial layer 1100. At 1008, a gate-type structure is formed on the oxide.
For example, as shown in FIG. 14, the gate structure 1116 may be a layer of polysilicon material deposited on the oxide 1114, for example, by chemical vapor deposition of silane in conjunction with gate formation in a CMOS process. In some examples, polysilicon doping may also be performed during the deposition process, for example by adding phosphine, arsine, or diborane according to the desired doping type (N-type and/or P-type).

At 1010, the gate material (e.g., polysilicon) and oxide are etched to expose the emitter region. For example, as shown in FIG. 15, the etching (at 1010) forms a gate-type structure 1120 surrounding the emitter region 1122 of the base region and exposes the base contact region 1123 in the base region 1112. Therefore, the gate structure (e.g., polysilicon and oxide) remaining after the etching at 1010 can serve as a hard mask for defining the emitter area and for defining other areas for dopant implantation. At 1012, dopants are implanted to form the emitter region. For example, as shown in FIG. 16, the gate structure 1120 is used as a mask while implanting dopants 1124 to form, in the base region 1112, the emitter region 1126 having majority carriers of the first type (for example, the same type as the collector region). At 1014, dopants are implanted to form a base contact region. For example, as shown in FIG. 17, dopants are implanted into the base contact region to form the base contact region 1130 having majority carriers of the second type. Steps 1016 and 1018 form the contact to the collector region of the transistor. For example, as shown in FIG. 18, a dopant 1132 is implanted into the collector region 1104 to form a well region 1134 having majority carriers of the first type, and a dopant is implanted into the well region to form the collector contact region 1136 having majority carriers of the first type, thereby providing contact to the collector region.

At 1020, metal is deposited and contacts are formed. For example, as shown in FIG. 19, a metal layer 1140 is deposited over the semiconductor as part of the back end of line (BEOL) process. The metal 1140 may be annealed to form a silicide. The metal layer 1140 can then be etched to form metal contacts, namely an emitter contact 1142, a base contact 1144, a collector contact 1146, and a gate contact 1148, as shown in FIG. 20. Additional BEOL processing can be performed to form corresponding connectors (e.g., interconnect wires) 1152, 1154, 1156, and 1158 separated by a dielectric layer 1160 (e.g., SiO2, silicate glass, silicon oxycarbide, etc.), as shown in FIG. 21.

In method 1000, before dopants are implanted, a photoresist film is deposited, exposed to radiation through one or more photolithography masks, and then baked and etched so that the pattern defined in the photoresist is used for the dopant implantation. However, for ease of explanation, such steps have not been included in the method 1000 of FIGS. 10 and 11-21. As described herein, the gate connector 1158 may be coupled to a source of DC bias voltage, such as a terminal. For example, the terminal is coupled to a DC voltage that can be generated by circuitry on the same IC die as the transistor or by circuitry external to the IC. In operation, the DC bias voltage applied to the gate contact from the source of the DC bias voltage reduces the lateral current between the emitter region and the base region.
The DC bias voltage can be set according to the application requirements of the BJT and the desired operating characteristics (for example, transistor β value and Ic ideality and linearity).

In this application, the term "couple" or "coupled" refers to an indirect or direct connection. Therefore, if a first device is coupled to a second device, the connection may be through a direct connection or through an indirect connection via other devices and connections. For example, if device A generates a signal to control device B to perform an action, then in a first example device A is coupled to device B, or in a second example device A is coupled to device B through intervening component C if intervening component C does not substantially alter the functional relationship between device A and device B, such that device B is controlled by device A via the control signal generated by device A.

The expression "based on" means "based at least in part on." Therefore, if X is based on Y, X can be a function of Y and any number of other factors.

Within the scope of the claims, modifications can be made in the described embodiments, and other embodiments are possible.
A computer system utilizes subsystem supplemental memory resources to implement operating system supplemental disk caching. A main system processor (e.g., a central processing unit) processes information associated with main system functions. A bulk memory (e.g., a hard disk) stores the information. A main system memory (e.g., a main RAM) caches portions of the bulk information. A subsystem supplemental memory (e.g., a graphics subsystem RAM) provides storage capacity for subsystem operations (e.g., graphics operations) and supplemental storage for portions of said bulk information associated with main system functions (e.g., functions performed by the main system processor). Information (e.g., main system information) cached in the subsystem supplemental memory can be accessed by the main system processor directly.
1.A computer system, including:The bus used to transfer information;A main system processor for processing the information;A mass storage component for storing the information; andA subsystem auxiliary memory used to cache the first part of the information for the main system processor;Where the first part of the information in the subsystem auxiliary memory is written and read directly between the subsystem auxiliary memory and the main system processor, and the The storage of subsystem information takes precedence over the storage of the first part of the information of the main system processor, and the subsystem information is related to subsystem functions.2.The computer system of claim 1, wherein the subsystem auxiliary memory is a random access memory.3.The computer system of claim 1, further comprising a main system memory for caching the second part of the information for the main system processor.4.The computer system of claim 1, wherein the subsystem auxiliary memory is a graphics subsystem memory, the subsystem processor is a graphics processor, and the graphics processor preferentially stores the storage capacity of the graphics subsystem memory.5.The computer system of claim 3, wherein the main system memory and the subsystem auxiliary memory exchange the aforementioned portions of the information with each other.6.The computer system of claim 1, further comprising a subsystem processor for processing subsystem information.7.The computer system of claim 1, wherein the main system processor receives the first part of the information from the subsystem auxiliary memory.8.The computer system of claim 1, wherein the information cached in the subsystem auxiliary memory is written to the mass storage unit before the subsystem specific information is written to the subsystem auxiliary memory.9.The computer system of claim 1, wherein the subsystem is a graphics system, and before the graphics information is written to the subsystem auxiliary memory, the cached information in the subsystem auxiliary memory is written to the large Capacity memory unit.10.An auxiliary cache method, including:Store information in mass storage components;Cache part of the information in the auxiliary memory of the subsystem; andAccess the auxiliary memory of the subsystem to perform storage operations for the main system processor;Where the part of the information in the subsystem auxiliary memory is written and read directly between the subsystem auxiliary memory and the main system processor, and the subsystem information in the subsystem auxiliary memory Storage takes priority over storage of the part of the information of the main system processor, and the subsystem information is related to subsystem functions.11.The auxiliary cache method of claim 10, further comprising performing a storage operation including exchanging a part of the information between the subsystem auxiliary storage and the main storage.12.The auxiliary cache method of claim 10, further comprising performing a subsystem auxiliary coordination process, wherein the subsystem auxiliary coordination process includes, if a subsystem operation is initiated, writing information from the subsystem auxiliary memory to the Mass storage components.13.The auxiliary cache method of claim 10, wherein the subsystem attempts to store information related to the subsystem in the subsystem auxiliary memory.14.The auxiliary cache method of claim 10, further comprising caching another part of the information in the main memory.15.The auxiliary cache 
method of claim 10, further comprising writing information between the subsystem auxiliary memory and the main memory.16. A graphics subsystem, including: a graphics bus used to transfer information; a graphics processor for processing graphics information; and a graphics memory for storing graphics information and a first part of large-capacity information related to non-graphics applications; wherein the first part of the large-capacity information in the graphics memory is written and read directly between the graphics memory and a main system processor, and storage of the graphics information in the graphics memory takes priority over storage of the first part of the large-capacity information of the main system processor.17. The graphics subsystem of claim 16, wherein said graphics memory comprises a frame buffer memory.18. The graphics subsystem of claim 16, wherein the main system processor can directly access information related to non-graphics applications from the graphics memory.
Operating system auxiliary disk cache system and methodRelated applicationThis application requires the joint ownership of the serial number 60 / 693,581, the customer case number # NVID-P001784.PRO, and the title "AN OPERATING SYSTEMSUPPLEMENTAL DISK CACHING SYSTEM AND AND METHOD" applied for on June 24, 2005 The rights and interests of patent applications are hereby incorporated by reference.Technical fieldThe invention relates to the field of information storage systems. In particular, the present invention relates to an operating system auxiliary disk cache system and method.Background techniqueElectronic systems and circuits have made important contributions to the progress of modern society and are used in multiple applications to obtain favorable results. A large number of electronic technologies such as digital computers, calculators, audio devices, video equipment, and telephone systems facilitate the analysis and transmission of data, ideas, and trends in multiple fields of business, science, education, and entertainment to increase productivity and reduce costs . The realization of these results usually involves the processing and storage of huge amounts of information. In order to perform various operations correctly, it is often important to quickly transfer information from the storage medium to the processing unit. However, a storage medium or memory typically has an inverse relationship between storage capacity and access speed.Information processing systems generally include different levels of storage components that change from larger storage capacities with slower access capabilities to smaller storage capacities with faster access capabilities. Conventional computer systems typically include mass storage components (eg, hard disk memory systems) and main system memory (eg, random access memory). Large-capacity memory components such as hard disks can typically store a large amount of information, but reading information from or writing information to the hard disk can take a long time. Attempts to retrieve information directly from the hard disk through the central processing unit will significantly reduce the overall performance of the operation and are likely to adversely affect the end of using the application results. When main system memory such as random access memory (RAM) typically supports faster read and write operations, each storage unit (eg, bit) RAM is generally expensive and RAM typically has a relatively limited storage capacity. The limited storage capacity of the traditional main system memory RAM will significantly affect the applications that computer systems without mass storage components can run.Computer systems often attempt to solve the dilemma between speed and storage capacity by dividing storage activities between different types of memory in a hierarchical configuration and transferring information between different memory hierarchical components. The processor typically accesses information from the main system memory at a relatively fast access speed to a small amount of information. The main system memory in turn exchanges a relatively large amount of information with slower mass storage components such as hard disks. Input and output memory access operations can be a key bottleneck in operating system performance.The exchange of information in hierarchical storage is commonly referred to as disk cache. 
A cache is usually a memory designed to retain the most recently accessed data in a manner designed to cause subsequent access to the same data. When reading from or writing to the hard disk, a copy is also stored in the cache. The cache monitors disk reads to see if the required data is already in the cache. If the information is already in the cache, the information is returned immediately without attempting disk read. The disk cache uses system memory, so it takes less time to complete a "cache hit". However, because system memory is used, the operating system and application programs have less memory available for other information.The common feature of the operating system is the swap file. The swap file uses the hard disk as virtual storage. When the requested memory is larger than the physically existing memory, part of the memory is written to the hard disk to simulate a larger memory. When the swap file allows the simulation of the auxiliary storage, the performance is still degraded in the following aspects, because the program uses a slower swap file to retrieve information from the hard disk, and the information access will take longer.Summary of the inventionEmbodiments of the operating system auxiliary disk cache system and method of the present invention provide convenient and effective information storage and access. Information can be stored and accessed in an automated manner that preserves memory resources and accesses quickly. The present invention can promote flexible access to information through the balanced use of subsystem storage components (eg, graphics subsystem memory) to store information for the main system processor.In one embodiment, the computer system utilizes subsystem memory resources to implement an operating system auxiliary disk cache. The main system processor (eg, central processing unit) processes information related to the main system functions. Mass storage components (eg, hard disks) store large amounts of information (eg, application program instructions and data). The main system memory (eg, main system RAM) caches a portion of a large amount of information. Subsystem auxiliary memory (eg, graphics subsystem RAM) provides storage capacity for subsystem operations (eg, graphics operations) and auxiliary storage of information related to main system functions (eg, functions performed by the main system processor). The subsystem auxiliary coordination process is executed, and if the subsystem operation is started, information is written from the subsystem auxiliary storage to the mass storage unit.BRIEF DESCRIPTIONThe drawings that are incorporated and constitute a part of this specification illustrate embodiments of the present invention by way of example but not limitation. Unless otherwise specified, the drawings referred to in this specification should be understood as not drawn to scale.FIG. 1 is a flowchart of a typical auxiliary cache method according to an embodiment of the present invention;2 is a block diagram of a typical computer system according to an embodiment of the present invention;3 is a block diagram of a typical computer system including a graphics subsystem according to an embodiment of the present invention.Detailed description of the inventionReference will now be made in detail to the preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. 
When describing the invention in conjunction with the preferred embodiments, it can be understood that they do not attempt to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications, and equivalents included within the spirit and scope of the invention as defined by the appended claims. In addition, in the following detailed description of the present invention, in order to provide a thorough understanding of the present invention, a large number of specific details are stated. However, it is obvious to anyone skilled in the art that the present invention can be implemented without these specific details. In other cases, well-known methods, procedures, components and circuits have not been described in detail so as not to unnecessarily obscure the features of the present invention.Some parts of the following detailed description are presented according to the process of data bit operations in computer memory, logical blocks, processing, and other symbolic representations. These descriptions and representations are methods commonly used by those skilled in the data processing field to effectively convey the substance of their work to others skilled in the art. The processes, logic blocks, processes, etc. here are usually conceived as a consistent sequence of steps or instructions that lead to the desired result. This step includes physical manipulation of physical quantities. Usually, although not necessary, these physical quantities take the form of electrical, magnetic, optical, or quantum signals that can be stored, transferred, combined, compared, manipulated, etc. in a computer system. Mainly because of public usage, it has been proven that it is sometimes convenient to refer to these signals as bits, values, elements, symbols, letters, terms, numbers, and so on.It should be remembered, however, that all these and similar terms are related to appropriate physical quantities and are merely convenient labels for these physical quantities. Unless specifically stated otherwise, it will be considered to be expressed from the following discussion. It is understandable that from the beginning to the end, the The term discussion refers to the operation and processing of computer systems, or similar processing and processing devices (eg, electronic, optical, or quantum, computing devices) that manipulate and convert data expressed in physical (eg, electronic) quantities. The term refers to the operation and processing of a processing device that operates or converts physical quantities in computer system components (eg, registers, memory, logic circuits, other such information storage, transmission or display devices, etc.) In other components, other data is also expressed as physical quantities.The invention facilitates the effective and convenient storage of information. In one embodiment of the present invention, a flexible hierarchical memory implements balanced use of hardware components for information storage and communication activities and various other activities. For example, the processing device of the embodiment of the present invention can utilize subsystem auxiliary memory (eg, graphics subsystem memory) to provide an operating disk cache. Information used by multiple main system applications can be stored in secondary subsystem auxiliary storage. The balanced use of the storage capacity of subsystem auxiliary storage (eg, graphics subsystem storage, etc.) 
can facilitate quick and convenient access to information.FIG. 1 is a flowchart of a typical auxiliary cache method 100 according to an embodiment of the present invention. In one embodiment, the auxiliary caching method 100 facilitates the efficient and convenient storage and access of information in an information processing system. For example, the auxiliary cache method 100 can utilize other free subsystem memory to cache main system function information for a main system processor (eg, a central processing unit).In step 110, the information is stored in the mass storage unit. In one embodiment of the invention, a large amount of information is stored in the hard disk. It can be understood that a large amount of information can be stored in various mass storage components including CD-ROM, DVD, and / or network files.In step 120, a portion of the information is cached in subsystem auxiliary storage. In one embodiment, a portion of the information is transferred from the mass storage component to the subsystem auxiliary storage. In a typical implementation, the subsystem is a graphics subsystem and this information is cached in graphics subsystem memory. For example, transfer information directly between the hard disk and the graphics subsystem memory.In step 130, the subsystem auxiliary memory is accessed to perform the storage operation of the main processing unit. In one embodiment, information is transferred directly between the subsystem auxiliary storage and the main system processing unit (eg, central processing unit). In one embodiment of the present invention, performing the storage operation of the main processing component includes directly writing and reading part of the information between the subsystem auxiliary memory and the main processing component.In step 140, a subsystem assisted coordination process is performed. In one embodiment, if the subsystem operation is initiated, the subsystem auxiliary coordination process includes writing information from the subsystem auxiliary storage to the mass storage component. For example, if the subsystem attempts to store basic subsystem functions related to the information in the subsystem auxiliary storage, the information from the subsystem auxiliary storage is written to the mass storage unit. In one embodiment, the information related to basic subsystem functions is graphical information.In one embodiment of the invention, the secondary cache method (eg, secondary cache method 100) includes caching another portion of information in the main system memory. In one embodiment, the information related to the first application is cached in the main system memory and the information related to the second application is cached in the subsystem auxiliary memory. In a typical implementation, information is exchanged between main system memory and subsystem auxiliary memory. For example, write information between the subsystem auxiliary storage and the main storage.2 is a block diagram of a typical computer system 200 according to an embodiment of the present invention. The computer system 200 includes a mass storage 210, a central processing unit (CPU) 220, a main memory 230, and a secondary subsystem 240. The secondary subsystem 240 includes a subsystem processor 241 and a subsystem auxiliary memory 242. The mass storage 210, the central processing unit (CPU) 220, the main memory 230, and the secondary subsystem 240 are communicatively connected to the bus 250. 
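Before continuing with the components of FIG. 2, the flow of steps 110 through 140 above can be summarized in code. The following is a minimal, self-contained sketch using hypothetical names and sizes; it is intended only to illustrate the hit/miss behavior and the write-back coordination of the auxiliary cache method, not the patented implementation.

```c
/* Minimal, self-contained sketch of auxiliary cache method 100
 * (steps 110-140). All names and sizes are hypothetical. */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16
#define NUM_BLOCKS  8   /* "mass storage" blocks                */
#define AUX_SLOTS   2   /* cache slots in subsystem aux memory  */

static char mass_storage[NUM_BLOCKS][BLOCK_SIZE];   /* step 110 */

static struct {
    int  block_id;                 /* -1 means the slot is free */
    char data[BLOCK_SIZE];
} aux_mem[AUX_SLOTS] = { { -1, "" }, { -1, "" } };

/* Step 140 (simplified): before a slot is reused, its contents are
 * written back to mass storage so subsystem use can take priority. */
static void write_back(int slot)
{
    if (aux_mem[slot].block_id >= 0)
        memcpy(mass_storage[aux_mem[slot].block_id],
               aux_mem[slot].data, BLOCK_SIZE);
}

/* Steps 120-130: the main system processor reads a block, preferring
 * the copy cached in subsystem auxiliary memory over a disk access. */
static const char *main_system_read(int block_id)
{
    static int next = 0;
    for (int i = 0; i < AUX_SLOTS; i++)
        if (aux_mem[i].block_id == block_id)
            return aux_mem[i].data;              /* cache hit       */

    int slot = next++ % AUX_SLOTS;               /* simple rotation */
    write_back(slot);
    aux_mem[slot].block_id = block_id;
    memcpy(aux_mem[slot].data, mass_storage[block_id], BLOCK_SIZE);
    return aux_mem[slot].data;
}

int main(void)
{
    strcpy(mass_storage[3], "hello");
    printf("%s\n", main_system_read(3));   /* miss: filled from disk */
    printf("%s\n", main_system_read(3));   /* hit: served from aux   */
    return 0;
}
```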
The subsystem processor 241 is communicably connected to the subsystem auxiliary memory 242.The components of the computer system 200 operate together to provide information processing and operating system auxiliary disk cache. The bus 250 transfers information between the components of the computer system 200. The central processor 220 processes this information. The mass storage 210 provides a large storage capacity for information. The main memory 230 caches a part of large-capacity information for the central processor 220. Subsystem 240 provides support for subsystem operations (eg, graphical operations). The subsystem processor 241 processes information related to subsystem functions (eg, graphics functions) and the subsystem auxiliary storage 242 stores information (eg, frame buffer information) for the subsystem processor 241. The subsystem 240 also provides the central processing unit 220 with an operating system auxiliary disk cache capability. In a typical implementation, the subsystem auxiliary storage 242 caches a portion of large-capacity information for the central processing unit 220. In one embodiment of the invention, the subsystem 240 is a graphics subsystem in which the subsystem processor 241 is a graphics processor and the subsystem auxiliary memory 242 is a graphics subsystem memory.In one embodiment of the present invention, information can be directly transferred or exchanged between the mass storage 210 and the main storage 230 and / or the subsystem auxiliary storage 242. In a typical implementation, the subsystem auxiliary memory 242 serves as the main storage component of the subsystem processor 241 and serves as the “auxiliary main” memory of the central processing unit 220. In one embodiment, the storage of information in the subsystem auxiliary storage 242 is coordinated between the main system function and the subsystem function. In a typical implementation, the storage of subsystem information (eg, graphics information) in the secondary subsystem memory has priority over the primary system storage. In the current example, subsystem auxiliary storage coordination includes writing information related to main system functions from the subsystem auxiliary storage to the mass storage before rewriting the subsystem information with the main system information. For example, if the subsystem 240 is a graphics subsystem, the main system information stored in the subsystem auxiliary memory 242 is written into the mass storage 210 before the graphics operation of rewriting the main memory function information with the graphics function information.The main memory 230 and / or the subsystem auxiliary memory 242 can operate as the main memory of the central processing unit 220. For example, the central processing unit 220 can directly receive a part of information from the subsystem auxiliary storage 242 instead of the main storage 230. In one embodiment of the present invention, the main memory 230 and the subsystem auxiliary memory 242 are random access memory (RAM).It can be understood that the present invention is easily implemented in various configurations to provide an operating system auxiliary disk cache. For example, if the main memory 230 is full, the subsystem auxiliary memory 242 can cache a part of the large-capacity information. The main memory 230 and the subsystem auxiliary memory 242 can exchange a part of large-capacity information with each other. 
The main memory 230 can cache the first part of the bulk information and the subsystem auxiliary memory 242 can cache the second part of the bulk information. The invention can also be used to access large volumes of information from multiple components or systems. For example, access to hard drives, CD-ROMs, DVDs, and / or network files can be performed by caching information in the auxiliary storage of the subsystem.FIG. 3 is a block diagram of a computer system 300. According to an embodiment of the computer system, an embodiment of the present invention can be implemented. The computer system 300 includes a central processing unit 301, a main system memory 302 (eg, random access memory), a chipset 303 with a north bridge 309 and a south bridge 305, a removable data storage device 304, an input device 307, a signal transmission port 308 And a graphics subsystem 310 connected to the display 320. The computer system 300 includes multiple buses for communicatively connecting components of the computer system 300. A communication bus 391 (eg, front side bus) connects the north bridge 309 of the chipset 303 to the central processing unit 301. The communication bus 392 (eg, main memory bus) connects the north bridge 309 of the chipset 303 to the main system memory 302. A communication bus 393 (eg, accelerated graphics port interface) connects the north bridge of the chipset 303 to the graphics subsystem 310. A communication bus 394-397 (for example, a PCI bus) connects the south bridge 305 of the chipset 303 to the removable data storage device 304, the input device 307, and the signal transfer port 308, respectively. The graphics subsystem 310 includes a graphics processor 311 and a graphics buffer 315.The components of computer system 300 coordinate operations to provide a graphical image representation. The communication buses 391 to 397 transfer information. The central processor 301 processes information. The main system memory 302 stores information and instructions of the central processor 301. The removable data storage device 304 also stores information and instructions (eg, functions as a large information storage). The removable data storage device may be a variety of different devices including hard disks, CDs, DVDs, jump drives, etc. The input device 306 provides a mechanism for inputting information and / or for indicating or highlighting information on the display 320. The signal transmission port 308 provides a communication interface with an external device (for example, an interface with a network). The display device 309 displays information related to the data stored in the frame buffer 315.The graphics subsystem 310 performs graphics operations and provides auxiliary memory supporting the central processing unit 301. The graphics processor 311 processes graphics instructions from the central processor 301 and provides the resulting data to the graphics auxiliary memory 315 through the display monitor 320 for storage and retrieval. For example, the graphics auxiliary memory 315 can provide the image processor 311 with a frame buffer. The graphics auxiliary memory 315 can also provide the auxiliary main system memory for the central processing unit 301. For example, large-capacity information can be transferred from the removable data storage unit 304 and / or a network resource (not shown) communicably connected to the signal transfer port 308 to the graphic auxiliary memory 315. 
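The balancing described above, in which main memory and the subsystem auxiliary memory share the caching of bulk information, can be pictured as a simple placement decision. The C sketch below is a hypothetical illustration of one such policy (prefer main memory, spill to otherwise idle subsystem memory when main memory is full, and defer to the subsystem's own storage needs); the helper names and stub values are invented for the example and are not taken from the described systems 200 or 300.

```c
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Where one block of bulk information ends up being cached. */
typedef enum { CACHE_IN_MAIN_MEMORY, CACHE_IN_SUBSYS_AUX, NOT_CACHED } cache_target_t;

/* Hypothetical capacity probes; a real system would query the managers of
 * main memory 230/302 and subsystem auxiliary memory 242/315.  The stub
 * values below simply stage one illustrative scenario.                    */
static size_t main_mem_free(void)   { return 0; }          /* main RAM is full        */
static size_t subsys_aux_free(void) { return 64 * 1024; }  /* 64 KiB of idle aux RAM  */
static bool   subsystem_busy(void)  { return false; }      /* no graphics work queued */

/* Decide where to cache one block of bulk information. */
static cache_target_t choose_cache_target(size_t block_len)
{
    if (main_mem_free() >= block_len)
        return CACHE_IN_MAIN_MEMORY;       /* normal case: main system memory caches it */

    /* Main memory is full: fall back to otherwise idle subsystem auxiliary
     * memory, but only if the subsystem is not about to claim that memory
     * for its own higher-priority functions such as frame-buffer storage.  */
    if (!subsystem_busy() && subsys_aux_free() >= block_len)
        return CACHE_IN_SUBSYS_AUX;

    return NOT_CACHED;                     /* leave it in mass storage for now          */
}

int main(void) {
    printf("4 KiB block cached in tier: %d\n", (int)choose_cache_target(4096));
    return 0;
}
```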
The central processing unit 301 can then access this information directly from the graphics auxiliary storage 315.It can be understood that the present invention can be implemented in various embodiments. In a typical implementation, the present invention can be utilized in a processing system to provide a variety of graphics applications and unrelated applications. For example, the present invention may be used to perform processing on a personal computer, personal digital assistant, cellular phone, handheld device, or any platform for performing processing. It can also be understood that the implementation of the reference computer system is exemplary and the present invention is not limited by the implementation of the traditional computer system, but the present invention is easily implemented in a variety of electronic systems including main system memory and subsystem auxiliary memory. It can be understood that the present invention can be implemented in various embodiments. In a typical implementation, the present invention can be utilized in a processing system that supports multiple graphics applications including video games. For example, the present invention can be utilized in game-controlled graphics rendering processing, personal computers, personal digital assistants, cellular phones, or any number of platforms for implementing video games. It is also understandable that the reference video game application implementations are exemplary and the invention is not limited by these implementations.Thus, the present invention promotes the efficient and convenient storage and access of information in the information processing system. Embodiments of the present invention support maximizing component usage and advancing resource conservation by optimizing the storage capacity of the subsystem memory used for main system operations. Use other idle subsystem memory resources to obtain more memory for the main system application, improve the access operation of the entire hard disk, start the exchange speed of all enhanced virtual memory, and promote the longer life of the hard disk (for example, reduce hard disk access Frequency and related mechanical wear). Reducing access to hard drives can also allow power savings and extend battery life.In summary, this specification clarifies the following. The computer system uses subsystem auxiliary memory resources to implement the operating system auxiliary disk cache. The main system processor (eg, central processing unit) processes information related to the main system functions. Mass storage (for example, a hard disk) stores this information. The main system memory (eg, main RAM) caches part of the large-capacity information. Subsystem auxiliary memory (e.g., graphics subsystem RAM) provides storage capacity for subsystem operations (e.g., graphics operations) and part of the bulk information related to main system functions (e.g., functions performed by the main system processor) Provide auxiliary storage. 
Information cached in the auxiliary memory of the subsystem (eg, main system information) can be directly accessed by the main system processor.The following also introduces some brief summary statements of the contents stated in this manual.Short summaryAs a first item, the computer system taught in this record includes:The bus used to transfer information;A main system processor for processing the information;A mass storage component for storing the information; andA subsystem auxiliary memory for caching the first part of the large-capacity information for the main system processor.The computer system of the first item, wherein the subsystem auxiliary memory is a random access memory.The computer system of the first item further includes a main system memory for caching the second part of the large-capacity information for the main system processor.The computer system of the first item, wherein the subsystem auxiliary memory is a graphics subsystem memory.In the computer system of the first item, the main system memory and the subsystem auxiliary memory exchange the above-mentioned parts of the large-capacity information with each other.The computer system of the first item further includes a subsystem processor for processing subsystem information.The computer system of this first item, wherein the main system processor receives the first part of the mass information from the subsystem auxiliary memory.The computer system of the first item, wherein the information cached in the subsystem auxiliary memory is written to the mass storage before the subsystem specific information is written to the subsystem auxiliary memory.The computer system of the first item, wherein the cached information in the subsystem auxiliary memory is written into the mass storage before the graphic information is written into the subsystem auxiliary memory.As a second item, this specification introduces a secondary cache method including:Store information in mass storage components;Cache part of the information in the auxiliary memory of the subsystem; andAccess to the subsystem auxiliary memory to perform storage operations for the main processing unit.The secondary cache method of the second item further includes performing a storage operation including directly writing and reading multiple parts of the information between the subsystem auxiliary memory and the main processing unit.The auxiliary cache method of the second item further includes executing a subsystem auxiliary coordination process.The auxiliary cache method of the second item, wherein the subsystem auxiliary coordination process includes, if a subsystem operation is started, writing information from the subsystem auxiliary memory to the mass storage component.The auxiliary cache method of the second item, wherein the subsystem attempts to store information related to the subsystem in the subsystem auxiliary memory.The secondary cache method of the second item further includes caching another part of the information in the main memory.The auxiliary cache method of the second item further includes writing information between the auxiliary memory of the subsystem and the main memory.As a third item, this record discloses a graphics subsystem including:Graphic bus used to transfer information;A graphics processor for processing graphics information; andGraphics memory for storing graphics information and multiple parts of large-capacity information related to non-graphics applications.The graphics subsystem of the third item, wherein the graphics 
memory includes a frame buffer memory. The graphics subsystem of the third item, wherein the graphics processor has priority use of the storage capacity of the graphics memory. The graphics subsystem of the third item, wherein a central processing unit can directly access the information related to non-graphics applications from the graphics memory. The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents. In the claims, the order in which elements are recited does not imply any particular order of operations or steps unless a particular element expressly refers to or depends on another element. |
An integrated circuit having a substrate with a semiconductor device thereon. A stop layer over the substrate has a first dielectric layer formed thereon, and an opening is provided in the first dielectric layer. A first conformal barrier liner is formed in the opening, processed, and treated to improve adhesion. Portions of the first conformal barrier liner on the sidewalls act as a barrier to diffusion of conductor core material into the first dielectric layer. A conductor material is formed in the opening over the vertical portions of the first conformal barrier liner and the first stop layer. |
The invention claimed is:1. An integrated circuit comprising;a substrate having a semiconductor device thereon;a first stop layer over the substrate having a portion open to the semiconductor device;a first dielectric layer over the first stop layer having an opening provided therein having sidewalls in the first dielectric layer;a first conformal barrier liner in the opening, the first conformal barrier liner having only vertical portions of a constant thickness on the sidewalls of the opening in the first dielectric layer, the vertical portions of the first conformal barrier liner on the sidewalls acting as a barrier to diffusion of conductor core material to the first dielectric layer;a treated area on the first conformal barrier liner and the first stop layer to increase adhesion properties thereof;a second stop layer over the first dielectric layer and having a stepped opening provided therein; anda first conductor core in the opening over the vertical portions of the first conformal barrier liner and the first stop layer, the first conductor core connected to the semiconductor device.2. The integrated circuit as claimed in claim 1 including:a second dielectric layer over the second stop layer and having an opening provided therein having sidewalls;a third stop layer over the second dielectric layer and having an opening provided therein;a second conformal barrier liner in the opening in the second dielectric layer, the second conformal barrier liner having only vertical portions of a second constant thickness on the sidewalls of the openings in the second dielectric layer and the second dielectric layer;a treated area on the second conformal barrier liner and the second stop layer for increasing adhesion properties thereof; andthe first conductor core in the opening over the vertical portions of the second conformal barrier liner and the second stop layer, the second conductor core connected to the first conductor core through the opening in the via stop layer.3. The integrated circuit as claimed in claim 1 wherein the first stop layer over the substrate has a thickness "t" and the second stop layer has a thickness "T" of greater than about 2t.4. The integrated circuit as claimed in claim 1 wherein the first conformal barrier liner has a region selected from a group consisting of silicon-enriched, wetting layer covered, and a combination thereof.5. The integrated circuit as claimed in claim 1 wherein the first conformal barrier liner is a nonconductive barrier material selected from a group consisting of a nitride, a BLok, a carbide, an oxynitride, and a combination thereof.6. The integrated circuit as claimed in claim 1 wherein the first conductor core is a material selected from a group consisting of copper, aluminum, gold, silver, compounds thereof, and combinations thereof.7. The integrated circuit as claimed in claim 1 wherein the first dielectric layer comprises a low dielectric constant material.8. 
An integrated circuit comprising;a substrate having a semiconductor device thereon;a device dielectric layer over the substrate;a first channel stop layer over the substrate and the device dielectric layer having a portion open to the semiconductor device;a first channel dielectric layer over the first channel stop layer having a first channel opening provided therein having sidewalls in the first channel dielectric layer;a first conformal-barrier liner in the opening, the first conformal barrier liner having only vertical portions on the sidewalls of the opening in the first channel dielectric layer, the vertical portions of the first conformal barrier liner on the sidewalls acting as a barrier to diffusion of conductor core material to the first channel dielectric layer;a treated area on the first conformal barrier liner and the first channel stop layer to increase adhesion properties thereof;a via stop layer under the first channel dielectric layer and having a stepped opening provided therein; anda first conductor core in the opening over the vertical portions of the first conformal barrier liner and the first channel stop layer, the first conductor core connected to the semiconductor device.9. The integrated circuit as claimed in claim 8 including:a via dielectric layer under the first channel stop layer and having a via opening provided therein having sidewalls;the first channel stop layer having a stepped opening provided therein, a first portion of the stepped opening of the same size as the opening in the first channel dielectric layer and a second portion of the stepped opening of the same size as the opening in the via stop layer;a second conformal barrier liner in the via opening, the second conformal barrier liner having only vertical portions on the sidewalls of the via opening, the vertical portion of the second conformal barrier liner on the sidewalls of the via opening acting as a barrier to diffusion of first conductor core material to the via dielectric layer;a treated area on the second conformal barrier liner and the via stop layer to increase adhesion properties thereof; andthe first conductor core in the via opening over the vertical portions of the second conformal barrier liner and the first channel stop layer.10. The integrated circuit as claimed in claim 8 wherein the via stop layer has a thickness "t" and the first channel stop layer has a thickness "T" of greater than about twice the thickness "t" distal from the stepped opening.11. The integrated circuit as claimed in claim 8 wherein the first and second conformal barrier liners have regions selected from a group consisting of silicon-enriched, wetting layer covered, and a combination thereof.12. The integrated circuit as claimed in claim 8 wherein the first conformal barrier liner comprises a nonconductive barrier material selected from a group consisting of a nitride, a BLok, a carbide, an oxynitride, and a combination thereof in a thickness between 20 Ȧ and 70 Ȧ.13. The integrated circuit as claimed in claim 8 wherein the first channel dielectric layer comprises a porous low dielectric constant material having a dielectric constant under 3.9.14. The integrated circuit as claimed in claim 8 wherein the first conductor core comprises a material selected from a group consisting of copper, aluminum, gold, silver, compounds thereof, and combinations thereof. |
CROSS-REFERENCE TO RELATED APPLICATIONThis is a Divisional of application Ser. No. 10/165,510 filed Jun. 6, 2002 now U.S. Pat. No. 6,657,304.The present application contains subject matter related to copending U.S. patent application Ser. No. 10/079,515 by Christy Mei-Chu Woo, John E. Sanchez, Darrell M. Erb, and Amit P. Marathe entitled "COPPER INTERCONNECT WITH IMPROVED BARRIER LAYER". The related application is assigned to Advanced Micro Devices, Inc.TECHNICAL FIELDThe present invention relates generally to semiconductor technology and more particularly to an integrated circuit interconnect.BACKGROUND ARTIn the manufacture of integrated circuits, after the individual devices such as the transistors have been fabricated in and on the semiconductor substrate, they must be connected together to perform the desired circuit functions. This interconnection process is generally called "metallization" and is performed using a number of different photolithographic, deposition, and removal techniques.In one interconnection process, which is called a "dual damascene" technique, two interconnect channels of conductor materials are separated by interlayer dielectric layers in vertically separated planes perpendicular to each other and interconnected by a vertical connection, or "via", at their closest point. The dual damascene technique is performed over the individual devices which are in a device dielectric layer with the gate and source/drain contacts extending up through the device dielectric layer to contact one or more channels in a first channel dielectric layer.The first channel formation of the dual damascene process starts with the deposition of a thin first channel stop layer. The first channel stop layer is an etch stop layer which is subject to a photolithographic processing step which involves deposition, patterning, exposure, and development of a photoresist, and an anisotropic etching step through the patterned photoresist to provide openings to the device contacts. The photoresist is then stripped. A first channel dielectric layer is formed on the first channel stop layer. Where the first channel dielectric layer is of an oxide material, such as silicon oxide (SiO2), the first channel stop layer is a nitride, such as silicon nitride (SiN), so the two layers can be selectively etched.The first channel dielectric layer is then subject to further photolithographic process and etching steps to form first channel openings in the pattern of the first channels. The photoresist is then stripped.An optional thin adhesion layer is deposited on the first channel dielectric layer and lines the first channel openings to ensure good adhesion of subsequently deposited material to the first channel dielectric layer. Adhesion layers for copper (Cu) conductor materials are composed of compounds such as tantalum nitride (TaN), titanium nitride (TiN), or tungsten nitride (WN).These nitride compounds have good adhesion to the dielectric materials and provide good barrier resistance to the diffusion of copper from the copper conductor materials to the dielectric material. 
High barrier resistance is necessary with conductor materials such as copper to prevent diffusion of subsequently deposited copper into the dielectric layer, which can cause short circuits in the integrated circuit.However, these nitride compounds also have relatively poor adhesion to copper and relatively high electrical resistance.Because of the drawbacks, pure refractory metals such as tantalum (Ta), titanium (Ti), or tungsten (W) are deposited on the adhesion layer to line the adhesion layer in the first channel openings. The refractory metals are good barrier materials, have lower electrical resistance than their nitrides, and have good adhesion to copper.In some cases, the barrier material has sufficient adhesion to the dielectric material that the adhesion layer is not required, and in other cases, the adhesion and barrier material become integral. The adhesion and barrier layers are often collectively referred to as a "barrier" layer herein.For conductor materials such as copper, which are deposited by electroplating, a seed layer is deposited on the barrier layer and lines the barrier layer in the first channel openings to act as an electrode for the electroplating process. Processes such as electroless, physical vapor, and chemical vapor deposition are used to deposit the seed layer.A first conductor material is deposited on the seed layer and fills the first channel opening. The first conductor material and the seed layer generally become integral, and are often collectively referred to as the conductor core when discussing the main current-carrying portion of the channels.A chemical-mechanical polishing (CMP) process is then used to remove the first conductor material, the seed layer, and the barrier layer above the first channel dielectric layer to form the first channels. An abrasiveless chemical is used for the chemical-mechanical polishing process in order to prevent abrasives from being left in the channel. When a layer is placed over the first channels as a final layer, it is called a "capping" layer and a "single" damascene process is completed. When the layer is processed further for placement of additional channels over it, the layer is a via stop layer.The via formation step of the dual damascene process starts with the deposition of a thin via stop layer over the first channels and the first channel dielectric layer. The via stop layer is an etch stop layer which is subject to photolithographic processing and anisotropic etching steps to provide openings to the first channels. The photoresist is then stripped.A via dielectric layer is formed on the via stop layer. Again, where the via dielectric layer is of an oxide material, such as silicon oxide, the via stop layer is a nitride, such as silicon nitride, so the two layers can be selectively etched. The via dielectric layer is then subject to further photolithographic process and etching steps to form the pattern of the vias. The photoresist is then stripped.A second channel dielectric layer is formed on the via dielectric layer. Again, where the second channel dielectric layer is of an oxide material, such as silicon oxide, the via stop layer is a nitride, such as silicon nitride, so the two layers can be selectively etched. The second channel dielectric layer is then subject to further photolithographic process and etching steps to simultaneously form second channel and via openings in the pattern of the second channels and the vias. 
The photoresist is then stripped.An optional thin adhesion layer is deposited on the second channel dielectric layer and lines the second channel and the via openings.A barrier layer is then deposited on the adhesion layer and lines the adhesion layer in the second channel openings and the vias.Again, for conductor materials such as copper and copper alloys, a seed layer is deposited by electroless deposition on the barrier layer and lines the barrier layer in the second channel openings and the vias.A second conductor material is deposited on the seed layer and fills the second channel openings and the vias.A CMP process is then used to remove the second conductor material, the seed layer, and the barrier layer above the second channel dielectric layer to form the first channels. When a layer is placed over the second channels as a final layer, it is called a "capping" layer and the "dual" damascene process is completed.The layer may be processed further for placement of additional levels of channels and vias over it. Individual and multiple levels of single and dual damascene structures can be formed for single and multiple levels of channels and vias, which are collectively referred to as "interconnects".The use of the single and dual damascene techniques eliminates metal etch and dielectric gap fill steps typically used in the metallization process. The elimination of metal etch steps is important as the semiconductor industry moves from aluminum (Al) to other metallization materials, such as copper, which are very difficult to etch.A major problem with using copper in the conductor core is that copper tends to migrate into the dielectric layer in a process known as diffusion. The migration of copper atoms can lead to electrical short circuits, rendering the circuit unusable. Barrier layers deposited by self-ionized plasma (SIP) deposition have traditionally had high barrier resistance to limit the diffusion of copper atoms, but as the dimensions of semiconductor devices shrink in the quest to improve chip performance, the proportional scaling of barrier layer dimensions in vias leads to extremely thin (10-20 angstroms) via sidewalls.In addition, the size reductions have caused the channels to be closer together which requires the use of low dielectric constant (low-k) dielectric materials having dielectric constants under 3.9. These dielectric materials are porous and, where the barrier depositions were formerly conformal to the conventional dielectric constant dielectric materials, the barrier layers are no longer conformal to these materials. 
In addition, these depositions have been found to damage the dielectric materials as well as causing poor adhesion to seed layers.Both the thinness of the barrier layer, and its now non-conformal characteristic, has led to its ineffectiveness as a diffusion barrier and also to the formation of voids in the associated seed layer and conductor core leading to reductions in electromigration (EM) resistance.Diffusion relates to the movement of copper atoms from the conductor core into the dielectric layer, causing short circuits and EM relates to the movement of copper atoms under influence of current, particularly at the interface between layers or areas of poor adhesion, which form voids that can lead to an open circuit in the via.While the problems have been well known and many attempts have been made to solve individual problems, a solution that would solve all the problems has long been sought by those skilled in the art.DISCLOSURE OF THE INVENTIONThe present invention provides an integrated circuit having a substrate and a semiconductor device thereon. A stop layer over the substrate has a dielectric layer formed thereon having an opening into which a conformal barrier is formed. A conformal barrier liner is formed in the opening, processed, and treated to improve adhesion. Portions of the conformal barrier liner on the sidewalls act as a barrier to diffusion of conductor core material to the dielectric layer. A conductor material in the opening over the vertical portions of the conformal barrier liner and the stop layer complete the conductor core. The integrated circuit has reduced size and good barrier resistance to electro-migration.The above and additional advantages of the present invention will become apparent to those skilled in the art from a reading of the following detailed description when taken in conjunction with the accompanying drawings.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a plan view of aligned channels with a connecting via;FIG. 2 is a cross-section of FIG. 1 along line 2-2 showing an interconnect in accordance with the present invention;FIG. 3 shows a step in a dual damascene process according to the present invention;FIG. 4 is the structure of FIG. 3 after deposition of a conformal barrier linerFIG. 5 is the structure of FIG. 4 after removal of the via stop layer in the via opening;FIG. 6 is the structure of FIG. 5 after a sputter etch pre-clean process and a silane treatment;FIG. 7 is the structure of FIG. 6 after deposition of an ultra-thin conductor-wetting layer;FIG. 8 is the structure of FIG. 7 after deposition of the seed layer and the conductor core; andFIG. 9 is the structure of FIG. 8 after planarization.BEST MODE FOR CARRYING OUT THE INVENTIONReferring now to FIG. 1, therein is shown a plan view of a semiconductor wafer 100 with a silicon semiconductor substrate 101 having semiconductor devices 103 formed thereon. Above the semiconductor substrate 101 in various dielectric layers are first and second channels 102 and 104 connected by a via 106. The first and second channels 102 and 104 are respectively disposed in first and second channel dielectric layers 108 and 110. The via 106 is an integral part of the second channel 104 and is disposed in a via dielectric layer 112. 
The semiconductor wafer 100 is shown without a capping layer, which will be discussed later.The term "horizontal" as used in herein is defined as a plane parallel to the conventional plane or surface of a wafer, such as the semiconductor wafer 100, regardless of the orientation of the wafer. The term "vertical" refers to a direction perpendicular to the horizontal as just defined. Terms, such as "on", "above", "below", "side" (as in "sidewall"), "higher", "lower", "over", and "under", are defined with respect to the horizontal plane.Referring now to FIG. 2, therein is shown a cross-section of FIG. 1 along line 2-2. A portion of a first channel 102 is disposed in a first channel stop layer 114 and is on a device dielectric layer 116, which is on the semiconductor substrate 101. Generally, metal contacts 118 are formed in the device dielectric layer 116 to connect to the semiconductor devices 103. The various layers above the device dielectric layer 116 are sequentially: the first channel stop layer 114, the first channel dielectric layer 108, a via stop layer 120, the via dielectric layer 112, a second channel stop layer 122, the second channel dielectric layer 110, a second via stop layer 124, and a capping layer 125.The first channel 102 includes a barrier layer 126, which could optionally be a combined adhesion and barrier layer, and a seed layer 128 around a first conductor core 130. The first channel 102 could also be made according to a single damascene process in accordance with the present invention.The second channel 104 and the via 106 include a barrier layer 132, according to the present invention, and a seed layer 134 around a second conductor core 136. The barrier layers 126 and 132 are used to prevent diffusion of the conductor materials into the adjacent areas of the semiconductor device. The seed layers 128 and 134 are optional depending on the conductor material deposition process. The seed layers 128 and 134 are used during electrochemical deposition of the conductor core material to form electrodes on which the conductor material of the conductor cores 130 and 136 is deposited. The seed layers 128 and 134 are of substantially the same conductor material as the first and second conductor cores 130 and 136 and become part of the respective first and second conductor cores 130 and 136 after the deposition.With particular regard to conductor cores of conductor materials such as copper, the migration of copper atoms can lead to electrical short circuits, rendering the entire integrated circuit unusable. The barrier layers, used prior to the barrier layers 126 and 132 according to the present invention, have traditionally had high barrier resistance to limit the diffusion of copper atoms, but as the dimensions of semiconductor devices have shrunk, the proportional scaling of barrier layer dimensions in the via 106 led to an extremely thin (10-20 Angstroms thick) via sidewall.In addition, the size reductions have caused the channels formed, such as the first and second channels 102 and 104, to be closer together which requires the use of low dielectric constant dielectric materials having dielectric constants under 3.9 and even ultra-low dielectric constant dielectric materials having dielectric constants under 2.8. Thus, the first channel dielectric layer 108, the via dielectric layer 112, and the second channel dielectric layer 110 are all of very low dielectric constant materials. 
These low dielectric constant materials are porous and, where the barrier depositions were formerly conformal to the conventional dielectric constant dielectric materials, the barrier layers were no longer conformal to these materials. It will be understood that the present invention contemplates low dielectric constant materials but is not restricted to such materials.Both the thinness of the previous barrier layers and their non-conformal characteristic, have led to their ineffectiveness as diffusion barriers and also to the formation of voids in the seed layers and the conductor cores. The voids have also lead to reductions in electromigration (EM) resistance. In addition, these depositions have been found to damage the low dielectric constant materials as well as causing poor adhesion to seed layers.Referring now to FIG. 3, therein is shown a step in a dual damascene process according to the present invention. It will be understood that the present invention is also applicable to a single damascene process, which is simpler than the dual damascene process shown.The first channel 102 is disposed in the first channel stop layer 114 on the device dielectric layer 116, which is on the semiconductor substrate 101. A metal contact 118 is formed in the device dielectric layer 116 to connect to the semiconductor devices 103. The device dielectric layer 116, the first channel stop layer 114, the first channel dielectric layer 108, and the via stop layer 120 have all been formed. An opening has been made in the first channel dielectric layer 108 and lined successively with the barrier layer 126 and the seed layer 128. The first conductor core 130 fills the opening and the barrier layer 126, the seed layer 128, and the first conductor core 130 have been planarized to form the first channel 102 covered by the via stop layer 120.The via dielectric layer 112 is deposited over the via stop layer 120. The second channel stop layer 122 is deposited over the via dielectric layer 112. It will be noted that in accordance with the present invention, the second channel stop layer 122 has a thickness "T" which is about twice the thickness "t" of the other stop layers, such as the via stop layer 120. This thickness "T" is used to maintain a thickness of at least "t" of the second channel stop layer 122 after etching of the via, as will later be explained.Above the second channel stop layer 122 is the second channel dielectric layer 110 and the second via stop layer 124.In FIG. 3, an etching process has been applied to form an interconnect opening 140 which includes a second channel opening 141 and a via opening 142. The second channel opening 141 is through the second via stop layer 124 and the second channel dielectric layer 110. The via opening 142 is through the second channel stop layer 122 and the via dielectric layer 112. It will be noted that the via opening 142 does not extend through the via stop layer 120 at this point.Referring now to FIG. 4, therein is shown the structure of FIG. 3 after deposition of a conformal barrier liner 144. The conformal barrier liner 144 is nonconductive and protects the via dielectric layer 112 and the second channel dielectric layer 110 from damage during conductor material deposition and prevents diffusion of the conductor material through to these layers during operation. Despite the porosity of the various dielectric layers, the conformal barrier liner 144 is conformal and has a constant thickness.Referring now to FIG. 5, therein is shown the structure of FIG. 
4 after removal of the via stop layer 120 in the via opening 142. An anisotropic etching process such as reactive ion etching is used to first remove the horizontal portions of the conformal barrier liner 144 such that remaining liner portions 146 of the constant thickness remain on the vertical side walls of the via dielectric layer 112 and the second channel dielectric layer 110. It will be noted that, after the etching process has removed the via stop layer 120 in the via opening 142, the thickness of the second channel stop layer 122 in the second channel opening 141 has been reduced by approximately the same thickness as the thickness "t" of the via stop layer 120 in a stepped region 148. The stepped region 148 acts as a barrier to prevent conductor diffusion into the via dielectric layer 112.Referring now to FIG. 6, therein is shown the structure of FIG. 5 after an optional sputter etch pre-clean process and a silane treatment by thermal decomposition or soft plasma activation. The silane treatment provides a silicon-rich surface 150 over the second via stop layer 124, the stepped region 148, the remaining liner portions 146, and the first channel 102.Referring now to FIG. 7, therein is shown the structure of FIG. 6 after treatment by deposition of an ultra-thin conductor-wetting layer 152. The wetting layer 152 may be deposited by a metal sputter deposition process to cover and bond to the silicon-rich surface 150.In the present invention, it has been discovered that that the silane treatment for providing silicon-enriched surfaces 150 and/or the deposition of the wetting layer 152 will provide acceptable surfaces for seed layer deposition. Either or both the silicon-enrichment and wetting layer treatments appear to increase adhesion of the seed layer 134 to the remaining liner portions 146, the second via stop layer 124, the stepped region 148, the remaining portions of the conformal barrier liner 146, and the first channel 102 over the adhesion without such treatments.Referring now to FIG. 8, therein is shown the structure of FIG. 7 after deposition of the seed layer 134 and the conductor core 136. The seed layer 134 is generally deposited by a chemical vapor deposition or physical vapor deposition process. This is followed by deposition of the conductor core 136 by electroplating, electroless plating, or chemical vapor deposition.Referring now to FIG. 9, therein is shown the structure of FIG. 8 after planarization by a process such as chemical-mechanical polishing (CMP). This leaves a planar surface 154.Referring back to FIG. 1, therein is shown the structure of FIG. 9 after deposition of the capping layer 125 on the planar surface 154.In various embodiments of the present invention, the conformal barrier liner 144 is used for all levels of interconnect and is a non-conductive barrier layer of materials such as a nitride (e.g., silicon nitride), BLok (available from Applied Materials Corporation of Santa Clara, Calif.), a carbide (e.g., silicon carbide), and an oxynitride (e.g., silicon oxynitride). 
It has been discovered that there is a critical range in thickness between 20 Angstroms and 70 Angstroms to maximize diffusion protection and minimize resistance of the channel and via.In various embodiments of the present invention, the sputter etch pre-clean is performed by a process such as argon ion bombardment or reactive helium and dilute hydrogen pre-clean.In various embodiment of the present invention, the silane (SiH4) treatment using a process such as thermal decomposition or soft plasma activation forms a surface silicon-doped layer or layer with impurities of silicon on the low-k dielectric surface, which increase the interface between the liner and a subsequently deposited seed layer.In various embodiments of the present invention, the wetting layer 152 is a refractory metal. It has been discovered that there is an "ultra-thin", critical range in thickness between 5 Angstroms and 30 Angstroms to maximize wetting action for the conductor material of the seed layer 134 and minimize resistance of the channel and via.In various embodiments of the present invention, the seed layer is deposited by a process such as sputter deposition or chemical vapor deposition in a thickness range of 25 Angstroms to 300 Angstroms.In various embodiments, the wetting layer 152 is of materials such as tantalum (Ta), titanium (Ti), tungsten (W), alloys thereof, and compounds thereof. The seed layers 134 (where used) are of materials such as copper (Cu), gold (Au), silver (Ag), compounds thereof and combinations thereof with one or more of the above elements. The conductor core 136, with or without seed layers, are of materials such as copper, aluminum (Al), gold, silver, compounds thereof, and combinations thereof. The dielectric layers are of dielectric materials such as silicon oxide (SiOx), tetraethoxysilane (TEOS), borophosphosilicate (BPSG) glass, etc. with dielectric constants from 4.2 to 3.9 or low dielectric constant dielectric materials such as fluorinated tetraethoxysilane (FTEOS), hydrogen silsesquioxane (HSQ), benzocyclobutene (BCB), TMOS (tetramethoxysilane), OMCTS (octamethyleyclotetrasiloxane), HMDS (hexamethyldisiloxane), SOB (trimethylsilil borxle), DADBS (diaceloxyditerliarybutoxsilane), SOP (trimethylsilil phosphate), etc. with dielectric constants below 3.9. The stop layers 114, 120, 122, and 124 (or thin insulation layers where these layers are not used as etch stop layers) and the capping layers are of materials such as silicon nitride (SixNx) or silicon oxynitride (SiON).While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations, will be apparent to those skilled in the art in light of the aforegoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and scope of the included claims. All matters hither-to-fore set forth or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense. |
Direct slave device to slave device (S2S) communication is enabled on a shared bus whose communication is managed by a master device. A first slave device that needs to communicate with a second slave device can transmit an S2S communication request to the master device. The request can include a requested number of codewords expected to be transmitted on the shared bus. The master device can have a current codeword limit that changes based on operating parameters. If the requested number of codewords is greater than the current codeword limit, or if the master device does not support S2S communication, the master device can reject the request. The request may also be rejected for other reasons, such as other activity on the shared bus. If the master device grants the request, the slave device can transmit up to the requested number of codewords to the other slave device over the shared bus. |
1.A master device, including:a bus interface circuit for coupling to a control data bus shared with a plurality of slave devices;A processing circuit coupled to the bus interface circuit and configured to:Controlling access by the plurality of slave devices to the control data bus;Receiving a slave to slave device communication request from a slave requesting access to the control data bus;The device-to-device communication request is for bypassing direct data transfer between the slave device of the master device and another slave device of the plurality of slave devices.2.The master device of claim 1 wherein the slave to slave communication request comprises a number of requested codewords to be transferred by the requesting party from the device to another slave device.3.The master device of claim 1 wherein said slave to slave communication request comprises a maximum of requested codewords to be transferred from the requesting slave device to the other slave device on said control data bus a number; and the processing circuit is further configured to:A response to the request is sent to the requesting slave device.4.The master device of claim 3 wherein said response grants said request by including in the response a number of codeword limits equal to or greater than a maximum number of codewords specified in said request.5.The master device of claim 3 wherein said processing circuit is further configured to:Monitoring the control data bus to detect the end of communication from the device to the slave device;After detecting the end of communication from the device to the slave device, the control of the control data bus is reacquired.6.The master device of claim 1 wherein said slave to slave communication request comprises a maximum of requested codewords to be transferred from the requesting slave device to the other slave device on said control data bus a number; and the processing circuit is further configured to:A response rejecting the request is sent to the requesting slave device.7.The master device of claim 6 wherein said slave to slave communication request is rejected if the requested maximum number of codewords is greater than a current codeword limit of said master device.8.The master device of claim 6 wherein said processing circuit is further configured to:A response rejecting the request is sent to the requesting slave device by transmitting an acceptable number of codewords when the requestor provides a acceptable number of codewords from the device that is greater than the current codeword limit.9.The master device of claim 6, wherein if the master device does not support slave device to slave device communication, the master device sends a response rejecting the request.10.A method of operation by a master device, comprising:Control access to control data buses shared with multiple slave devices;Receiving a slave-to-slave communication request from a slave requesting access to the control data bus, the device-to-device communication request for bypassing the slave device of the master device and the plurality of slave devices Direct data transfer between another slave device.11.The method of claim 10 wherein said slave to slave communication request comprises a maximum number of requested codewords to be transferred from the requesting slave to the other slave on said control data bus .12.The method of claim 11 further comprising:A response to the request is sent to the requesting slave device.13.The method of claim 12, further comprising:Monitoring the control data 
bus to detect the end of communication from the device to the slave device;After detecting the end of communication from the device to the slave device, the control of the control data bus is reacquired.14.The method of claim 11 further comprising:A response rejecting the request is sent to the requesting slave device.15.The method of claim 14 further comprising:Receiving a request from the requestor slave device to cause the master device to transfer a certain amount of information between the requestor slave device and the other slave device;Transferring data between the requested information between the requestor slave device and the other slave device.16.The method of claim 14 wherein said responding to said request rejects comprises an acceptable number of codewords less than a maximum number of requested codewords.17.A slave device that includes:a bus interface circuit for coupling to a control data bus shared with the plurality of slave devices and the at least one master device;A processing circuit coupled to the bus interface circuit and configured to:Transmitting a slave to slave communication request from the slave device to the master device on the control data bus, wherein the device to device communication request is for bypassing the slave device of the master device and the Direct data transfer between another slave device of multiple slave devices.18.The slave device of claim 17 wherein said communication request comprises a maximum number of requested codewords to be transmitted by said slave device to another slave device on said control data bus.19.The slave device of claim 18, wherein the processing circuit is further configured to:Receiving a response from the primary device rejecting the request.20.The slave device of claim 19, wherein the response comprises an additional number of acceptable codewords from the master device.21.The slave device of claim 20 wherein said processing circuit is further configured to:Transmitting a new slave device to a slave device communication request to the master device, wherein the new slave device to slave device communication request includes a maximum number of new requested codewords that are less than or equal to an acceptable number of received codewords.22.The slave device of claim 19 wherein said processing circuit is further configured to:A new request is sent to cause the master device to transfer a certain amount of information between the requestor slave device and the other slave device.23.The slave device of claim 19, wherein the processing circuit is further configured to perform any of the following:Resending to the master device the same slave device to slave device communication request having the same maximum number of codewords at a later time; orTransmitting a new slave device with a second codeword restriction to a slave device communication request, the second codeword limit being less than a previously requested maximum codeword limit but greater than being identified by the master device in its rejection of the initial request The acceptable number of codewords.24.The slave device of claim 19 wherein said processing circuit is further configured to:Transmitting a host request to the master device to transfer control of the control data bus to the requestor slave device;If the host request is granted by the master device, operating as a new host of the control data bus;Sending the desired number of data codewords to another slave device.25.The slave device of claim 17 wherein said processing circuit is 
further configured to: receive a response granting the request; and send a slave to slave communication on the control data bus.26. The slave device of claim 17, wherein said slave to slave communication is limited to an acceptable number of codewords approved by said master device.27. A method operational on a slave device, comprising: coupling the slave device to a control data bus, the control data bus being shared with a plurality of slave devices and at least one master device; and transmitting a slave to slave communication request from the slave device to a master device on the control data bus, wherein the slave to slave communication request is for direct data transfer, bypassing the master device, between the slave device and another slave device of the plurality of slave devices.28. The method of claim 27, wherein said slave to slave communication request comprises a number of requested codewords to be transferred from said slave device to another slave device on said control data bus.29. The method of claim 27, further comprising: receiving a response granting the request; and sending a slave to slave communication on the control data bus.30. The method of claim 29, wherein said slave to slave communication is limited to an acceptable number of codewords approved by said master device. |
Camera control interface communicates from device to slaveThe application for this division is a divisional application for the PCT national phase invention patent application with the PCT international filing date of October 7, 2014 and the national application number 201480054967.6 entitled "Camera control interface from device to slave communication".Cross-reference to related applicationsThis patent application claims Provisional Application No. 61/887,895, filed on October 7, 2013, entitled "Camera Control Interface Slave Device to Slave Device Communication", and October 6, 2014 The priority of the non-provisional application No. 14/507,179, entitled "Camera Control Interface Slave Device to Slave Device Communication", which is assigned to the assignee of the present application. Citations are explicitly included here.fieldThe present disclosure relates to enabling operations on a shared control data bus, and more particularly to data communication from one slave to another on a multi-wire data and/or clocked data bus.Background techniqueI2C (also known as I2C) is a multi-master serial single-ended control data bus that is used to attach low-speed peripherals to motherboards, embedded systems, cellular phones, or other electronic devices. The I2C control data bus includes a clock (SCL) and data (SDA) line with 7-bit addressing. The control data bus has two node roles: a master node and a slave node. The master node is the node that generates the clock and initiates communication with the slave node. A slave node is a node that receives a clock and responds when it is addressed by the master node. The I2C control data bus is a multi-master data bus, which means that there can be any number of master nodes. In addition, the primary and secondary roles can change between messages. I2C defines the basic type of message, where each message starts with START and ends with STOP.In this context of camera implementation, unidirectional transmission can be used to capture images from sensors and transfer such image data to a memory in a baseband processor, and control data can be at the baseband processor and these and other peripherals Exchange between. In one example, a Camera Control Interface (CCI) protocol can be used for such control data between a baseband processor and an image sensor (and/or one or more slave nodes). In one example, the CCI protocol can be implemented on an I2C serial control data bus between the image sensor and the baseband processor.The master controls access to the control data bus. While some slave devices may have the ability to switch to host mode of operation, other slave devices may not operate in master mode. One major difference between only slave-capable slaves and master-capable slaves is the ability to receive (eg, handle) interrupts on the interrupt request line (IRQ). Only slave devices with slave capability can cause/send interrupts but cannot handle such interrupts. Therefore, since interrupt handling is extremely important, the slave device has not been able to communicate directly with other slave devices. Accordingly, it may be desirable to implement slave-to-slave communication on a shared data bus controlled by the master device.OverviewThe master device is provided to include a bus interface circuit and processing circuitry. The bus interface circuit can be used to couple to a control data bus shared with multiple slave devices. 
The processing circuit can be coupled to the bus interface circuit and configured to: (a) control access by the plurality of slave devices to the control data bus; and/or (b) receive a slave-to-slave communication request from a slave device requesting access to the control data bus.

In one example, the slave-to-slave communication request may include a number of requested codewords to be transferred by the requesting slave device to another slave device.

In another example, the slave-to-slave communication request may include a maximum number of requested codewords to be transferred from the requesting slave device to the other slave device on the control data bus. The processing circuit can be further configured to send a response to the requesting slave device granting the request. The response grants the request by including in the response a codeword limit equal to or greater than the maximum number of codewords specified in the request. The processing circuit can be further configured to: (a) monitor the control data bus to detect an end of the slave-to-slave communication; and/or (b) reacquire control of the control data bus after detecting the end of the slave-to-slave communication.

In yet another example, the slave-to-slave communication request can include a maximum number of requested codewords to be transferred from the requesting slave device to another slave device on the control data bus. The processing circuit can be further configured to send a response to the requesting slave device rejecting the request. The slave-to-slave communication request may be rejected if the requested maximum number of codewords is greater than a current codeword limit of the master device. The processing circuit can be further configured to send the response rejecting the request by transmitting an acceptable number of codewords when the requesting slave device provides a maximum number of requested codewords greater than the current codeword limit. The master device may also send a response rejecting the request if it does not support slave-to-slave communication.

A method operational on a master device is provided, comprising: (a) controlling access to a control data bus shared with a plurality of slave devices; and/or (b) receiving a slave-to-slave communication request from a slave device requesting access to the control data bus. The slave-to-slave communication request may include a maximum number of requested codewords to be transferred from the requesting slave device to the other slave device on the control data bus.

The method can further include: (a) transmitting a response granting the request to the requesting slave device; (b) monitoring the control data bus to detect an end of the slave-to-slave communication; and/or (c) reacquiring control of the control data bus after detecting the end of the slave-to-slave communication.

The method can further include transmitting a response rejecting the request to the requesting slave device. In one example, in response to the rejection of the request, the method can further include: (a) receiving a request from the requesting slave device for the master device to transfer a particular amount of information between the requesting slave device and the other slave device; and/or (b) transferring the requested information between the requesting slave device and the other slave device. 
In another example, the response rejecting the request may include an acceptable number of codewords that is less than the maximum number of requested codewords.

A slave device is provided that includes a bus interface circuit and a processing circuit. The bus interface circuit can be used to couple the slave device to a control data bus shared with a plurality of slave devices and at least one master device. The processing circuit can be coupled to the bus interface circuit and configured to transmit a slave-to-slave communication request from the slave device to the master device on the control data bus. The communication request can include a number of requested codewords to be transmitted by the slave device to another slave device on the control data bus.

In one example, the processing circuit can be further configured to receive a response from the master device rejecting the request. The rejection response may include an acceptable number of codewords from the master device.

In one alternative upon receiving the rejection response, the processing circuit can be further configured to send a new slave-to-slave communication request to the master device, wherein the new slave-to-slave communication request includes a new maximum number of requested codewords that is less than or equal to the received acceptable number of codewords.

In another alternative upon receiving the rejection response, the processing circuit can be further configured to send a new request for the master device to transfer a particular amount of information between the requesting slave device and the other slave device.

In yet another alternative upon receiving the rejection response, the processing circuit can be further configured to do either of the following: (a) resend to the master device, at a later time, the same slave-to-slave communication request with the same maximum number of codewords; or (b) transmit a new slave-to-slave communication request with a second codeword limit, the second codeword limit being less than the previously requested maximum codeword limit but greater than the acceptable number of codewords identified by the master device in its rejection of the initial request.

In still another alternative upon receiving the rejection response, the processing circuit can be further configured to: (a) send a mastership request to the master device to transfer control of the control data bus to the requesting slave device; (b) if the master device grants the mastership request, operate as a new master of the control data bus; and/or (c) transmit the desired number of data codewords to the other slave device.

In another example, the processing circuit can be further configured to: (a) receive a response granting the request; and/or (b) send a slave-to-slave communication on the control data bus. The slave-to-slave communication can be limited to an acceptable number of codewords approved by the master device.

According to another aspect, a method operational on a slave device is provided, comprising: (a) coupling the slave device to a control data bus shared with a plurality of slave devices and at least one master device; and/or (b) sending a slave-to-slave communication request from the slave device to the master device on the control data bus. The slave-to-slave communication request may include a maximum number of requested codewords to be transferred from the slave device to another slave device on the control data bus. 
In one example, the method can further include: (a) receiving a response granting the request; and/or (b) transmitting a slave-to-slave communication on the control data bus. The slave-to-slave communication can be limited to an acceptable number of codewords approved by the master device.

Brief Description of the Drawings

The features, nature, and advantages of the invention will be apparent from the description and the appended claims.

FIG. 1 is a block diagram illustrating an exemplary device having a baseband processor and an image sensor and implementing an image data bus and a multi-mode control data bus.
FIG. 2 illustrates an exemplary slave-to-slave communication process.
FIG. 3 conceptually illustrates an exemplary write data codeword.
FIG. 4 is a block diagram illustrating an exemplary method for transcoding data bits into sequential symbols at a transmitter to embed a clock signal within the sequential symbols.
FIG. 5 illustrates an exemplary conversion between transition numbers and sequential symbols.
FIG. 6 illustrates an exemplary conversion from bits to transition numbers at a transmitter and from transition numbers back to bits at a receiver.
FIG. 7 illustrates a general example of converting a ternary number (base-3 number) into a binary number, where each T in {T11, T10, ... T2, T1, T0} is a transition number.
FIG. 8 illustrates an example method of converting a binary number (bits) into a 12-digit ternary number (base-3 number).
FIG. 9 illustrates one possible implementation of the divide and modulo operations of FIG. 8, which may be synthesized by any commercial synthesis tool.
FIG. 10 illustrates an example of a 20-bit region including a 19-bit data region (e.g., bits 0-18) and an additional 20th-bit region (e.g., bit 19).
FIG. 11 illustrates that the range of numbers in which bit 19 is set, 2221_2201_2002₃ to 2222_2222_2222₃, can be subdivided into six sections.
FIG. 12 illustrates an exemplary mapping of a portion of the bit-19 map of FIG. 11.
FIG. 13 conceptually illustrates a sequence of transmissions on a control data bus that can be executed for slave-to-slave communication.
FIG. 14 conceptually illustrates an exemplary slave-to-slave transfer request.
FIG. 15 conceptually illustrates a grant command for a slave-to-slave communication protocol.
FIG. 16 conceptually illustrates the details of a grant command for a slave-to-slave transfer.
FIG. 17 is a block diagram illustrating an exemplary master device adapted for slave-to-slave communication.
FIG. 18 illustrates a method operational on a master device to facilitate slave-to-slave communication.
FIG. 19 is a block diagram illustrating an exemplary slave device adapted for slave-to-slave communication.
FIG. 20 illustrates a method operational on a slave device to perform slave-to-slave communication.

Detailed Description

In the following description, specific details are given to provide a thorough understanding of the embodiments. However, those of ordinary skill in the art will understand that the embodiments may be practiced without these specific details. For example, circuits may be shown in block diagrams in order to avoid obscuring the embodiments in unnecessary detail. In other instances, well-known circuits, structures, and techniques may not be shown in detail in order not to obscure the embodiments.

Overview

A first feature provides direct slave-to-slave communication over a shared data bus managed by the master device. 
A first slave device that wants to communicate with a second slave device can make a slave-to-slave communication request to the master device. The request may include the number of requested codewords that the first slave device wishes to transmit on the shared control data bus managed by the master device. Such a communication request may be used for a direct transfer (e.g., a read or write operation) between the first slave device and the second slave device that bypasses the master device (e.g., the data to be transferred is not managed or relayed by the master device). The master device may have a current codeword limit that varies based on operational parameters. For example, the master device may allow different numbers of codewords to be communicated between slave devices at different times. During a period of peak control bus usage, for instance, slave devices may be limited to communicating with each other using no more than eight codewords at a time. There may be times when slave-to-slave communication is not allowed at all (i.e., the current codeword limit may be zero). Likewise, during low control bus usage, the current codeword limit may be 1024 codewords. Thus, the current codeword limit can be dynamically adjusted by the master device managing the bus and can change over time depending on control data bus conditions or for any other desired reason.

Because the codeword limit can change dynamically, and a slave device may not know what the current codeword limit is, a second feature provides for communicating the current codeword limit to the slave device. For example, a first slave device wishing to communicate with a second slave device transmits a slave-to-slave communication request that includes the desired number of codewords to be communicated to the second slave device. When the desired number of codewords is greater than the current codeword limit, the master device sends a reject message to the first slave device that includes the current codeword limit, so that the first slave device can request to send a smaller message to the second slave device. The master device receives this request for a smaller message and sends a message to the first slave device granting the request. The first slave device then sends the message to the second slave device. The master device monitors the message(s) being sent from the first slave device to the second slave device. Upon detecting that an "end" control code has been transmitted from the first slave device to the second slave device, the master device knows that the communication has ended and fully regains control of the control data bus, as it had before granting the communication request from the first slave device. Furthermore, while the first slave device is communicating with the second slave device, the master device can continuously monitor the IRQ line. Once the slave-to-slave communication has ended, any interrupt requests that occurred during the slave-to-slave communication are handled. In one feature, the current codeword limit is variable based not only on control data bus traffic but also on the recent frequency of interrupt requests. 
Thus, during periods of relatively frequent interrupt requests, the current codeword limit may be lower than during periods of relatively infrequent interrupt requests.

Exemplary System for Slave-to-Slave Communication

FIG. 1 is a block diagram illustrating a device 102 having a baseband processor 104 and an image sensor 106 and implementing an image data bus 116 and a multi-mode control data bus 108 on which slave-to-slave communication can be implemented as described herein. Although FIG. 1 illustrates the multi-mode control data bus 108 within a camera device, it should be apparent that the control data bus 108 can be implemented in a variety of different devices and/or systems. Image data may be transmitted from the image sensor 106 to the baseband processor 104 over the image data bus 116 (e.g., a high-speed differential D-PHY link). In one example, the control data bus 108 may be an I2C control data bus that includes two lines: a clock line (SCL) and a serial data line (SDA). The clock line SCL can be used to synchronize all data transfers on the I2C bus (control data bus 108). The data line SDA and clock line SCL are coupled to all devices 112, 114, and 118 on the I2C bus (control data bus 108). In this example, control data can be exchanged between the baseband processor 104 and the image sensor 106 as well as other peripherals 118 via the control data bus 108. In some implementations, this mode of operation on the I2C control data bus, when used in a camera application, may be referred to as Camera Control Interface (CCI) mode.

According to one aspect, an improved mode of operation can be implemented on the multi-mode control data bus 108 to support camera operation. This improved mode of operation on the I2C control data bus may be referred to as Camera Control Interface extension (CCIe) mode when used in camera applications. In this example, the baseband processor 104 includes a master device 112 and the image sensor 106 includes a slave device 114, and both can operate on the control data bus 108 in accordance with the CCIe mode without affecting the proper operation of other legacy I2C devices coupled to the control data bus 108. According to one aspect, the improved mode on the control data bus 108 can be implemented without any bridge device between CCIe devices and legacy I2C slave devices. An interrupt request (IRQ) line/bus 120 couples devices 114 and 118 to the master device 112, allowing slave devices 114 and 118 to inform the master device 112 that they require attention. That is, a slave requesting attention pulls the normally high IRQ line low (grounding the IRQ line 120), and the master responds by first identifying which slave requested the interrupt and then polling that slave for its IRQ status. For example, the status may be a request to perform a slave-to-slave communication.

FIG. 2 illustrates an exemplary slave-to-slave communication process. Similar to FIG. 1, the device can include a shared control data bus 202 (e.g., a CCIe bus) and a separate single-wire interrupt bus 204 to which multiple devices can be coupled. In this example, an active/current master device 206 can control/manage access to the control data bus 202 by one or more other devices (e.g., inactive master devices and/or slave devices).

In a first phase 210, a first slave device 208 may wish to initiate communication with a second slave device 216. To accomplish this, the first slave device can issue/transmit an interrupt signal 218 on the interrupt bus 204. 
The interrupt bus 204 may allow any device coupled to it to unilaterally assert an interrupt signal on the interrupt bus 204 as long as no other device is asserting the interrupt signal. In response to sensing or receiving the interrupt signal 218, the master device 206 can attempt to ascertain which slave device issued the interrupt signal 218. This can be done by the master device 206 polling or requesting (220) each slave device to provide its status. In one example, this may be done by requesting (220) the status of each device coupled to the control data bus 202 until the master device 206 identifies the first slave device 208 that issued the interrupt signal 218.

In a second phase 212, in response to receiving the status request 220, the first slave device 208 can transmit its status 222, which indicates that it issued the interrupt 218 and/or that it wishes to perform a slave-to-slave transfer/communication. Such a slave-to-slave communication request may be used for a direct data transfer (e.g., a read or write operation) between the first slave device 208 and the second slave device 216 that bypasses the master device 206 (e.g., the data to be transferred is not managed or sent by the master device). In one example, the status 222 can be obtained by the master device 206 reading information from a status register of the first slave device 208.

Once the first slave device 208 is identified as the issuer of the interrupt signal and the desired service is ascertained (e.g., use of the control data bus 202 for slave-to-slave communication), the master device 206 can transmit a grant indicator 224 via the interrupt bus 204 to grant the request. At this point, the first slave device 208 has been granted a limited license to use the control data bus 202 for its own communication with another slave device 216. Note that because the master device 206 controls or manages the use of the control data bus 202 by all devices coupled to it, there is no opportunity for another device to create a conflict on the control data bus 202. In one example, the first slave device 208 is granted use of the control data bus 202 to transmit or receive a predetermined number of data codewords. Granting a slave-to-slave communication request in this way is different from transferring control of the control data bus 202 to the first slave device.

In a third phase 214, the first slave device 208 recognizes that the master device 206 has granted its request and may perform a slave-to-slave transfer or communication 224. The slave-to-slave transfer or communication 224 can read data from, or transmit data to, the second slave device 216. After the predetermined number of data codewords has been transmitted on the control data bus 202, the master device regains control and use of the control data bus 202. The first slave device 208 stops transmitting on the control data bus 202 after transmitting and/or receiving the predetermined number of data codewords.

Exemplary Communication Protocol Supporting Slave-to-Slave Communication

FIG. 3 illustrates an exemplary write data codeword format. Each data codeword 300 includes a 16-bit data portion 302, a 2-bit control code 304, a 1-bit error detection constant 310, and a spare bit 306. 
The 16-bit data portion 302 can be partitioned into a 14-bit least significant bit portion 312 (located at bits 5 through 18 of the data codeword 300) and a 2-bit most significant bit portion 308 (located at bits 1 through 2 of the data codeword 300). The 1-bit error detection constant 310 can be used to detect errors in the data codeword 300: if its value is anything other than the expected constant (e.g., "0"), an error is indicated. In one example, the write data codeword 300 can be a CCIe write data codeword.

Control code table 314 illustrates various possible values of the control code 304. In one example, multiple write data codewords can be sent sequentially. If the control code of the current write codeword is '00' (symbol C0), the data is written to the previous address. If the control code of the current write codeword is '01' (symbol C1), the data is written to the previous address + 1. If the control code of the current codeword is '10' (symbol E), this indicates the end of the frame, and the next codeword can be a slave identifier (SID) or an exit code.

The data codeword 300 may also have a spare bit 306 (e.g., bit 19, also referred to as the 20th bit), which may be used to transfer commands and other information between the master device and one or more slave devices. The spare bit 306 (e.g., bit 19) may be used to encode commands exchanged between devices coupled to the control data bus 108. The spare-bit data region (e.g., bit 19, also referred to as the 20th bit) that may be defined by using the spare bit 306 is further illustrated and discussed with reference to FIGS. 10, 11, and 12.

FIG. 4 is a block diagram illustrating an exemplary method for transcoding data bits into sequential symbols at a transmitter to embed a clock signal within the sequential symbols. At a transmitter 402, a data bit sequence 404 is converted to a ternary (base-3) number (e.g., where each digit of the ternary number is a "transition number"), and the ternary digits are converted into the sequential symbols transmitted on the control data bus, which includes clock line SCL 412 and data line SDA 414.

In one example, the original 20-bit binary data 404 is input to a bit-to-transition-number converter block 408 for conversion to a 12-digit ternary number 409. Each digit of the 12-digit ternary number represents a "transition number." Two consecutive transition numbers may have the same digit value. Each transition number is converted to a sequential symbol at a transition-to-symbol block 410 such that no two consecutive sequential symbols have the same value. Because a transition (e.g., a change) is thereby guaranteed at every sequential symbol, a clock signal can be embedded in the symbol stream. Each sequential symbol 416 is then transmitted over a two-wire physical link (e.g., an I2C control data bus including SCL line 412 and SDA line 414).
Subsequently, the transition number to bit converter 432 converts twelve (12) transition numbers (ie, ternary numbers) to recover twenty (20) bits of raw data from the 12-bit ternary number.The techniques illustrated herein can be used to increase the link rate of the control data bus 108 (FIG. 1) beyond the link rate provided by the I2C standard control data bus, and is referred to herein as the CCIe mode. In one example, a master node/device and/or slave node/device coupled to control data bus 108 may implement a transmitter and/or receiver that embeds a clock signal within a sequential symbol change/transition (as shown in FIG. 5) Explain, thus achieving a higher bit rate than is possible with the standard I2C control data bus on the same control data bus.FIG. 5 illustrates an exemplary transition between transition number 502 and sequential symbol 504. Each digit of a ternary number (a number with a base of 3) (also referred to as a number of transitions) may have three (3) possible digits or one of states 0, 1, or 2. Although the same number may appear in two consecutive digits of a ternary number, none of the two consecutive sequential symbols have the same value. The transition between the number of transitions and the sequential symbols ensures that the sequential symbols always change (from sequential symbols to sequential symbols) even if the number of consecutive transitions is the same.In one example, the conversion function adds 1 to the number of transitions (eg, a digit of a ternary number) and then adds it to the previous original sequential symbol value. If the addition results in a number greater than 3, it flips from 0, and the result then becomes the state number or value of the current sequential symbol.In the first loop 506, when the first transition number (Ta) 1 is input, the previous sequential symbol (Ps) is 1, so the transition number 1 is incremented by 1 and then added to the previous sequential symbol (Ps), and the result is obtained. The current sequential symbol (Cs) 3 becomes the current sequential symbol that is sent to the physical link.In the second (next) loop 508, the second transition number (Tb) 0 is input, and the second transition number 0 is incremented by one and added to the previous sequential symbol (Ps) 3. Since the result of the addition (0+1+3) is equal to 4 and greater than 3, the flip number 0 becomes the current sequential symbol (Cs).In the third loop 510, a third transition number (Tc) 0 is input. The conversion logic adds the third transition number of 0 to 1 and adds it to the previous sequential symbol (Ps) 0 to generate the current sequential symbol (Cs) 1.In the fourth cycle 512, the fourth transition number (Td) 2 is input. The conversion logic adds the fourth transition number (Td) 2 to 1 and then adds the previous symbol (Ps) 1 to generate the current symbol (Cs) 0 (since the result of the addition is greater than 3, the flip count 0 becomes the current order Symbol).Thus, even if two consecutive ternary digits Tb and Tc have the same number, the conversion ensures that two consecutive sequential symbols have different state values. 
Due to this conversion, the guaranteed change or transition between consecutive symbols in the symbol sequence 504 can be used to embed the clock signal, thereby freeing the clock line SCL of the I2C control data bus for data transfer.

Note that although the transition-number-to-symbol conversion described above uses a guaranteed increment of "1" between consecutive sequential symbols, other values may be used in other implementations to ensure a transition or change between sequential symbols.

Referring again to FIG. 4, at the receiver 420 the process illustrated in FIG. 5 is reversed to convert the sequential symbols back into bits, and in the process the clock signal is extracted from the symbol transitions. The receiver 420 receives sequential symbols 422 over a two-wire physical link (e.g., an I2C bus including SCL line 424 and SDA line 426). The received sequential symbols 422 are input to a clock-data recovery (CDR) block 428 to recover the clock timing and sample the transcoded symbols (S). The symbol-to-transition-number converter block 430 then converts each sequential symbol into a transition number, i.e., one digit of the ternary number. Subsequently, the transition-number-to-bit converter 432 converts the 12 transition numbers (i.e., the 12-digit ternary number) to recover the 20 bits of raw data.

FIG. 6 illustrates an exemplary conversion from bits to transition numbers at a transmitter 602 and from transition numbers back to bits at a receiver 604. This example illustrates transmission over a two-wire system using 12 transition numbers. The transmitter 602 feeds binary information (bits) into a "bits to 12×T" converter 606 to generate 12 transition numbers T0 through T11. The receiver 604 receives the 12 transition numbers (T0 through T11), which are fed to a "12×T to bits" converter 608 to retrieve the binary information (bits). If there are r possible symbol transition states for each T (T0 to T11), then 12 transitions can convey r^12 different states. For a two-wire bus, r = 2^2 − 1. Thus, transitions T0...T11 contain data that can take on (2^2 − 1)^12 different states; that is, r = 4 − 1 = 3, and the number of states = (4 − 1)^12 = 531441.

In the example of a two-wire system using 12 transition numbers, the number of possible symbol transitions r for each T is 3 (= 2^2 − 1). If the number of symbols in a group is 12, a 12-digit ternary number (base-3 number) can be used: T11, T10, ... T2, T1, T0, where each Ti is 0, 1, or 2. For example, for {T11, T10, ... T2, T1, T0} = {2, 1, 0, 0, 1, 1, 0, 1, 0, 1, 2, 1}, the ternary number is:

2100_1101_0121₃ = 2×3^11 + 1×3^10 + 0×3^9 + 0×3^8 + 1×3^7 + 1×3^6 + 0×3^5 + 1×3^4 + 0×3^3 + 1×3^2 + 2×3^1 + 1×3^0 = 416356 (0x65A64).

In this way, 12 transition numbers can be converted into a single number. Note that, for example in FIG. 4, the ternary number 2100_1101_0121₃ serves as the set of transition numbers, so that each transition number (i.e., a digit of the ternary number) can be mapped to a sequential symbol, and vice versa.

The example of a two-wire system with 12 transition numbers illustrated in FIG. 6 can be generalized to an n-wire system with m transition numbers. If there are r possible symbol transition states for each T (T0 to Tm−1), then m transitions can convey r^m different states, where r = 2^n − 1. Therefore, the transition numbers T0...Tm−1 carry data that can take on (2^n − 1)^m different states.
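The divide-and-modulo conversion between a 20-bit value and its 12-digit ternary representation (discussed with reference to FIGS. 7-9 below) can be sketched in C as follows. This is only an illustrative software model of the arithmetic, not the synthesized hardware of FIG. 9, and it checks the worked example above (2100_1101_0121₃ = 416356 = 0x65A64).

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_DIGITS 12  /* 12 ternary transition numbers per 20-bit word */

/* Binary-to-ternary: extract 12 base-3 digits by repeated modulo/divide,
 * least significant digit (T0) first. */
static void bits_to_ternary(uint32_t value, uint8_t t[NUM_DIGITS])
{
    for (int i = 0; i < NUM_DIGITS; i++) {
        t[i] = (uint8_t)(value % 3u);
        value /= 3u;
    }
}

/* Ternary-to-binary: accumulate T11..T0 back into an integer. */
static uint32_t ternary_to_bits(const uint8_t t[NUM_DIGITS])
{
    uint32_t value = 0;
    for (int i = NUM_DIGITS - 1; i >= 0; i--)
        value = value * 3u + t[i];
    return value;
}

int main(void)
{
    /* {T11..T0} = {2,1,0,0,1,1,0,1,0,1,2,1}, stored here as T0..T11. */
    const uint8_t example[NUM_DIGITS] = { 1, 2, 1, 0, 1, 0, 1, 1, 0, 0, 1, 2 };

    uint32_t value = ternary_to_bits(example);
    printf("value = %u (0x%X)\n", (unsigned)value, (unsigned)value);
    assert(value == 416356u);                  /* 0x65A64, as in the text */

    uint8_t t[NUM_DIGITS];                     /* round trip back to digits */
    bits_to_ternary(value, t);
    for (int i = 0; i < NUM_DIGITS; i++)
        assert(t[i] == example[i]);
    return 0;
}
```

Because 3^12 = 531441 exceeds 2^19 = 524288 but is less than 2^20, twelve transition numbers can always carry bits 0-18, while bit 19 must be signaled using the otherwise unused ternary values discussed below with reference to FIG. 10.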
FIG. 7 illustrates a general example of converting a ternary number (base-3 number) into a binary number, where each T in {T11, T10, ... T2, T1, T0} is a transition number.

FIG. 8 illustrates an example method of converting a binary number (bits) into a 12-digit ternary number (base-3 number). Each digit of the ternary number can be calculated by dividing the remainder from the next-higher digit's calculation (the result of the modulo operation) by the power of 3 corresponding to that digit and discarding the fractional part.

FIG. 9 illustrates one possible implementation of the divide and modulo operations of FIG. 8, which may be synthesized by any commercial synthesis tool.

FIG. 10 conceptually illustrates how bit 19 (the 20th bit, or "spare bit" 306 in FIG. 3) is used in the CCIe protocol and can be used to implement slave-to-slave communication. More specifically, FIG. 10 illustrates bit 19 when bit counting starts from bit 0. In other words, counting bits from zero as is typical in computer science, the 20th bit is referred to as bit 19. Here, bits 0-18 are represented by the ternary number range 0000_0000_0000₃ to 2221_2201_2001₃. The ternary numbers in the range 2221_2201_2002₃ to 2222_2222_2222₃ are otherwise unused, and this range can therefore be used to represent bit 19 (i.e., the 20th bit). In other words, the ternary number 2221_2201_2002₃ equals binary 1000_0000_0000_0000_0000 (hex 0x80000), and the ternary number 2222_2222_2222₃ (0x81BF0) is the largest possible 12-digit ternary number. In one implementation of slave-to-slave communication, the 20th bit (bit 19) can be utilized as described herein.

FIG. 11 conceptually illustrates the CCIe bit-19 mapping protocol for ternary numbers 0000_0000_0000₃ through 2222_2222_2222₃. Note that different types of commands can be encoded in the bit-19 region (e.g., the 20th bit).

FIG. 12 illustrates an exemplary mapping of a portion of the bit-19 map of FIG. 11.

FIG. 13 conceptually illustrates a sequence of transmissions on a control data bus that can be executed for slave-to-slave communication. In this example, the slave-to-slave communication protocol utilizes the range from 2222_2112_1122₃ to 2222_2202_2120₃. A slave device can send an interrupt signal to the master device on a separate interrupt bus/line. In response to such an interrupt signal 1301, the master device can initiate/send an IRQ query 1303 on the control data bus that causes the requesting slave device (S1) to respond 1304. The master device can then read the status 1305 of the requesting slave device (S1), in which a slave-to-slave communication request 1306 is indicated. The slave-to-slave communication request 1306 may include an upper limit (maximum) on the number of codewords for the requested transmission to another slave device. The number of codewords may cover more than just the data portion of the message; it may be the entire number of codewords to be transferred between the slave devices, including all overhead and/or envelope information such as slave device identifiers (SIDs), cyclic redundancy check or checksum (CRC) information, and/or synchronization (SYNC) information. After the master device grants the request 1308, it monitors the slave-to-slave transfer 1310, and after the requesting slave device sends an end code, the master device resumes control of the control data bus. 
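The transaction sequence of FIG. 13 can be summarized in a compact C sketch of the master's coordination loop. The function names, the way the request and limit are exchanged, and the limit-adjustment policy are illustrative assumptions rather than part of any published CCIe specification; the sketch only follows the ordering of events and the codeword-limit negotiation described above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bus primitives -- stand-ins for a real CCIe driver API. */
extern bool     irq_asserted(void);
extern uint8_t  irq_query_and_identify_slave(void);      /* IRQ query 1303/1304 */
extern uint16_t read_slave_status(uint8_t sid);           /* status read 1305   */
extern void     send_s2s_response(uint8_t sid, uint16_t codeword_limit);
extern bool     end_code_seen(void);                      /* END control code   */

static uint16_t current_codeword_limit = 1024;            /* dynamically adjusted */

/* One pass of the master's coordination loop for FIG. 13. */
void master_service_s2s(void)
{
    if (!irq_asserted())
        return;

    uint8_t  sid       = irq_query_and_identify_slave();
    uint16_t requested = read_slave_status(sid);           /* max codewords 1306 */

    if (requested == 0)
        return;                                            /* no S2S request */

    if (requested <= current_codeword_limit) {
        /* Grant 1308: echo a limit >= the requested maximum. */
        send_s2s_response(sid, requested);

        /* Transfer 1310: the slaves exchange data directly; the master only
         * watches the bus for the END code, then resumes bus control. */
        while (!end_code_seen())
            ;
    } else {
        /* Reject by returning the acceptable (smaller) limit; zero would
         * mean slave-to-slave transfers are currently disabled. */
        send_s2s_response(sid, current_codeword_limit);
    }
}
```

In a real driver the wait for the END code would be interrupt-driven rather than a busy-wait, and `current_codeword_limit` would be adjusted over time based on bus traffic and the recent frequency of interrupt requests, as described earlier.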
Once the master device has notified the requesting slave device that its request is granted, it monitors the control data bus to detect the end of the granted access. Note that during the slave-to-slave transfer of frame 1310, the requesting slave device may send a write command 1312 and/or a read command 1314 to the other slave device.

FIG. 14 conceptually illustrates an exemplary slave-to-slave transfer request. The exemplary slave-to-slave transfer request 1403 may be embedded in the portion 1402 of the bit-19 mapping illustrated in FIGS. 11 and 12 (e.g., utilizing the range from 2222_2112_1122₃ to 2222_2202_2120₃). One example of a slave-to-slave transfer request 1403 is defined by a first bit sequence 1404 in conjunction with two tables 1406 and 1408. The slave-to-slave transfer request 1403 can include the number of codewords 1408 to be transmitted on the control data bus during the slave-to-slave transfer.

FIG. 15 conceptually illustrates a grant command for the slave-to-slave communication protocol. In this example, the slave-to-slave communication protocol 1502 utilizes the range from 2222_2112_1122₃ to 2222_2202_2120₃ when sending a request grant and/or rejection. The master device grants a slave-to-slave communication request by returning to the requesting slave device a message that includes the same codeword limit as in the slave-to-slave communication request originally sent from the slave device to the master device. Alternatively, the master device returns a codeword limit less than the requested number, which the slave device interprets as both a rejection of the original request and a new codeword limit. When the new (i.e., current) codeword limit is zero, the slave device knows that slave-to-slave communication is currently disabled and waits a pre-programmed amount of time before making a second request. Otherwise, if the returned current codeword limit is non-zero, the slave device splits the original, too-long message into multiple messages no larger than the returned codeword limit. Additionally, in some embodiments one or more slave devices may have a sleep mode to conserve energy; when the intended recipient of a slave-to-slave communication is in sleep mode, the master device may wake the sleeping slave device before granting the requesting slave device's communication request. Alternatively, the master device may first reject the requesting slave device's communication request by sending a current codeword limit of zero and then wake the sleeping slave device. After the pre-programmed time period, the rejected slave device initiates a second request that the master device may now grant.

An example of a slave-to-slave transfer grant 1503 is defined by a first bit sequence 1504 in conjunction with two tables 1506 and 1508. The slave-to-slave transfer grant 1503 can include the number of codewords 1508 that are allowed to be transmitted on the control data bus during the slave-to-slave transfer.

FIG. 16 conceptually illustrates the details of a grant command for a slave-to-slave transfer. In this example, the master device can transmit a frame 1602 that includes an identifier (SID) of the requesting slave device and an additional codeword 1606. 
As part of the "Address" field, the master device can provide slave device to slave device grant 1608.Exemplary master device and method operable therein17 is a block diagram illustrating an exemplary master device adapted for communication from a device to a slave device. The master device 1702 can include a first communication interface/circuit 1706, a second communication interface/circuit 1708, and/or a processing circuit 1704. The first communication interface/circuit 1706 can be used to couple to a single line interrupt request (IRQ) bus to which a plurality of other devices can be coupled. A second communication interface/circuit 1708 can be used to couple to the control data bus to which a plurality of other devices can also be coupled.Processing circuitry 1704 can include various sub-circuits and/or modules to perform one or more of the functions described herein. For example, the communication management circuit/module 1710 can be adapted to manage communications on the data bus for all devices coupled to the control data bus based on an interrupt signal asserted on the IRQ bus. The IRQ bus monitoring circuit/module 1712 can be adapted to monitor the IRQ bus to ascertain when the IRQ signal is asserted (eg, from the device). The slave to slave communication grant/reject circuit/module 1714 can be adapted to communicate directly with another slave device on the control data bus in response to a request from the device (eg, grant or deny). The data bus monitoring circuit/module 1716 can be adapted to allow the master device to monitor the control data bus to ascertain when the slave to slave communication ends.FIG. 18 illustrates a method 1800 that can operate on a master device to facilitate communication from a device to a slave device. The method 1800 includes causing a master device to control/manage access to a control data bus shared with a plurality of slave devices (1802). The master device may receive a slave device to slave device communication/transfer request from a slave device requesting access to the control data bus (1804). The slave to slave communication/transfer request may include a maximum number of codewords that the requesting slave device wishes to transmit on the control data bus. In response to such a request, when the requested maximum number of codewords is greater than the current maximum codeword limit, the master device can send a response rejecting the request to the requesting slave device by transmitting the maximum number of acceptable codewords (1806). For example, the number of acceptable codewords may be equal to the current maximum codeword limit (maintained by the master device) and/or it may be a number less than the maximum number of requested codewords. Alternatively, the master device may send a response to the requesting device to the requesting device (1808). In addition, the master device can monitor the control data bus to detect the end of grant access to the requesting slave device (1810). Once the end of communication from the slave to the slave on the shared control bus is detected, the master can regain control of the control data bus.In one example, the response from the master device grants the request by including in the response a number of codeword limits equal to or greater than the maximum number of codewords specified in the request. 
A requesting slave device can therefore recognize a grant when the response includes a codeword limit that matches or exceeds the maximum number of requested codewords it specified.

According to another aspect, if the master device does not support slave-to-slave communication, it responds to the request with a codeword limit of zero (0). Thus, if the master device is required to reply to a slave-to-slave communication request (e.g., by a protocol standard), it can reject the request by including a codeword limit of zero (0). For this reason, a master device that supports slave-to-slave communication should reply with a non-zero codeword limit in its response.

According to one aspect, if the master device rejects the slave-to-slave communication request, the slave device can wait for some amount of time and then resend the same slave-to-slave communication request to the master device on the control data bus (e.g., with the same expected number of codewords to be transferred). Note that the original rejection may have been due to a busy control data bus, so retrying at a later time may cause the request to be granted. In addition, the slave device can retry the same request multiple times before giving up.

According to another aspect, if the master device rejects the slave-to-slave communication request, the slave device can resend the request with a lower maximum number of codewords to be transferred by the requesting slave device on the control data bus. In some implementations, the slave device can split a data transfer that is larger than the maximum number of codewords permitted by the master device into multiple slave-to-slave communication requests.

In an alternative approach, if the master device rejects the slave-to-slave communication request but the slave device is unable to split the data transfer into multiple portions, the slave device may request that the master device take over the data transfer between the requesting slave device and the other slave device.

Exemplary Slave Device and Method Operational Therein

FIG. 19 is a block diagram illustrating an exemplary slave device adapted to perform slave-to-slave communication. The slave device 1902 can include a first communication interface/circuit 1906, a second communication interface/circuit 1908, and/or a processing circuit 1904. The first communication interface/circuit 1906 can be used to couple to a single-line interrupt request (IRQ) bus to which a plurality of other devices can be coupled. The second communication interface/circuit 1908 can be used to couple to the control data bus, to which the plurality of other devices can also be coupled.

The processing circuit 1904 can include various sub-circuits and/or modules that perform one or more of the functions described herein. For example, an interrupt (IRQ) generator circuit/module 1910 can be adapted to generate an interrupt request on the interrupt bus. A slave-to-slave communication request circuit/module 1912 can be adapted to request that the master device, which controls use of the control data bus, permit direct slave-to-slave communication with another slave device. 
A slave-to-slave communication transmission circuit/module 1914 can be adapted to transmit the slave-to-slave communication on the control data bus.

FIG. 20 illustrates a method 2000 operational on a slave device to facilitate slave-to-slave communication. The method 2000 includes coupling the slave device to a control data bus that is shared with a plurality of slave devices and managed by a master device (2002). The slave device can send a slave-to-slave communication request to the master device on the control data bus (2004). In response to the request, the slave device can receive a response from the master device rejecting the request, the response including an acceptable maximum number of codewords (2006). If the request is initially rejected, the slave device can simply wait for a certain amount of time and resend the same initial request. Alternatively, the slave device may send a new slave-to-slave communication request to the master device, the new request including a maximum number of requested codewords less than or equal to the received acceptable maximum number of codewords (2008). The slave device can then receive a response from the master device granting the request (2010). Upon receiving the grant response from the master device, the slave device can transmit information to the second slave device on the control data bus, up to the requested number of codewords (2012).

According to one aspect, if the master device rejects the slave-to-slave communication request, the slave device can wait for some amount of time and then resend the same slave-to-slave communication request to the master device on the control data bus (e.g., with the same expected number of codewords to be transferred). Note that the original rejection may have been due to a busy control data bus, so retrying at a later time may cause the request to be granted. In addition, the slave device can retry the same request multiple times before giving up.

According to another aspect, if the master device rejects the slave-to-slave communication request, the slave device can resend the request with a lower maximum number of codewords to be transferred by the requesting slave device on the control data bus. In some implementations, the slave device can split a data transfer that is larger than the maximum number of codewords permitted by the master device into multiple slave-to-slave communication requests.

In an alternative approach, if the master device rejects the slave-to-slave communication request but the slave device is unable to split the data transfer into multiple portions, the slave device may request that the master device take over the data transfer between the requesting slave device and the other slave device.

In still another approach, if the master device rejects the slave-to-slave communication request and the requesting slave device is capable of operating in either master mode or slave mode, the requesting slave device can request to become the master of the shared control data bus and thereby transfer the desired number of codewords.

One or more of the components, steps, features, and/or functions illustrated in the figures may be rearranged and/or combined into a single component, step, feature, or function, or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may be added without departing from the novel features disclosed herein. 
The apparatus, devices, and/or components illustrated in the figures may be configured to perform one or more of the methods, features, or steps described herein. The novel algorithms described herein may also be efficiently implemented in software and/or embedded in hardware.

Further, it should be noted that the embodiments may be described as a process depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. Additionally, the order of the operations may be rearranged. A process terminates when its operations are completed. A process may correspond to a method, a function, a procedure, a subprogram, a subroutine, and the like. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.

Furthermore, a storage medium may represent one or more devices for storing data, including read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, and/or other machine-readable media for storing information. The term "machine-readable medium" includes, but is not limited to, portable or fixed storage devices, optical storage devices, wireless channels, and various other media capable of storing, containing, or carrying instructions and/or data.

Moreover, embodiments may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium or other storage, such as a storage medium. A processor may perform the necessary tasks. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, and the like may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, and the like.

The various illustrative logical blocks, modules, circuits, elements, and/or components described in connection with the examples disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. 
A processor may also be implemented as a combination of computing components, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The methods or algorithms described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executable by a processor, or in a combination of both, in the form of a processing unit, programming instructions, or other directions, and may be contained in a single device or distributed across multiple devices. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.

Those skilled in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.

The various features of the invention described herein can be implemented in different systems without departing from the invention. It should be noted that the foregoing embodiments are merely examples and are not to be construed as limiting the invention. The description of the embodiments is intended to be illustrative and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses, and many alternatives, modifications, and variations will be apparent to those skilled in the art. |
Apparatus and methods for overvoltage switches with active leakage current compensation are provided. In certain configurations, an IC includes an input node and a protection device or overvoltage switch electrically connected to the input node. The protection device includes a first well and a second well. The second well is positioned adjacent to the first well and has a conductivity type opposite that of the first well. Additionally, a first terminal of the protection device is electrically connected to the first well and to the input node of the IC. The protection device further includes a leakage current compensation circuit that is used to control a voltage level of the second well based on a voltage level of the first terminal to inhibit a leakage current of the first terminal of the protection device. |
1. An integrated circuit comprising:
an input node; and
a protection device including:
a first terminal electrically connected to the input node;
a first semiconductor well electrically connected to the first terminal;
a second semiconductor well adjacent to the first well, wherein the second well has a conductivity type opposite that of the first well; and
a leakage current compensation circuit configured to control a voltage level of the second well based on a voltage level of the first terminal to suppress a leakage current of the first terminal of the protection device.
2. The integrated circuit of claim 1, further comprising a precision amplifier including a first input electrically coupled to the input node of the integrated circuit and to the first terminal of the protection device.
3. The integrated circuit of claim 1, wherein the first semiconductor well comprises a first p-type well, and wherein the second semiconductor well comprises an n-type well.
4. The integrated circuit of claim 3, wherein a junction between the n-type well and the first p-type well comprises a base-emitter junction of a PNP bipolar transistor, and wherein the leakage current compensation circuit controls a voltage across the base-emitter junction to suppress the leakage current of the first terminal of the protection device.
5. The integrated circuit of claim 4, wherein said leakage current compensation circuit controls a voltage difference between said n-type well and said first p-type well to be less than 700 millivolts.
6. The integrated circuit of claim 3, wherein said protection device further comprises:
a first n-type active region in the n-type well, wherein the first n-type active region is electrically connected to an output of the leakage current compensation circuit; and
a first p-type active region in the first p-type well, wherein the first p-type active region is electrically coupled to the first terminal and to an input of the leakage current compensation circuit.
7. The integrated circuit of claim 6, wherein the leakage current compensation circuit comprises a buffer circuit electrically coupled between the input of the leakage current compensation circuit and the output of the leakage current compensation circuit.
8. The integrated circuit of claim 7, further comprising an input resistor electrically coupled between an input of the buffer circuit and the first terminal, wherein the input resistor has a resistance in the range of 10 kΩ to 100 MΩ.
9. The integrated circuit of claim 7, further comprising an output resistor electrically coupled between an output of the buffer circuit and the first n-type active region, wherein the output resistor has a resistance in the range of 10 kΩ to 100 MΩ.
10. The integrated circuit of claim 7, wherein the buffer circuit comprises at least one of a trimming circuit, a chopper circuit, or an auto-zeroing circuit to compensate for an input offset voltage of the buffer circuit.
11. The integrated circuit of claim 6, wherein said protection device further comprises:
a second p-type well, wherein at least a portion of the n-type well is positioned between the first p-type well and the second p-type well.
12. The integrated circuit of claim 11, wherein said protection device further comprises:
a second terminal; and
a second p-type active region in the second p-type well, wherein the second p-type active region is electrically connected to the second terminal.
13. The integrated circuit of claim 12, wherein said second terminal is electrically coupled to a supply node of the integrated circuit.
integrated circuit. 14. The integrated circuit of claim 12, wherein the protection device further comprises: a second n-type active region in the first p-type well, wherein the second n-type active region is electrically connected to the first terminal; and a third n-type active region in the second p-type well, wherein the third n-type active region is electrically connected to the second terminal. 15. The integrated circuit of claim 12, further comprising: an insulator layer under the first p-type well, the second p-type well, and the n-type well; and a support substrate under the insulator layer. 16. The integrated circuit of claim 12, wherein the first p-type well is implemented as a first island in the n-type well, and wherein the second p-type well is implemented as a second island in the n-type well. 17. The integrated circuit of claim 16, wherein the protection device further comprises a third p-type well surrounding a perimeter of the n-type well. 18. The integrated circuit of claim 16, wherein the protection device further comprises an n-type buried layer under the first p-type well, the second p-type well, and the n-type well. 19. A method of electrical overstress protection, the method comprising: protecting an input node of an integrated circuit from an overstress event using a protection device, the protection device comprising a first terminal electrically connected to the input node, a first well of semiconductor electrically connected to the first terminal, and a second well of semiconductor adjacent to the first well and having a conductivity type opposite to that of the first well; and suppressing a leakage current of the first terminal of the protection device using a leakage current compensation circuit by controlling a voltage level of the second well based on a voltage level of the first terminal. 20. The method of claim 19, further comprising: controlling the voltage level of the second well by buffering the voltage level of the first terminal using a buffer of the leakage current compensation circuit. 21. The method of claim 19, further comprising: receiving an input signal on the input node; amplifying the input signal using a precision amplifier; and suppressing an input bias current of the precision amplifier generated by the protection device using the leakage current compensation circuit. 22. An integrated circuit comprising: an input node; and a protection device including: a first terminal electrically connected to the input node; a first well of semiconductor electrically connected to the first terminal; a second well of semiconductor adjacent to the first well, wherein the second well has a conductivity type opposite to that of the first well; and means for suppressing a leakage current of the first terminal of the protection device by controlling a voltage level of the second well based on a voltage level of the first terminal. |
Apparatus and methods for overvoltage switches with active leakage current compensation. Technical Field. Embodiments of the present invention relate to electronic systems, and more particularly to overvoltage switch/protection devices for integrated circuits (ICs). Background. Certain electronic systems can be exposed to overstress events, or electrical signals of relatively short duration having rapidly changing voltage and high power. Overstress events can include, for example, electrostatic discharge (ESD) events and/or electromagnetic interference (EMI) events. Overstress events can damage integrated circuits (ICs) within an electronic system due to overvoltage conditions and/or high levels of power dissipation over relatively small areas of the IC. High power dissipation can increase IC temperature and can lead to numerous problems, such as gate oxide breakdown, wiring damage, metal damage, and surface charge buildup. Moreover, an overstress event can induce latch-up (in other words, the inadvertent creation of a low-impedance path), thereby disrupting the operation of the IC and potentially causing permanent damage. Thus, there is a need to protect an IC from overstress events without compromising its performance. Summary. In one aspect, an integrated circuit is provided. The integrated circuit includes an input node and a protection device. The protection device includes a first terminal electrically connected to the input node, a first well of semiconductor electrically connected to the first terminal, a second well of semiconductor adjacent to the first well, and a leakage current compensation circuit. The second well has a conductivity type opposite that of the first well. Additionally, the leakage current compensation circuit is configured to control a voltage level of the second well based on a voltage level of the first terminal so as to suppress a leakage current of the first terminal of the protection device. In another aspect, a method of electrical overstress protection is provided. The method includes protecting an input node of an integrated circuit from an overstress event using a protection device, the protection device including a first terminal electrically connected to the input node, a first well of semiconductor electrically connected to the first terminal, and a second well of semiconductor adjacent to the first well and having a conductivity type opposite that of the first well. The method further includes suppressing a leakage current of the first terminal of the protection device by controlling a voltage level of the second well based on a voltage level of the first terminal using a leakage current compensation circuit. In another aspect, an integrated circuit is provided. The integrated circuit includes an input node and a protection device. The protection device includes a first terminal electrically connected to the input node, a first well of semiconductor electrically connected to the first terminal, and a second well of semiconductor adjacent to the first well. The second well has a conductivity type opposite that of the first well. 
The protection device further includes means for suppressing a leakage current of the first terminal of the protection device by controlling a voltage level of the second well based on a voltage level of the first terminal. Brief Description of the Drawings. FIG. 1 is a schematic diagram of one embodiment of an integrated circuit. FIG. 2A is an annotated cross section of a protection device with active leakage current compensation, in accordance with one embodiment. FIG. 2B is an annotated cross section of a protection device with active leakage current compensation, in accordance with another embodiment. FIG. 3A is a top plan view of a protection device with active leakage current compensation, in accordance with another embodiment. FIG. 3B is an annotated cross section of the protection device of FIG. 3A taken along the line 3B-3B of FIG. 3A. FIG. 4 is a top plan view of a protection device with active leakage current compensation, in accordance with another embodiment. FIG. 5A is a circuit diagram of a buffer in accordance with one embodiment. FIG. 5B is a circuit diagram of a buffer in accordance with another embodiment. FIG. 5C is a circuit diagram of a buffer in accordance with another embodiment. FIG. 5D is a circuit diagram of a buffer in accordance with another embodiment. FIG. 5E is a circuit diagram of a buffer in accordance with another embodiment. FIG. 5F is a circuit diagram of a buffer in accordance with another embodiment. Detailed Description. The following detailed description presents various descriptions of specific embodiments of the invention. However, the invention can be embodied in a multitude of different forms as defined and covered by the claims. In this description, reference is made to the drawings, in which like reference numerals may indicate identical or functionally similar elements. Terms of orientation used herein refer to the devices as oriented in the figures and should be interpreted accordingly. It should also be appreciated that, because regions within a semiconductor device, such as a transistor, are defined by doping different portions of a semiconductor material with different impurities or different impurity concentrations, discrete physical boundaries between different regions may not actually exist in the completed device; instead, regions can transition from one to another. Some of the boundaries shown are of this type and are illustrated as abrupt structures merely to assist the reader. In the embodiments described below, p-type regions can include a p-type semiconductor material, such as boron, as a dopant. Additionally, n-type regions can include an n-type semiconductor material, such as phosphorus, as a dopant. Those skilled in the art will appreciate various concentrations of dopants in the regions described below. Overview of Protection Devices with Active Leakage Current Compensation. To help ensure that an electronic system is reliable, manufacturers can test the electronic system under defined stress conditions, which can be described by standards set by various organizations, such as the Joint Electron Device Engineering Council (JEDEC), the International Electrotechnical Commission (IEC), the Automotive Electronics Council (AEC), and the International Organization for Standardization (ISO). The standards can cover a wide range of overstress events, including electrostatic discharge (ESD) events and/or electromagnetic interference (EMI) events. To meet such standards, an integrated circuit (IC) can include protection devices at the pins or pads of the IC. When the pins or pads of the IC operate under normal signaling conditions, the protection device can operate in an OFF or high-impedance state. 
However, when an overstress event causes the voltage across a particular protection device to exceed a forward or reverse trigger voltage of the device, the protection device can activate and operate in an ON or low-impedance state, in which the protection device shunts at least a portion of the current and/or charge associated with the overstress event. Thus, the protection device can be used to prevent the voltage level of the pins or pads of the IC from reaching a failure voltage associated with damage to the IC. As used herein, a protection device can also be referred to as an overvoltage switch. For example, the protection device can operate in an OFF or high-impedance state when no overvoltage condition is present, and can switch to and operate in an ON or low-impedance state when an overvoltage condition is present. While including a protection device on a pin or pad of an IC can help protect the IC from damage due to overstress events, the protection device can affect the performance of the IC during normal operation. For example, even though the protection device has a limited input leakage current when in the OFF state, this leakage can lower the performance of the IC. For instance, an integrated circuit can include a precision amplifier electrically connected to a pin or pad of the IC. When a protection device is also electrically connected to that pin or pad and has a relatively high leakage current, the performance of the precision amplifier is degraded. For example, the leakage current of the protection device can undesirably increase the input bias current of the precision amplifier, especially at high temperatures. In other examples, a protection device having a relatively high leakage current can produce input bias current errors, systematic errors, and/or otherwise degrade the performance of the IC's precision circuitry. Provided herein are apparatus and methods for protection devices with active leakage current compensation. In certain configurations, an integrated circuit includes an input node and a protection device electrically coupled to the input node. The protection device includes a first well and a second well. The second well is positioned adjacent to the first well and has a conductivity type opposite that of the first well. Additionally, a first terminal of the protection device is electrically coupled to the first well and to the input node of the IC. The protection device further includes a leakage current compensation circuit that controls a voltage level of the second well based on a voltage level of the first terminal to suppress leakage current from flowing into or out of the first terminal of the protection device. A protection device with active leakage current compensation can advantageously provide robust protection against overstress events at the pins or pads of the IC while having minimal performance impact on the circuitry coupled to those pins or pads. The teachings herein can be used to reduce the leakage currents of protection devices used in a wide variety of applications, including, for example, applications with stringent input current specifications. For instance, even at high temperatures (e.g., 125 °C), precision amplifiers can be specified to operate with very low sub-nA (less than 10^-9 A) input bias currents, for example in the range of 50 pA (1 pA = 10^-12 A) to 800 pA, such as about 200 pA. 
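As a concrete summary of the switch-like behavior and the OFF-state leakage concern discussed above, the following minimal Python sketch (not from this disclosure) models the protection device as a two-state element whose OFF state still draws a small leakage current. The trigger voltages, on-resistance, and leakage value are purely illustrative assumptions.

def protection_device_current(v_across, v_trig_fwd=12.0, v_trig_rev=-12.0,
                              r_on=1.0, i_off_leak=5e-12):
    """Two-state behavioral model of an overvoltage switch: low-impedance
    conduction beyond its trigger voltages, otherwise a small OFF-state
    leakage. All numeric parameters are illustrative assumptions."""
    if v_across > v_trig_fwd:
        return (v_across - v_trig_fwd) / r_on            # ON: shunts overstress current
    if v_across < v_trig_rev:
        return (v_across - v_trig_rev) / r_on            # ON: shunts overstress current
    return i_off_leak if v_across >= 0 else -i_off_leak  # OFF: residual leakage

if __name__ == "__main__":
    for v in (1.0, 5.0, 15.0, -20.0):
        print(f"V = {v:+6.1f} V -> I = {protection_device_current(v):.3e} A")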
By implementing a protection device with active leakage current compensation, the protection device can have little or no effect on the operation of the precision amplifier. In contrast, protection devices with higher input leakage currents can degrade the performance of precision amplifiers by generating temperature-dependent input bias currents. In some configurations, the protection device includes a bidirectional clamp, such as a silicon controlled rectifier (SCR) device. Additionally, the bidirectional clamp further includes a third well of the same conductivity type as the first well. For example, the first and third wells can be p-wells, and the second well can be an n-well. The wells are configured such that at least a portion of the second well is positioned between the first and third wells. Additionally, the first p-well includes at least one P+ region electrically coupled to the first terminal of the protection device, and the second p-well includes at least one P+ region electrically coupled to the second terminal of the protection device. In such a configuration, the leakage current compensation circuit is operable to control the base-emitter voltage of a PNP bipolar transistor having an emitter, a base, and a collector associated with the first p-well, the n-well, and the second p-well, respectively. In particular, the leakage current compensation circuit can control the base-to-emitter voltage of the PNP bipolar transistor to be approximately equal to 0 V to suppress leakage current from flowing into or out of the first terminal of the protection device. In some configurations, the leakage current compensation circuit includes a buffer having an input electrically coupled to the first terminal and an output that controls the voltage level of the second well to be approximately equal to the voltage level of the first terminal. By bootstrapping the second well to a voltage level approximately equal to the voltage level of the first terminal, the input leakage current of the protection device can be eliminated or reduced. The teachings herein can be used to reduce or eliminate leakage currents of a protection device, including, for example, leakage current from a first terminal of the protection device to a base or well of the protection device. The protection devices herein can be fabricated in a variety of fabrication processes including, but not limited to, deep submicron (DSM) complementary metal oxide semiconductor (CMOS) processes, bipolar-CMOS-DMOS (BCD) processes, or silicon-on-insulator (SOI) processes. FIG. 1 is a schematic diagram of one embodiment of an integrated circuit (IC) 10. The integrated circuit 10 includes an input node 1 (IN), a power supply node 2 (V1), a protection device 3, and a precision amplifier 4. For clarity, only certain structures of the IC 10 are shown in FIG. 1. Thus, the IC 10 can include additional pins, pads, circuits, devices, and/or other structures. The protection device 3 includes a first terminal electrically connected to the input node 1 and a second terminal electrically connected to the power supply node 2. In some configurations, the protection device 3 includes a bidirectional clamp, such as a silicon controlled rectifier (SCR) device. In some configurations, input node 1 corresponds to a signal pin or pad of the integrated circuit 10, and power supply node 2 corresponds to a pin or pad of the integrated circuit 10 associated with VCC or ground. For example, the power supply node 2 can be electrically connected to a power-low supply voltage or ground. 
In some configurations, the voltage level of input node 1 is greater than or equal to the voltage level of power supply node 2 when the IC 10 operates under normal signaling conditions. The illustrated precision amplifier 4 includes a first input electrically connected to the input node 1. Thus, the precision amplifier 4 can be used to provide amplification of signals received on the input node 1. In one example, the precision amplifier 4 includes a second input that receives a reference voltage, and the precision amplifier 4 amplifies the voltage difference between the signal received at input node 1 and the reference voltage. In another example, the precision amplifier 4 provides amplification of a differential signal, and the signal received at input node 1 corresponds to an inverted or non-inverted component of the differential signal. The precision amplifier 4 can correspond to a wide variety of amplification circuits including, for example, operational amplifiers or instrumentation amplifiers. When the IC 10 operates under normal signaling levels or conditions, the protection device 3 operates in the OFF state, in which the protection device 3 should not interfere with the operation of the precision amplifier 4. However, when an overstress event causes the voltage difference between input node 1 and power supply node 2 to exceed the forward trigger voltage or the reverse trigger voltage of the protection device 3, the protection device 3 can activate and operate in the ON state to protect the precision amplifier 4 and/or other circuits of the IC 10 from damage. Ideally, the protection device 3 has low leakage current in the OFF state. When the leakage current of the protection device 3 is relatively large, the leakage current can degrade the performance of the precision amplifier 4 by generating an input bias current. Low input bias current is an important specification for precision amplifiers, such as high performance instrumentation and/or operational amplifiers. For example, achieving low input bias currents (such as sub-nA input bias currents) has become a target performance benchmark for commercial precision amplifier products. However, a protection device used for overvoltage protection at the input interface of the IC can affect the input bias current of the amplifier. For example, the protection device can introduce an additional conduction path that contributes to the input bias current of the amplifier. In particular, a reverse-biased blocking junction of a protection device can generate a leakage current that increases exponentially with temperature. For instance, the leakage current of a nominally reverse-biased blocking junction of the protection device can roughly double for every 10 °C rise in temperature and can be the primary source of the amplifier's input bias current at high temperatures, such as temperatures of 100 °C or higher. This input bias current versus temperature characteristic can be observed in a wide variety of instrumentation and operational amplifier products that include overstress protection circuits, such as ESD protection devices. Such a protection device limits the minimum achievable input bias current of the amplifier. Thus, even when the amplifier is otherwise designed to have a small sub-nA input bias current, the leakage current of the protection device can degrade the input bias current of the amplifier at high temperatures. 
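As a rough numerical illustration of the temperature behavior just described, the short Python sketch below scales an assumed room-temperature junction leakage using the "approximately doubles every 10 °C" rule of thumb and compares it against an example sub-nA input bias current budget. The 5 pA starting value and 200 pA budget are assumptions for illustration, not values taken from this disclosure.

def junction_leakage_pa(i_leak_25c_pa, temp_c, doubling_step_c=10.0):
    """Scale a 25 degC leakage estimate assuming it doubles every 10 degC."""
    return i_leak_25c_pa * 2.0 ** ((temp_c - 25.0) / doubling_step_c)

if __name__ == "__main__":
    budget_pa = 200.0                       # example sub-nA input bias current target
    for t in (25, 85, 125):
        leak = junction_leakage_pa(5.0, t)  # assumed 5 pA of junction leakage at 25 degC
        flag = "exceeds" if leak > budget_pa else "within"
        print(f"{t:3d} degC: ~{leak:7.1f} pA of junction leakage "
              f"({flag} the {budget_pa:.0f} pA budget)")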
The protection device 3 includes a leakage current compensation circuit 5, as described in detail herein. In addition, the protection device 3 can include a p-well and an adjacent n-well, and the leakage current compensation circuit 5 can control the voltage level of the n-well to track or follow changes in the voltage level of the p-well, thereby reducing the leakage current of the protection device. The leakage current compensation circuit 5 can suppress the flow of leakage current into or out of the first terminal of the protection device 3, which in turn can improve the performance of the precision amplifier 4. For example, the precision amplifier 4 can operate with a low input bias current even when operating at high temperatures. While the protection device 3 is shown in the context of protecting the input of a precision amplifier, the teachings herein are applicable to a wide variety of applications. For example, one or more protection devices can be used to protect pins or pads of a wide variety of ICs specified to operate with low input leakage currents, including, for example, data converters, data acquisition systems, and receiver interfaces. Thus, while the integrated circuit 10 of FIG. 1 illustrates one example of an IC that includes one or more protection devices with active leakage current compensation, the teachings herein are applicable to other configurations of integrated circuits. FIG. 2A is an annotated cross section of a protection device with active leakage current compensation, in accordance with one embodiment. The illustrated protection device 30 of FIG. 2A is fabricated on a p-type substrate (P-SUB) 31 and includes an n-well 34, a first p-well 33a, a second p-well 33b, a first n-type active (N+) region 42a, a second N+ region 42b, a third N+ region 42c, a first p-type active (P+) region 41a, a second P+ region 41b, a third P+ region 41c, a first terminal 21 (VH), a second terminal 22 (VL), and a leakage current compensation circuit 50. As shown in FIG. 2A, the n-well 34 is positioned in the P-SUB 31, and the first and second p-wells 33a, 33b are positioned in the n-well 34. The first and second p-wells 33a, 33b are spaced apart from each other such that a portion of the n-well 34 is between the first and second p-wells 33a, 33b. The first N+ region 42a is in the n-well 34. Although the first N+ region 42a is shown as being located between the first and second p-wells 33a, 33b, the first N+ region 42a can be located at other positions. The first P+ region 41a and the second N+ region 42b are positioned adjacent to each other in the first p-well 33a. In addition, the second P+ region 41b and the third N+ region 42c are positioned adjacent to each other in the second p-well 33b. Furthermore, the third P+ region 41c is positioned in the P-SUB 31 and can be used to control the voltage level of the P-SUB 31. The cross section shown in FIG. 2A has been annotated to show certain structures of the protection device 30, including the leakage current compensation circuit 50, the first terminal 21, the second terminal 22, and the electrical connections between the active regions, the terminals, and the leakage current compensation circuit 50. Although annotated in schematic form, one of ordinary skill in the art will appreciate that the illustrated electrical connections can be made using conductors, such as metallization and vias, fabricated over the P-SUB 31. 
For example, the leakage current compensation circuit 50 can be fabricated in a portion of the P-SUB 31 that is not visible in the cross section of FIG. 2A. The cross section is also annotated to show certain transistor and resistor elements associated with the illustrated semiconductor wells and active regions. For example, the protection device 30 has been annotated to include a PNP bipolar transistor 61, an NPN bipolar transistor 62, a first resistor 63, and a second resistor 64. The NPN bipolar transistor 62 includes an emitter associated with the third N+ region 42c, a base associated with the second p-well 33b, and a collector associated with the n-well 34. Additionally, the PNP bipolar transistor 61 includes an emitter associated with the first p-well 33a, a base associated with the n-well 34, and a collector associated with the second p-well 33b. The first resistor 63 is associated with the well resistance of the first p-well 33a between the PNP bipolar transistor 61 and the first P+ region 41a. Furthermore, the second resistor 64 is associated with the well resistance of the second p-well 33b between the base of the NPN bipolar transistor 62 and the second P+ region 41b. The NPN bipolar transistor 62 and the PNP bipolar transistor 61 are cross-coupled: the base of the NPN bipolar transistor 62 is electrically coupled to the collector of the PNP bipolar transistor 61, and the collector of the NPN bipolar transistor 62 is electrically coupled to the base of the PNP bipolar transistor 61. Arranged in this manner, the NPN bipolar transistor 62 and the PNP bipolar transistor 61 function as a silicon controlled rectifier (SCR) device. In the illustrated configuration, the first P+ region 41a and the second N+ region 42b are electrically connected to the first terminal 21, and the second P+ region 41b and the third N+ region 42c are electrically connected to the second terminal 22. Additionally, the third P+ region 41c is electrically coupled to a first voltage V1, such as a power-low supply voltage or ground. In certain configurations, the second terminal 22 is also electrically coupled to the first voltage V1. The leakage current compensation circuit 50 includes an input electrically connected to the first terminal 21 and an output electrically connected to the first N+ region 42a. The leakage current compensation circuit 50 controls the voltage level of the n-well 34 based on the voltage level of the first terminal 21, thereby reducing the voltage difference between the n-well 34 and the first p-well 33a to suppress leakage current from flowing into or out of the first terminal 21. In the illustrated configuration, the leakage current compensation circuit 50 includes a buffer 51, an input resistor 52, and an output resistor 53. The input resistor 52 is electrically connected between the first terminal 21 and the input of the buffer 51, and the output resistor 53 is electrically connected between the output of the buffer 51 and the first N+ region 42a. In some configurations, the voltage gain from the input to the output of the buffer 51 can be between 0.5 and 1.5, such as approximately unity. Accordingly, the buffer 51 can be used to control the voltage level of the n-well 34, via the first N+ region 42a, to be approximately equal to the voltage level of the first terminal 21. Since the first p-well 33a is electrically connected to the first terminal 21 via the first P+ region 41a, the buffer 51 also controls the voltage level of the n-well 34 to be approximately equal to the voltage level of the first p-well 33a. 
In this manner, controlling the voltage level of the n-well 34 can reduce the leakage current flowing into or out of the first terminal of the protection device by reducing the leakage current of the base-emitter junction of the PNP bipolar transistor 61. In one embodiment, the leakage current compensation circuit 50 is configured to control the voltage level of the n-well 34 such that the magnitude of the voltage difference between the first p-well 33a and the n-well 34 is less than 700 millivolts. Using the leakage current compensation circuit 50 to control the voltage difference between the n-well 34 and the first p-well 33a to a relatively small level can reduce the leakage current of the first terminal of the protection device 30. The input resistor 52 can protect the buffer 51 from damage during an overstress event, such as an ESD event, that causes a change in the voltage difference between the first and second terminals 21, 22. For example, the input resistor 52 can help prevent charge from flowing into or out of the buffer during an ESD event. In one embodiment, the input resistor 52 has a resistance selected to be in the range of 10 kΩ to 100 MΩ. While one example of a resistance value for the input resistor 52 has been provided, the input resistor 52 can have other resistance values, such as resistance values associated with a particular application and/or manufacturing process. The output resistor 53 provides an impedance between the output of the buffer 51 and the n-well 34 to prevent the buffer 51 from affecting the operation of the protection device 30 during an overstress event. For example, when the illustrated SCR device is activated in response to an overstress event, the output resistor 53 limits current flow from the output of the buffer into or out of the n-well 34 to prevent the buffer 51 from interfering with the operation of the SCR device. The output resistor 53 also provides secondary overstress protection for the output of the buffer 51. In one embodiment, the output resistor 53 has a resistance selected to be in the range of 10 kΩ to 100 MΩ. While an example of a resistance range has been provided, the output resistor 53 can have other resistance values, such as resistance values associated with a particular application and/or manufacturing process. Although the illustrated embodiment includes the input resistor 52 and the output resistor 53, the teachings herein are also applicable to configurations that omit the input resistor 52 and/or the output resistor 53. The first terminal 21 can be electrically connected to an input node of an integrated circuit, such as a signal pin or pad. Even when the protection device 30 operates at high temperature, the leakage current compensation circuit 50 can reduce or eliminate the leakage current of the first terminal of the protection device 30. In contrast, when the leakage current compensation circuit 50 is omitted, the leakage current of the protection device can increase markedly at high temperature. For example, when normal signaling conditions are present and the protection device operates at room temperature, the leakage current of the junction between the n-well 34 and the first p-well 33a can be relatively small, typically in the pA range. However, without the leakage current compensation circuit 50, at relatively high temperatures (for example, temperatures above 100 °C) the leakage current of the junction increases exponentially and can reach the nA level. 
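The following Python sketch gives one behavioral interpretation of the FIG. 2A compensation loop (terminal 21, input resistor 52, buffer 51, output resistor 53, n-well 34) under OFF-state conditions. It is not a circuit-level model: the resistor values, buffer gain, and offset voltage are assumptions chosen within the ranges mentioned above, and the currents through the resistors are taken as negligible in steady state.

R_IN = 1.0e6        # input resistor 52, assumed 1 MOhm (within 10 kOhm .. 100 MOhm)
R_OUT = 1.0e6       # output resistor 53, assumed 1 MOhm (within 10 kOhm .. 100 MOhm)
GAIN = 1.0          # buffer 51 voltage gain, assumed unity (text allows about 0.5 .. 1.5)
V_OS = 2.0e-3       # assumed buffer input offset voltage of a few millivolts

def nwell_voltage(v_terminal, i_buf_in=0.0, i_nwell=0.0):
    """Steady-state voltage driven onto the n-well (via N+ region 42a)."""
    v_buf_in = v_terminal - i_buf_in * R_IN    # drop across the input resistor
    v_buf_out = GAIN * v_buf_in + V_OS         # buffered voltage including offset
    return v_buf_out - i_nwell * R_OUT         # drop across the output resistor

if __name__ == "__main__":
    v_t1 = 2.0                                 # example first-terminal voltage
    v_nw = nwell_voltage(v_t1)
    print(f"terminal 21 = {v_t1:.3f} V, n-well 34 = {v_nw:.3f} V, "
          f"PNP base-emitter voltage ~ {v_t1 - v_nw:+.4f} V")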
This junction leakage current can be the primary source of the input bias current of a precision amplifier coupled to the first terminal 21. In one embodiment, the leakage current of the first terminal 21 of the protection device 30 is given by Equation 1 below, where β is the current gain of the PNP bipolar transistor 61, I_S is the saturation current of the PNP bipolar transistor 61, V_BE is the base-to-emitter voltage of the PNP bipolar transistor 61, and V_T is the thermal voltage. One of ordinary skill in the art will appreciate that the thermal voltage V_T can be equal to kT/q, where k is the Boltzmann constant, T is the temperature, and q is the electron charge.

I_LEAKAGE = (1 + 1/β) · I_S · (exp(V_BE / V_T) − 1)     Equation (1)

By bootstrapping the voltage level of the n-well 34 to be approximately equal to the voltage level of the first terminal 21, the voltage difference between the n-well 34 and the first p-well 33a can be relatively small, such that the base-emitter junction of the PNP bipolar transistor 61 has a base-to-emitter voltage V_BE near zero and conducts negligible current. Therefore, the leakage current compensation circuit 50 can provide leakage current compensation to the protection device 30. When the leakage current compensation circuit 50 controls the PNP bipolar transistor 61 in this manner, the PNP bipolar transistor 61 exhibits low-leakage performance similar to BVCES operation (collector-to-emitter breakdown with the base shorted to the emitter) rather than performance similar to BVCEO operation (collector-to-emitter breakdown with the base open). Thereby, the leakage current of the junction between the n-well 34 and the first p-well 33a can be reduced or eliminated, which in turn suppresses leakage current from flowing into or out of the first terminal 21. In addition, the leakage currents of other structures, such as the junctions of the NPN bipolar transistor 62, can be supplied from the output of the buffer 51. Although the overall static power consumption of the protection device 30 may not be reduced, the leakage current of the first terminal 21 can be reduced or eliminated. Therefore, a sensitive electronic circuit, such as a precision amplifier, can be electrically connected to the first terminal 21 and operate without performance degradation due to leakage current flowing into or out of the first terminal 21 of the protection device 30. In some configurations, the protection device 30 protects the input of the precision amplifier, and the leakage current compensation circuit 50 can be used to achieve a sub-nA input bias current for the precision amplifier, even at high temperatures. When the protection device 30 is in the OFF state, the bias voltage across the base-emitter junction of the PNP bipolar transistor 61 can be set by the input offset voltage of the buffer 51. For example, in some configurations, the voltage difference between the n-well 34 and the first p-well 33a can be approximately equal to the input offset voltage of the buffer. In some embodiments, the typical input offset voltage of the buffer 51 should be less than the thermal voltage V_T. For example, the input offset voltage (V_OS) can be a few millivolts, and thus the compensated leakage current of the protection device can be smaller than that of a protection device without leakage compensation by roughly the factor V_OS/V_T. To provide a further reduction in leakage current, the input offset voltage of the buffer 51 can be trimmed, chopped, and/or auto-zeroed. Reducing the input offset voltage of the buffer can reduce the leakage current of the protection device by reducing the voltage difference between the n-well 34 and the first p-well 33a, thereby reducing the leakage current associated with the base-emitter junction of the PNP bipolar transistor 61. 
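To make Equation (1) and the offset-voltage discussion above more tangible, the following Python snippet evaluates the expression for assumed values of β, I_S, temperature, and buffer offset (none of which are specified by this disclosure), and prints the V_OS/V_T ratio that roughly sets how much smaller the compensated terminal leakage is for small V_BE, since exp(V_BE/V_T) − 1 ≈ V_BE/V_T in that regime.

import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602176634e-19   # electron charge, C

def thermal_voltage(temp_c):
    """V_T = kT/q."""
    return K_B * (temp_c + 273.15) / Q_E

def terminal_leakage(v_be, beta=50.0, i_s=1e-12, temp_c=125.0):
    """Equation (1): I = (1 + 1/beta) * I_S * (exp(V_BE / V_T) - 1).
    beta, I_S, and temp_c are assumed illustrative values."""
    return (1.0 + 1.0 / beta) * i_s * (math.exp(v_be / thermal_voltage(temp_c)) - 1.0)

if __name__ == "__main__":
    v_t = thermal_voltage(125.0)   # roughly 34 mV at 125 degC
    v_os = 2.0e-3                  # assumed buffer input offset voltage
    print(f"V_T at 125 degC  : {v_t * 1e3:.1f} mV")
    print(f"I at V_BE = V_OS : {terminal_leakage(v_os):.3e} A")
    print(f"V_OS / V_T       : {v_os / v_t:.3f}")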
FIG. 2B is an annotated cross section of a protection device with active leakage current compensation, in accordance with another embodiment. The protection device 70 of FIG. 2B is similar to the protection device 30 of FIG. 2A, except that the protection device 70 has been fabricated using an SOI process. For example, the protection device 70 of FIG. 2B is fabricated in a semiconductor layer 75. As shown in FIG. 2B, the semiconductor layer 75 is positioned over an insulator layer 72, which in turn is positioned over a support substrate 71. As the skilled artisan will understand, the support substrate 71 can correspond to a doped or undoped substrate. Furthermore, the insulator layer 72 can be implemented in a variety of ways (e.g., using a buried oxide (BOX) layer). The protection device 70 includes an n-well 74, a first p-well 73a, a second p-well 73b, first to third N+ regions 42a to 42c, first and second P+ regions 41a, 41b, first and second terminals 21, 22, and the leakage current compensation circuit 50. At least a portion of the n-well 74 is located between the first and second p-wells 73a, 73b in the semiconductor layer 75. The first N+ region 42a is in the n-well 74. In addition, the first P+ region 41a and the second N+ region 42b are in the first p-well 73a and are electrically connected to the first terminal 21. Furthermore, the second P+ region 41b and the third N+ region 42c are in the second p-well 73b and are electrically connected to the second terminal 22. The leakage current compensation circuit 50 includes an input electrically coupled to the first terminal and an output electrically coupled to the first N+ region 42a. Additional details of the protection device 70 can be similar to those previously described. FIG. 3A is a top plan view of a protection device with active leakage current compensation, in accordance with another embodiment. FIG. 3B is an annotated cross section of the protection device 80 of FIG. 3A taken along the line 3B-3B of FIG. 3A. The protection device 80 illustrated in FIGS. 3A-3B is fabricated on a p-type substrate (P-SUB) 81 and includes a high voltage n-well (HVNW) 84, a first high voltage p-well (HVPW) 83a, a second HVPW 83b, a third HVPW 83c, a first P+ region 91a, a second P+ region 91b, an N+ region 92, a first array of N+ regions 93, a second array of N+ regions 94, a shallow n-well (SHNW) 87, an n-type buried layer (NBL) 89, and isolation regions 88. For clarity, only the HVNW 84, the HVPWs 83a-83c, the P+ regions 91a-91c, and the N+ regions 92 through 94 are shown in the top plan view of FIG. 3A. As shown in FIG. 3A, the first HVPW 83a is implemented as a first island in the HVNW 84. In addition, the second HVPW 83b is implemented as a second island in the HVNW 84 and is separated from the first HVPW 83a. 
A central portion of the HVNW 84 is located between the first HVPW 83a and the second HVPW 83b and serves as a current path when the protection device 80 is activated. The third HVPW 83c surrounds the periphery of the HVNW 84. The first P+ region 91a is positioned in the first HVPW 83a and is implemented with a comb shape in this embodiment. Furthermore, the first array of N+ regions 93 is positioned in the first HVPW 83a adjacent to the first P+ region 91a, such that portions of the first P+ region 91a extend between adjacent pairs of N+ regions in the first array. The second P+ region 91b is positioned in the second HVPW 83b and is likewise implemented with a comb shape in this embodiment. Furthermore, the second array of N+ regions 94 is positioned in the second HVPW 83b, such that portions of the second P+ region 91b extend between adjacent pairs of N+ regions in the second array. The first and second P+ regions 91a, 91b are oriented such that the extended portions of the first P+ region 91a face the extended portions of the second P+ region 91b. Configuring the protection device 80 in this manner can be used to increase the forward holding and trigger voltages of the protection device 80. Although one example of active regions in the first and second HVPWs 83a, 83b has been shown, other configurations are possible. For example, in another embodiment, the first HVPW 83a includes a first P+ region and a first N+ region extending alongside one another in a first or vertical direction, and the second HVPW 83b includes a second P+ region and a second N+ region extending alongside one another in the vertical direction. The third HVPW 83c is implemented as a ring that surrounds and abuts the HVNW 84. In addition, the third P+ region 91c is positioned in the third HVPW 83c, and the third HVPW 83c operates as a guard ring of the protection device. When the protection device 80 is integrated on-chip, the guard ring can inhibit or eliminate unintended parasitic paths formed between the protection device 80 and surrounding semiconductor components. In the illustrated configuration, the guard ring is electrically connected to a first voltage V1, which can be, for example, ground or a power-low supply voltage. Although FIGS. 3A-3B illustrate the third HVPW 83c as abutting the HVNW 84, in other configurations the third HVPW 83c is spaced apart from the HVNW 84, which increases latch-up immunity at the expense of increased area. In the illustrated embodiment, the SHNW 87 is positioned in a central portion of the HVNW 84 between the first and second HVPWs 83a, 83b. Furthermore, the NBL layer 89 is positioned below the HVNW 84, the first HVPW 83a, and the second HVPW 83b. The NBL layer 89 electrically isolates the first HVPW 83a and the second HVPW 83b from the P-SUB 81, thereby allowing the first and second HVPWs 83a, 83b to operate at potentials different from that of the P-SUB 81. As used herein, and as understood by those skilled in the art, the term "n-type buried layer" refers to any suitable n-type isolation layer or structure, including, for example, those used in buried n+ layer technologies or deep n-well technologies. As shown in FIG. 3B, the N+ region 92 is positioned in the HVNW 84 and is electrically coupled to the output of the leakage current compensation circuit 50 by metallization. The leakage current compensation circuit 50 controls the voltage level of the HVNW 84 and the NBL layer 89 to track or follow changes in the voltage level of the first terminal 21. 
Configuring the protection device 80 in this manner can provide active compensation that reduces or eliminates leakage current flowing into or out of the first terminal 21, thereby preventing the protection device 80 from interfering with the operation of other circuits that are also electrically connected to the first terminal 21. Although FIG. 3B illustrates an embodiment in which the protection device 80 is fabricated directly in the P-SUB 81, other configurations are possible. For example, in another embodiment, the protection device 80 is fabricated using an SOI process, and the NBL layer 89 is omitted in favor of separating the HVNW 84 and the HVPWs 83a-83c from the support substrate using an insulator layer. In yet another embodiment, the protection device 80 is fabricated in a p-type epitaxial (P-EPI) layer. For example, the P-EPI layer can be disposed over a doped or undoped support substrate, and the protection device 80 can be fabricated in the P-EPI layer. In a particular configuration, the NBL layer 89 is implanted in the support substrate, and the P-EPI layer is grown over the support substrate and the NBL layer 89 using an epitaxial growth process. In addition, the HVNW 84, the HVPWs 83a-83c, and the SHNW 87 can be implanted in the P-EPI layer. Further, the isolation regions 88 can be formed at the surface of the P-EPI layer, and the N+ regions and P+ regions can be implanted in the corresponding well regions. Although not shown in FIGS. 3A-3B for clarity, the P-SUB 81 can also include other devices or structures formed therein. In the illustrated configuration, the first HVPW 83a and the second HVPW 83b extend or are elongated in a first or vertical direction. Further, when the protection device 80 is activated, its current flow is in a second or horizontal direction. The N+ region 92 includes a first ring structure that surrounds a perimeter of the first HVPW 83a and a second ring structure that surrounds a perimeter of the second HVPW 83b. Configuring the N+ region 92 in this manner can provide robust control of the voltage level of the HVNW 84, which can help the leakage current compensation circuit 50 control the voltage level of the HVNW 84 relative to the voltage level of the first terminal 21. The illustrated protection device 80 includes isolation regions 88. Forming the isolation regions 88 can involve etching trenches in the P-SUB 81, filling the trenches with a dielectric, such as silicon dioxide (SiO2), and removing the excess dielectric using any suitable method, such as chemical mechanical planarization. The cross section shown in FIG. 3B has been annotated to show certain structures of the protection device 80, including the leakage current compensation circuit 50, the first terminal 21, the second terminal 22, and the electrical connections and wiring between the active regions, the terminals, and the leakage current compensation circuit 50. While annotated in schematic form, one of ordinary skill in the art will appreciate that the illustrated electrical connections can be made using metallization and vias, and that the leakage current compensation circuit 50 can be fabricated in the P-SUB 81. For example, the leakage current compensation circuit 50 can be fabricated in a portion of the P-SUB 81 that is not visible in the cross section of FIG. 3B. The cross section is also annotated to show certain transistor and resistor elements associated with the illustrated semiconductor wells and active regions. 
For example, the protection device 80 has been annotated to include a bidirectional PNP bipolar transistor 100, a first NPN bipolar transistor 101, a second NPN bipolar transistor 102, a first PNP bipolar transistor 103, a second PNP bipolar transistor 104, a first resistor 105, and a second resistor 106. The bidirectional PNP bipolar transistor 100 includes an emitter/collector associated with the first HVPW 83a, a base associated with the HVNW 84, and a collector/emitter associated with the second HVPW 83b. The first NPN bipolar transistor 101 includes an emitter associated with the first array of N+ regions 93, a base associated with the first HVPW 83a, and a collector associated with the HVNW 84. The second NPN bipolar transistor 102 includes an emitter associated with the second array of N+ regions 94, a base associated with the second HVPW 83b, and a collector associated with the HVNW 84. The first PNP bipolar transistor 103 includes an emitter associated with the first HVPW 83a, a base associated with the HVNW 84, and a collector associated with the third HVPW 83c. The second PNP bipolar transistor 104 includes an emitter associated with the second HVPW 83b, a base associated with the HVNW 84, and a collector associated with the third HVPW 83c. The first resistor 105 is associated with the well resistance of the first HVPW 83a, and the second resistor 106 is associated with the well resistance of the second HVPW 83b. The bidirectional PNP bipolar transistor 100 operates in both directions, and whether its emitter/collector and collector/emitter serve as emitter or collector can depend on the voltage conditions of the first and second terminals 21, 22. For example, when an overstress event causes the voltage level of the first terminal 21 to be greater than the voltage level of the second terminal 22, the emitter/collector of the bidirectional PNP bipolar transistor 100 serves as an emitter and the collector/emitter of the bidirectional PNP bipolar transistor 100 serves as a collector. Conversely, when an overstress event causes the voltage level of the first terminal 21 to be less than the voltage level of the second terminal 22, the emitter/collector of the bidirectional PNP bipolar transistor 100 serves as a collector and the collector/emitter of the bidirectional PNP bipolar transistor 100 serves as an emitter. When a positive overstress event causes the voltage level of the first terminal 21 to be greater than the voltage level of the second terminal 22, the bidirectional PNP bipolar transistor 100 and the second NPN bipolar transistor 102 can operate as a first SCR device that provides forward overstress protection. Furthermore, when a negative overstress event causes the voltage level of the first terminal 21 to be lower than the voltage level of the second terminal 22, the bidirectional PNP bipolar transistor 100 and the first NPN bipolar transistor 101 can operate as a second SCR device that provides reverse overstress protection. In this manner, the protection device 80 provides bidirectional protection. However, during normal operating conditions or signal levels, the protection device 80 should remain off rather than conduct. As shown in FIG. 3B, the leakage current compensation circuit 50 controls the voltage difference between the HVNW 84 and the first HVPW 83a, and thus also the voltage difference between the emitter/collector and the base of the bidirectional PNP bipolar transistor 100. 
Therefore, in the illustrated embodiment, the leakage current compensation circuit 50 keeps the junction between the emitter/collector and the base of the bidirectional PNP bipolar transistor 100 turned off so as to suppress the leakage current of the first terminal of the protection device. The protection device 80 of FIGS. 3A-3B corresponds to another embodiment of the protection device 3 shown in FIG. 1. For example, the first terminal 21 can be electrically connected to the input node 1, and the second terminal 22 can be electrically connected to the power supply node 2. However, the protection device 80 can be used in other configurations of integrated circuits. In FIGS. 3A-3B, the protection device 80 is symmetrical about the center of the HVNW 84. However, those of ordinary skill in the art will appreciate that the teachings herein are also applicable to asymmetric devices. For example, an asymmetric structure can be provided by arranging the wells, active regions, and/or other structures of the device in an asymmetric configuration. Additional details of the protection device 80 can be similar to those previously described. FIG. 4 is a top plan view of a protection device with active leakage current compensation, in accordance with another embodiment. The protection device 110 of FIG. 4 is similar to the protection device 80 of FIGS. 3A-3B, except that the protection device 110 of FIG. 4 further includes a fourth HVPW 83d, a fifth HVPW 83e, a fourth P+ region 91d, a fifth P+ region 91e, a third array of N+ regions 95, and a fourth array of N+ regions 96. The protection device 110 of FIG. 4 has a cross section taken along the line 111-111 that is similar to the cross section of the protection device 80 shown in FIG. 3B. Although not shown in FIG. 4, the first P+ region 91a, the first array of N+ regions 93, the fourth P+ region 91d, and the third array of N+ regions 95 can be electrically connected to the first terminal of the protection device 110 (e.g., the first terminal 21 of FIG. 3B). Additionally, the second P+ region 91b, the second array of N+ regions 94, the fifth P+ region 91e, and the fourth array of N+ regions 96 can be electrically coupled to the second terminal of the protection device 110 (e.g., the second terminal 22 of FIG. 3B). Further, the protection device 110 includes a leakage current compensation circuit (e.g., the leakage current compensation circuit 50 of FIG. 3B) that controls the voltage level of the N+ region 92 based on the voltage level of the first terminal. Therefore, the voltage level of the HVNW 84 tracks or follows changes in the voltage levels of the first and fourth HVPWs 83a, 83d. When the first terminal is electrically connected to an input node of an interface of the IC, such as an input signal pin, coupling it to the center of the protection device 110 can improve isolation. Furthermore, the second terminal of the protection device can be electrically connected to a power supply node (such as a ground pin) via metallization, so that uniform and rapid activation can be provided by radial current conduction from the center to the edge of the device. The illustrated structure can also facilitate active leakage current compensation because the N+ region 92 is distributed throughout the protection device 110. Although the illustrated configuration includes two SCR device sections, the teachings herein are applicable to configurations in which the protection device includes more or fewer SCR device sections. 
For example, additional SCR device sections can be added and connected using metallization to provide higher current handling capability. Moreover, the teachings herein are also applicable to configurations using a single SCR device section (e.g., the protection device 80 of FIG. 3A). Additional details of the protection device 110 can be similar to those previously described. FIG. 5A is a circuit diagram of a buffer 200, in accordance with one embodiment. The buffer 200 includes a first n-type metal oxide semiconductor (NMOS) transistor 201, a second NMOS transistor 202, a first current source 203, and a second current source 204. The buffer 200 also includes an input terminal IN and an output terminal OUT. As shown in FIG. 5A, the gate of the first NMOS transistor 201 is electrically connected to the input terminal IN, and the drain of the first NMOS transistor 201 is electrically connected to a second voltage V2, which can be, for example, a power-high supply voltage. The first current source 203 includes a first terminal electrically coupled to the sources of the first and second NMOS transistors 201, 202, and a second terminal electrically coupled to a first voltage V1, which can be, for example, ground or a power-low supply voltage. The second current source 204 includes a first terminal electrically coupled to the second voltage V2 and a second terminal electrically coupled to the output terminal OUT and to the drain and gate of the second NMOS transistor 202. The buffer 200 operates to control the voltage level of the output terminal OUT based on the voltage level of the input terminal IN. For example, the voltage levels of the sources of the first and second NMOS transistors 201, 202 can track or follow the voltage level of the input terminal IN. In steady state, the gate-to-source voltages (V_GS) of the first and second NMOS transistors 201, 202 can be approximately equal to each other, and the voltage level of the output terminal OUT can be approximately equal to the voltage level of the input terminal IN. The buffer 200 of FIG. 5A illustrates one example embodiment of the buffer 51 shown in FIGS. 2A, 2B, and 3B. However, the buffers 51 of FIGS. 2A, 2B, and 3B can be implemented in a wide variety of ways. 
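The square-law Python sketch below is one simplified reading of how the FIG. 5A buffer can achieve a near-unity transfer: the first NMOS transistor 201 level-shifts IN down by its V_GS, and the diode-connected second NMOS transistor 202 shifts the shared source node back up by its own V_GS, so OUT tracks IN when the two devices and their bias currents match. The long-channel model, the bias-current split, and all device parameters are assumptions for illustration, not values from this disclosure.

import math

def vgs(i_d, k_n=200e-6, w_over_l=10.0, v_th=0.5):
    """Gate-source voltage of a saturated NMOS carrying i_d (ideal square law)."""
    return v_th + math.sqrt(2.0 * i_d / (k_n * w_over_l))

def buffer_out(v_in, i_tail=20e-6, i_src2=10e-6):
    """OUT = (IN - V_GS of 201) + V_GS of 202.
    Assumed biasing: the tail source (203) sinks i_tail, the top source (204)
    supplies i_src2 through diode-connected 202, so 201 carries i_tail - i_src2."""
    v_source_node = v_in - vgs(i_tail - i_src2)   # shared source of 201/202
    return v_source_node + vgs(i_src2)            # diode-connected 202 shifts back up

if __name__ == "__main__":
    for v in (0.8, 1.5, 2.5):
        print(f"IN = {v:.2f} V -> OUT = {buffer_out(v):.3f} V")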
FIG. 5B is a circuit diagram of a buffer 210 in accordance with another embodiment. The buffer 210 of FIG. 5B is similar to the buffer 200 of FIG. 5A, except that the buffer 210 further includes a third NMOS transistor 205 and a fourth NMOS transistor 206. As shown in FIG. 5B, the third NMOS transistor 205 is arranged in cascode with the first NMOS transistor 201. In particular, the third NMOS transistor 205 includes a source electrically coupled to the drain of the first NMOS transistor 201, a gate electrically coupled to a bias voltage V_BIAS, and a drain electrically coupled to the second voltage V2. Further, the fourth NMOS transistor 206 is arranged in cascode with the second NMOS transistor 202. In particular, the fourth NMOS transistor 206 includes a source electrically coupled to the drain of the second NMOS transistor 202, a gate electrically coupled to the bias voltage V_BIAS, and a drain electrically coupled to the second terminal of the second current source 204. Including the third and fourth NMOS transistors 205, 206 can enhance the performance of the buffer circuit 210 of FIG. 5B relative to the buffer circuit 200 of FIG. 5A by improving the gate-to-source voltage (V_GS) matching of the first and second NMOS transistors 201, 202 during operation. For example, including the third and fourth NMOS transistors 205, 206 can limit the effects of channel length modulation or other transistor non-idealities that affect the accuracy of the buffer. The buffer 210 of FIG. 5B illustrates another example embodiment of the buffer 51 shown in FIGS. 2A, 2B, and 3B. However, the buffers 51 of FIGS. 2A, 2B, and 3B can be implemented in a wide variety of ways. FIG. 5C is a circuit diagram of a buffer 220 in accordance with another embodiment. The buffer 220 includes an amplifier 221 having an inverting input, a non-inverting input, and an output. The buffer 220 also includes an input terminal IN and an output terminal OUT. As shown in FIG. 5C, the input terminal IN is electrically coupled to the non-inverting input of the amplifier 221, and the output terminal OUT is electrically coupled to the output of the amplifier 221. In addition, the output of the amplifier is electrically coupled to the inverting input of the amplifier, and thus the amplifier 221 operates with negative feedback. Although not shown in FIG. 5C, the amplifier 221 can include feedback circuitry (such as resistors and/or capacitors) in the feedback path from the amplifier's output to the inverting input to provide a desired amount of feedback and/or to maintain stability. The buffer 220 of FIG. 5C illustrates another example embodiment of the buffer 51 shown in FIGS. 2A, 2B, and 3B. However, the buffers 51 of FIGS. 2A, 2B, and 3B can be implemented in a wide variety of ways. FIG. 5D is a circuit diagram of a buffer 230 in accordance with another embodiment. The buffer 230 includes a first p-type metal oxide semiconductor (PMOS) transistor 231, a second PMOS transistor 232, a first current source 233, and a second current source 234. The buffer 230 also includes an input terminal IN and an output terminal OUT. The buffer 230 of FIG. 5D is similar to the buffer 200 of FIG. 5A, except that the buffer 230 illustrates a structure implemented using PMOS transistors rather than NMOS transistors. One of ordinary skill in the art will appreciate that the buffers herein can be implemented using a wide variety of transistor types including, for example, NMOS transistors, PMOS transistors, NPN bipolar transistors, PNP bipolar transistors, or combinations thereof. As shown in FIG. 5D, the gate of the first PMOS transistor 231 is electrically connected to the input terminal IN, and the drain of the first PMOS transistor 231 is electrically connected to the first voltage V1. The first current source 233 includes a first terminal electrically connected to the sources of the first and second PMOS transistors 231, 232, and a second terminal electrically coupled to the second voltage V2. The second current source 234 includes a first terminal electrically coupled to the first voltage V1 and a second terminal electrically coupled to the output terminal OUT and to the drain and gate of the second PMOS transistor 232. The buffer 230 of FIG. 5D illustrates another example embodiment of the buffer 51 shown in FIGS. 2A, 2B, and 3B. However, the buffers 51 of FIGS. 2A, 2B, and 3B can be implemented in a wide variety of ways. FIG. 5E is a circuit diagram of a buffer 240 in accordance with another embodiment. The buffer 240 of FIG. 5E is similar to the buffer 230 of FIG. 5D, except that the buffer 240 further includes a third PMOS transistor 235 and a fourth PMOS transistor 236. As shown in FIG. 5E, the third PMOS transistor 235 is arranged in cascode with the first PMOS transistor 231. 
In particular, the third PMOS transistor 235 includes a source electrically coupled to the drain of the first PMOS transistor 231, a gate electrically coupled to the bias voltage VBIAS, and a drain electrically coupled to the first voltage V1. Additionally, the fourth PMOS transistor 236 is arranged in cascode with the second PMOS transistor 232. In particular, the fourth PMOS transistor 236 includes a source electrically coupled to the drain of the second PMOS transistor 232, a gate electrically coupled to the bias voltage VBIAS, and a drain electrically coupled to the second terminal of the second current source 234. The buffer 240 of FIG. 5E illustrates another example implementation of the buffer 51 of FIGS. 2A, 2B, and 3B. However, the buffer 51 of FIGS. 2A, 2B, and 3B can be implemented in a wide variety of ways. FIG. 5F is a circuit diagram of a buffer 250 in accordance with another embodiment. The buffer 250 of FIG. 5F includes a trimming circuit 251, a chopper circuit 252, and an auto-zeroing circuit 253. The buffer 250 also includes an input terminal IN and an output terminal OUT. Including at least one of the trimming circuit 251, the chopper circuit 252, or the auto-zeroing circuit 253 can reduce the input offset voltage of the buffer 250. When the buffer 250 is used in a leakage current compensation circuit to reduce the voltage difference between an n-well (e.g., the n-well 34 of FIG. 2A) and a p-well (e.g., the first p-well 33a of FIG. 2A), the compensated voltage difference can be approximately equal to the input offset voltage of the buffer. Thus, including circuitry that reduces the input offset voltage of the buffer can improve compensation performance by reducing the residual voltage difference between the n-well and the p-well. Although FIG. 5F shows the buffer 250 as including the trimming circuit 251, the chopper circuit 252, and the auto-zeroing circuit 253, one or more of these circuits can be omitted. For example, the teachings herein are also applicable to buffers that include only the trimming circuit 251, only the chopper circuit 252, or only the auto-zeroing circuit 253. The buffer 250 of FIG. 5F illustrates another example implementation of the buffer 51 of FIGS. 2A, 2B, and 3B. However, the buffer 51 of FIGS. 2A, 2B, and 3B can be implemented in a wide variety of ways. Although certain embodiments are shown in the context of a p-type semiconductor substrate, the principles and advantages described herein are also applicable to n-type configurations in which the doping polarities are reversed. For example, an n-type substrate can be provided instead of a p-type substrate, and wells and active regions of opposite doping types can be provided in the n-type substrate. Moreover, the implementations described herein can be applied to undoped substrates, such as those used in certain silicon-on-insulator (SOI) technologies. Applications: Devices employing the above-described schemes can be implemented in a variety of high-performance electronic devices and interface applications, such as interfaces associated with precision amplification. Examples of electronic devices can include, but are not limited to, consumer electronic products, parts of consumer electronic products, electronic test equipment, industrial equipment with high robustness requirements, vehicular equipment, and the like. Examples of vehicular equipment can include, but are not limited to, automobiles, engine control units, vehicle engine management controllers, transmission controllers, seatbelt controllers, anti-lock brake system controllers, and the like.
In addition, the electronic devices can include unfinished products, including those used in industrial and automotive applications. The above description and claims may refer to elements or features being "connected" or "coupled" together. As used herein, "connected" means that one element/feature is directly or indirectly connected to another element/feature, and is not necessarily a mechanical connection. Likewise, "coupled" means that one element/feature is directly or indirectly coupled to another element/feature, and is not necessarily a mechanical connection. Thus, although the various schematics shown in the figures depict example arrangements of elements and components, additional intervening elements, devices, features, or components may be present in an actual embodiment (assuming that the functionality of the depicted circuits is not adversely affected). Although the present invention has been described in terms of certain embodiments, other embodiments that are apparent to those of ordinary skill in the art are also within the scope of the invention. Moreover, the various embodiments described above can be combined to provide further embodiments. In addition, certain features shown in the context of one embodiment can be incorporated into other embodiments as well. Therefore, the scope of the invention is to be limited only by the appended claims. |
One feature pertains to a near field communication (NFC) target device comprising a memory circuit adapted to store sensitive data, an NFC interface adapted to transmit and receive information using NFC protocols, and a processing circuit. The processing circuit receives a plurality of provider identification (PID) numbers from a plurality of providers, where each PID number is associated with a different provider. The processing circuit also stores the PID numbers at the memory circuit, and assigns a privilege mask to each PID number received and stored. The NFC target device may also include a physical unclonable function (PUF) circuit. The processing circuit may additionally provide one or more PID numbers as input challenges to the PUF circuit, and receive one or more PUF output responses from the PUF circuit, where the PUF output responses are different from one another and are associated with different providers. |
CLAIMS1. A device, comprising:a memory circuit adapted to store sensitive data;an NFC interface adapted to transmit and receive information using NFC protocols; anda processing circuit communicatively coupled to the memory circuit and the NFC interface, the processing circuit adapted toreceive a plurality of provider identification (PID) numbers from a plurality of providers, each PID number associated with a different provider, store the PID numbers at the memory circuit, andassign a privilege mask of a plurality of privilege masks to each PID number received and stored, each privilege mask of the plurality of privilege masks designating at least a portion of the sensitive data as associated to a provider of the plurality of providers.2. The device of claim 1, wherein the processing circuit is further adapted to:transmit to each provider the portion of the sensitive data associated to the provider based on the privilege mask assigned to the PID number associated with the provider.3. The device of claim 1, wherein the processing circuit is further adapted to:receive a privilege mask request from each provider indicating a desired level of sensitive data access before assigning the privilege mask to each PID number received and stored.4. The device of claim 1, further comprising:a physical unclonable function (PUF) circuit communicatively coupled to the processing circuit, the PUF circuit adapted to generate output responses to input challenges, and wherein the processing circuit is further adapted to:provide one or more PID numbers of the plurality of PID numbers as input challenges to the PUF circuit, and receive one or more PUF output responses from the PUF circuit in response to providing the one or more PID numbers as input challenges, the PUF output responses being different from one another and associated with different providers.5. The device of claim 4, wherein the processing circuit is further adapted to:authenticate one or more providers using, at least in part, the one or more PUF output responses associated with the different providers.6. The device of claim 1, wherein the processing circuit is further adapted to enroll at least a first provider according to a security configuration causing the processing circuit to:receive a first PID number from the first provider, the first PID number identifying the first provider;store the first PID number at the memory circuit;assign a first privilege mask to the first PID number, the first privilege mask designating at least a portion of the sensitive data as first provider sensitive data; and transmit a user identification (UID) number associated with the device to the first provider, the UID number identifying the device.7. The device of claim 6, wherein the processing circuit is further adapted to verify the first provider according to the security configuration causing the processing circuit to:receive the first PID number from the first provider;apply the first privilege mask assigned to the first PID number to limit the sensitive data available to the first provider to the first provider sensitive data; andtransmit the UID number and the first provider sensitive data to the first provider.8. 
The device of claim 1, further comprising a physical unclonable function (PUF) circuit communicatively coupled to the processing circuit, the PUF circuit adapted to generate output responses to input challenges, and wherein the processing circuit is further adapted to enroll at least a first provider according to a security configuration causing the processing circuit to: receive a first message from the first provider, the first message including a first PID number and a hash of the first PID number and a first random number r1, the first PID number identifying the first provider; store the first PID number at the memory circuit; assign a first privilege mask to the first PID number, the first privilege mask designating at least a portion of the sensitive data as first provider sensitive data; provide the hash of the first PID number and the first random number r1 to the PUF circuit as an input challenge to obtain a first noisy response ke; execute a helper data generation function to generate helper data he associated with the first noisy response ke; and transmit a second message to the first provider, the second message including a user identification (UID) number, the first noisy response ke, and the helper data he, the UID number identifying the device. 9. The device of claim 8, wherein the processing circuit is further adapted to verify the first provider according to the security configuration causing the processing circuit to: receive a third message from the first provider, the third message including the first PID number, the first random number r1, and the hash of the first PID number and the first random number r1; apply the first privilege mask assigned to the first PID number to limit the sensitive data available to the first provider to the first provider sensitive data; provide the hash of the first PID number and the first random number r1 to the PUF circuit as an input challenge to obtain a target-generated second noisy response kv_t; execute the helper data generation function to generate helper data hv associated with the target-generated second noisy response kv_t; compute a value u1 that includes a hash of the UID number, the helper data hv, the target-generated second noisy response kv_t, and a second random number r2; and transmit a fourth message to the first provider, the fourth message including the UID number, the value u1, the second random number r2, the helper data hv, and the first provider sensitive data. 10. 
The device of claim 8, wherein the processing circuit is further adapted to verify the first provider according to a security configuration causing the processing circuit to: transmit the UID number to the first provider;receive a third message from the first provider, the third message including the first PID number, the first random number r1;and the hash of the first PID number and the first random numberprovide the hash of the first PID number and the first random number ri to the PUF circuit as an input challenge to obtain a target-generated second noisy response execute the helper data generation function to generate helper data hvassociated with the target-generated second noisy response kv t;transmit a fourth message to the first provider, the fourth message including a second random number r2and the helper data hv, the second random number r2different than the first random numberreceive a fifth message from the first provider, the fifth message including a third random number r3and a value u1;the third random number different than the first random number ri and the second random number r2, the value ui based on a hash of the UID number, a provider-generated second noisy response kv_p, the second random number r2, and the third random number r3;compute a value u2based on a hash of the UID number, the target-generated second noisy response kv t, the second random number r2, and the third random number authenticate the first provider based on the value ui received and the value u2computed;apply the first privilege mask assigned to the first PID number to limit the sensitive data available to the first provider to the first provider sensitive data; andtransmit a sixth message to the first provider, the sixth message including the first provider sensitive data and a hash of the UID number, the target-generated second noisy response kv t, and the third random number r3.1 1. The device of claim 10, wherein the processing circuit is further adapted to: encrypt the first provider sensitive data in the sixth message with the target- generated second noisy response kv tas a cryptographic key prior to transmitting the sixth message to the first provider.12. A method operational at a device, the method comprising:receiving a plurality of provider identification (PID) numbers from a plurality of providers, each PID number associated with a different provider; storing the PID numbers at a memory circuit; andassigning a privilege mask of a plurality of privilege masks to each PID number received and stored, each privilege mask of the plurality of privilege masks designating at least a portion of a sensitive data as associated to a provider of the plurality of providers.13. The method of claim 12, further comprising:receiving a privilege mask request from each provider indicating a desired level of sensitive data access before assigning the privilege mask to each PID number received and stored.14. The method of claim 12, further comprising:providing one or more PID numbers of the plurality of PID numbers as input challenges to a PUF circuit, andreceiving one or more PUF output responses from the PUF circuit in response to providing the one or more PID numbers as input challenges, the PUF output responses being different from one another and associated with different providers.15. The method of claim 14, further comprising:authenticating one or more providers using, at least in part, the one or more PUF output responses associated with the different providers. 
16. A server comprising: a communication interface adapted to directly and/or indirectly transmit to and receive information from a near field communication (NFC) target device; a memory circuit; and a processing circuit communicatively coupled to the communication interface and the memory circuit, the processing circuit adapted to transmit a provider identification (PID) number to the NFC target device, the PID number associated with a provider that is associated with the server, the PID number being different than PID numbers of other providers, transmit a privilege mask request to the NFC target device, the privilege mask request indicating a desired privilege mask to be associated with the PID number, the desired privilege mask designating at least a portion of sensitive data stored at the NFC target device as provider sensitive data that is associated with the provider and accessible by the server, receive a user identification (UID) number associated with the NFC target device from the NFC target device, the UID number identifying the NFC target device, and store the UID number in the memory circuit. 17. The server of claim 16, wherein transmitting the PID number to the NFC target device, transmitting the privilege mask request to the NFC target device, and receiving and storing the UID number enroll the provider with the NFC target device according to a security configuration, and wherein the processing circuit is further adapted to verify the NFC target device according to the security configuration causing the processing circuit to: transmit the PID number to the NFC target device; receive a first message from the NFC target device, the first message including the UID number and the provider sensitive data; verify that the UID number received in the first message matches the UID number stored in the memory circuit; and accept the provider sensitive data received. 18. 
The server of claim 16, wherein the processing circuit is further adapted to enroll with the NFC target device according to a security configuration causing the processing circuit to: transmit a first message to the NFC target device, the first message including the PID number and a hash of the PID number and a first random number r1; receive a second message from the NFC target device, the second message including the UID number, a first noisy response ke, and helper data he associated with the first noisy response ke; execute a reproduction function based on the first noisy response ke and the helper data he to reproduce a physical unclonable function (PUF) response k; and store the response k in the memory circuit, and wherein the processing circuit is further adapted to verify the NFC target device according to the security configuration causing the processing circuit to transmit a third message to the NFC target device, the third message including the PID number, the first random number r1, and the hash of the PID number and the first random number r1, receive a fourth message from the NFC target device, the fourth message including the UID number, a value u1, a second random number r2 different than the first random number r1, helper data hv associated with a target-generated second noisy response kv_t, and the provider sensitive data, obtain the PUF response k based on the UID number of the NFC target device, execute the reproduction function based on the PUF response k and the helper data hv to reproduce a provider-generated second noisy response kv_p, compute a value u2 based on a hash of the UID number, the helper data hv, the provider-generated second noisy response kv_p, and the second random number r2, authenticate the NFC target device based on the value u1 received and the value u2 computed, and accept the provider sensitive data received. 19. 
The server of claim 16, wherein the processing circuit is further adapted to enroll with the NFC target device according to a security configuration causing the processing circuit to: transmit a first message to the NFC target device, the first message including the PID number and a hash of the PID number and a first random number r1; receive a second message from the NFC target device, the second message including the UID number, a first noisy response ke, and helper data he associated with the first noisy response ke; execute a reproduction function based on the first noisy response ke and the helper data he to reproduce a physical unclonable function (PUF) response k; and store the response k in the memory circuit, and wherein the processing circuit is further adapted to verify the NFC target device according to the security configuration causing the processing circuit to receive the UID number from the NFC target device, verify the UID number received from the NFC target device matches the UID number stored in the memory circuit, transmit a third message to the NFC target device, the third message including the PID number, the first random number r1, and the hash of the PID number and the first random number r1, receive a fourth message from the NFC target device, the fourth message including a second random number r2 different than the first random number r1 and helper data hv associated with a target-generated second noisy response kv_t, obtain the PUF response k based on the UID number of the NFC target device, execute the reproduction function based on the PUF response k and the helper data hv to reproduce a provider-generated second noisy response kv_p, compute a value u1 based on a hash of the UID number, the provider-generated second noisy response kv_p, the second random number r2, and a third random number r3 different than the first random number r1 and the second random number r2, transmit a fifth message to the NFC target device, the fifth message including the value u1 and the third random number r3, receive a sixth message from the NFC target device, the sixth message including the provider sensitive data and a value u2 based on a hash of the UID number, the target-generated second noisy response kv_t, and the third random number r3, compute a value u3 based on a hash of the UID number, the provider-generated second noisy response kv_p, and the third random number r3, authenticate the NFC target device based on the value u2 received and the value u3 computed, and accept the provider sensitive data received. 20. The server of claim 19, wherein the provider sensitive data is encrypted by the target-generated second noisy response kv_t, and the processing circuit is further adapted to: decrypt the provider sensitive data received using the provider-generated second noisy response kv_p. |
SECURITY PROTOCOLS FOR UNIFIED NEAR FIELD COMMUNICATION INFRASTRUCTURESCROSS-REFERENCE TO RELATED APPLICATION[0001] This application claims priority to and the benefit of Non-Provisional Application No. 14/613,169 filed in the U.S. Patent and Trademark Office on February 3, 2015, the entire content of which is incorporated herein by reference.BACKGROUNDField[0002] Various features generally relate to security protocols and devices, and more particularly to security protocols and devices for unified near field communication (NFC) infrastructures.Background[0003] NFC is a set of short-range wireless technologies, typically requiring a distance of 10 cm or less. NFC operates at 13.56 MHz on ISO/IEC 18000-3 air interface and currently at rates ranging from 106 kbit/s to 424 kbit/s. NFC involves an initiator and a target. NFC targets may contain memory circuits that can store data that may be read and/or written to by NFC initiators. The initiator actively generates a radio frequency (RF) field that can power the NFC target, which is frequently a passive device having no power source of its own. Such passive NFC targets may thus take very simple form factors such as tags, stickers, or cards that do not require batteries.[0004] One common use for NFC applications is to issue NFC target cards that contain information pertinent to a specific application, service, or purpose. A user may thus carry many different NFC cards each storing information associated with a different application, service, or purpose. Doing so may indeed prove cumbersome though, especially for those users carrying many cards.[0005] Consequently, unified NFC card architectures have been proposed where a single NFC card stores information pertinent to many different applications, services, and purposes. For example, a unified NFC card may store a user's birthdate, social security number, phone number, age, address, account information associated with different services or merchants, credit card numbers, merchant service rewards program card information, etc. The ability to store a variety of information on a single NFC card may be of great practical benefit to a user who ordinarily may have had to store such information on separate, individual NFC cards.[0006] However, a security problem arises for unified NFC cards where the NFC initiator associated with a specific application, service, or purpose interrogates the NFC card to obtain information pertinent to it but rather, or in addition to such pertinent information, the initiator intentionally or unintentionally obtains data stored on the NFC card that is not associated to the initiator. For example, an NFC initiator associated with a supermarket may interrogate a user's NFC card with the intention of obtaining their supermarket rewards card number. In addition to this data, the NFC initiator may be able to read data unrelated to the supermarket account number including, for example, phone numbers, addresses, social security numbers, driver's license numbers, etc. and any other sensitive data that is stored on the NFC card.[0007] Another security problem deals with NFC card "cloning." A nefarious third party may attempt to clone an original NFC card by copying all the data off it onto another imposter NFC card not associated with the original NFC card's user. 
The nefarious third party may then attempt to pass the cloned, imposter NFC card off as the original thereby potentially gaining access to things and services they shouldn't.[0008] There is a need for security protocols and devices for unified NFC target architectures that provide security for the various types of data that may be stored on NFC targets. Specifically, there is a need for security protocols and devices that help prevent entities, such as services and merchants, from accessing sensitive data stored on NFC targets that is not associated with them or should otherwise be inaccessible. Additionally, there is a need for security protocols and devices that help thwart card cloning. Moreover, there is also a need for security protocols and devices that provide mutual authentication of both the NFC targets and the services providers/merchants, thereby allowing service providers and merchants to verify the identity of an NFC target before providing sensitive data to the NFC target.SUMMARY[0009] One feature provides a near field communication (NFC) target device comprising a memory circuit adapted to store sensitive data, an NFC interface adapted to transmit and receive information using NFC protocols, and a processing circuit communicatively coupled to the memory circuit and the NFC interface. The processing circuit is adapted to receive a plurality of provider identification (PID) numbers from a plurality of providers, where each PID number is associated with a different provider, store the PID numbers at the memory circuit, and assign a privilege mask of a plurality of privilege masks to each PID number received and stored, where each privilege mask of the plurality of privilege masks designates at least a portion of the sensitive data as associated to a provider of the plurality of providers. According to one aspect, the processing circuit is further adapted to transmit to each provider the portion of the sensitive data associated to the provider based on the privilege mask assigned to the PID number associated with the provider. According to another aspect, the processing circuit is further adapted to receive a privilege mask request from each provider indicating a desired level of sensitive data access before assigning the privilege mask to each PID number received and stored.[0010] According to one aspect, the NFC target device further comprises a physical unclonable function (PUF) circuit communicatively coupled to the processing circuit, the PUF circuit adapted to generate output responses to input challenges, and wherein the processing circuit is further adapted to provide one or more PID numbers of the plurality of PID numbers as input challenges to the PUF circuit, and receive one or more PUF output responses from the PUF circuit in response to providing the one or more PID numbers as input challenges, the PUF output responses being different from one another and associated with different providers. According to one aspect, the processing circuit is further adapted to authenticate one or more providers using, at least in part, the one or more PUF output responses associated with the different providers. 
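To make the privilege-mask idea summarized above more concrete, the following is a minimal Python sketch of one way a target device could represent masks as per-provider sets of permitted sensitive-data fields. The field names, helper names, and the grant policy are hypothetical illustrations, not the claimed implementation.

```python
from typing import Dict, Set

# Hypothetical sensitive data stored at the target device.
sensitive_data: Dict[str, str] = {
    "name": "A. User",
    "age": "34",
    "address": "123 Example St.",
    "rewards_account": "12345",
}

privilege_masks: Dict[str, Set[str]] = {}  # PID -> permitted field names

def assign_privilege_mask(pid: str, requested: Set[str], grantable: Set[str]) -> None:
    """Store a privilege mask for a provider's PID. The device, not the
    provider, decides which of the requested fields are actually granted."""
    privilege_masks[pid] = requested & grantable

def apply_privilege_mask(pid: str) -> Dict[str, str]:
    """Return only the portion of the sensitive data designated for this PID."""
    allowed = privilege_masks.get(pid, set())
    return {k: v for k, v in sensitive_data.items() if k in allowed}

# Example: a provider requests more than the device is willing to grant.
assign_privilege_mask("PID-B",
                      requested={"name", "rewards_account", "age", "address"},
                      grantable={"name", "rewards_account"})
print(apply_privilege_mask("PID-B"))  # only 'name' and 'rewards_account' are returned
```

A request/grant step of this kind mirrors the optional privilege mask request mentioned above, with the target retaining the final say over which portion of the sensitive data each PID can reach.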
According to another aspect, the processing circuit is further adapted to enroll at least a first provider according to a security configuration causing the processing circuit to receive a first PID number from the first provider, the first PID number identifying the first provider, store the first PID number at the memory circuit, assign a first privilege mask to the first PID number, the first privilege mask designating at least a portion of the sensitive data as first provider sensitive data, and transmit a user identification (UID) number associated with the NFC target device to the first provider, the UID number identifying the NFC target device. [0011] According to one aspect, the processing circuit is further adapted to verify the first provider according to the security configuration causing the processing circuit to receive the first PID number from the first provider, apply the first privilege mask assigned to the first PID number to limit the sensitive data available to the first provider to the first provider sensitive data, and transmit the UID number and the first provider sensitive data to the first provider. According to another aspect, the NFC target device further comprises a physical unclonable function (PUF) circuit communicatively coupled to the processing circuit, the PUF circuit adapted to generate output responses to input challenges, and wherein the processing circuit is further adapted to enroll at least a first provider according to a security configuration causing the processing circuit to receive a first message from the first provider, the first message including a first PID number and a hash of the first PID number and a first random number r1, the first PID number identifying the first provider, store the first PID number at the memory circuit, assign a first privilege mask to the first PID number, the first privilege mask designating at least a portion of the sensitive data as first provider sensitive data, provide the hash of the first PID number and the first random number r1 to the PUF circuit as an input challenge to obtain a first noisy response ke, execute a helper data generation function to generate helper data he associated with the first noisy response ke, and transmit a second message to the first provider, the second message including a user identification (UID) number, the first noisy response ke, and the helper data he, the UID number identifying the NFC target device. [0012] According to one aspect, the processing circuit is further adapted to verify the first provider according to the security configuration causing the processing circuit to receive a third message from the first provider, the third message including the first PID number, the first random number r1, and the hash of the first PID number and the first random number r1, apply the first privilege mask assigned to the first PID number to limit the sensitive data available to the first provider to the first provider sensitive data, provide the hash of the first PID number and the first random number r1 to the PUF circuit as an input challenge to obtain a target-generated second noisy response kv_t, execute the helper data generation function to generate helper data hv associated with the target-generated second noisy response kv_t, compute a value u1 that includes a hash of the UID number, the helper data hv, the target-generated second noisy response kv_t, and a second random number r2, and transmit a fourth message to the first provider, the fourth message including the UID number, the
value u1, the second random number r2, the helper data hv, and the first provider sensitive data. According to another aspect, the processing circuit is further adapted to verify the first provider according to a security configuration causing the processing circuit to transmit the UID number to the first provider, receive a third message from the first provider, the third message including the first PID number, the first random number r1, and the hash of the first PID number and the first random number r1, provide the hash of the first PID number and the first random number r1 to the PUF circuit as an input challenge to obtain a target-generated second noisy response kv_t, execute the helper data generation function to generate helper data hv associated with the target-generated second noisy response kv_t, transmit a fourth message to the first provider, the fourth message including a second random number r2 and the helper data hv, the second random number r2 different than the first random number r1, receive a fifth message from the first provider, the fifth message including a third random number r3 and a value u1, the third random number different than the first random number r1 and the second random number r2, the value u1 based on a hash of the UID number, a provider-generated second noisy response kv_p, the second random number r2, and the third random number r3, compute a value u2 based on a hash of the UID number, the target-generated second noisy response kv_t, the second random number r2, and the third random number r3, authenticate the first provider based on the value u1 received and the value u2 computed, apply the first privilege mask assigned to the first PID number to limit the sensitive data available to the first provider to the first provider sensitive data, and transmit a sixth message to the first provider, the sixth message including the first provider sensitive data and a hash of the UID number, the target-generated second noisy response kv_t, and the third random number r3. According to yet another aspect, the processing circuit is further adapted to encrypt the first provider sensitive data in the sixth message with the target-generated second noisy response kv_t as a cryptographic key prior to transmitting the sixth message to the first provider. [0013] Another feature provides a method operational at an NFC target device, the method comprising receiving a plurality of provider identification (PID) numbers from a plurality of providers, each PID number associated with a different provider, storing the PID numbers at a memory circuit, and assigning a privilege mask of a plurality of privilege masks to each PID number received and stored, each privilege mask of the plurality of privilege masks designating at least a portion of a sensitive data as associated to a provider of the plurality of providers. According to one aspect, the method further comprises receiving a privilege mask request from each provider indicating a desired level of sensitive data access before assigning the privilege mask to each PID number received and stored. According to another aspect, the method further comprises providing one or more PID numbers of the plurality of PID numbers as input challenges to a PUF circuit, and receiving one or more PUF output responses from the PUF circuit in response to providing the one or more PID numbers as input challenges, the PUF output responses being different from one another and associated with different providers.
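As a concrete illustration of the aspect just described (feeding provider-specific PID numbers to a PUF as challenges so that each provider becomes associated with its own device-unique response), here is a minimal Python sketch. The `puf` callable and the toy stand-in below are hypothetical and not part of the described device; a real PUF is a hardware circuit whose responses are noisy and cannot be reproduced in software.

```python
import hashlib
from typing import Callable, Dict

def derive_provider_responses(pids: Dict[str, bytes],
                              puf: Callable[[bytes], bytes]) -> Dict[str, bytes]:
    """Use each provider's PID as a PUF input challenge and collect the
    per-provider output responses. Distinct PIDs produce distinct responses,
    so every enrolled provider is tied to a different device-unique value."""
    return {provider: puf(pid) for provider, pid in pids.items()}

# Toy stand-in for the PUF circuit, for illustration only: a keyed hash of the
# challenge. (An actual PUF derives its behavior from manufacturing variation.)
DEVICE_UNIQUE_SEED = b"example-device-seed"  # hypothetical value
def toy_puf(challenge: bytes) -> bytes:
    return hashlib.sha256(DEVICE_UNIQUE_SEED + challenge).digest()

pids = {"provider_a": b"PID-A", "provider_b": b"PID-B"}  # hypothetical PIDs
responses = derive_provider_responses(pids, toy_puf)
assert responses["provider_a"] != responses["provider_b"]
```

Because the challenge-to-response mapping is device-unique, a response captured by a provider at enrollment can later be used, at least in part, to recognize the card or authenticate the provider, which is the role the PUF output plays in the configurations described below.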
According to yet another aspect, the method further comprises authenticating one or more providers using, at least in part, the one or more PUF output responses associated with the different providers. [0014] Another feature provides a server associated with a provider and part of a near field communication (NFC) infrastructure, the server comprising a communication interface adapted to directly and/or indirectly transmit to and receive information from an NFC target device, a memory circuit, and a processing circuit communicatively coupled to the communication interface and the memory circuit, the processing circuit adapted to transmit a provider identification (PID) number to the NFC target device, the PID number associated with the provider and being different than PID numbers of other providers, transmit a privilege mask request to the NFC target device, the privilege mask request indicating a desired privilege mask to be associated with the PID number, the desired privilege mask designating at least a portion of sensitive data stored at the NFC target device as provider sensitive data that is associated with the provider and accessible by the server, receive a user identification (UID) number associated with the NFC target device from the NFC target device, the UID number identifying the NFC target device, and store the UID number in the memory circuit. According to one aspect, transmitting the PID number to the NFC target device, transmitting the privilege mask request to the NFC target device, and receiving and storing the UID number enroll the provider with the NFC target device according to a security configuration, and wherein the processing circuit is further adapted to verify the NFC target device according to the security configuration causing the processing circuit to transmit the PID number to the NFC target device, receive a first message from the NFC target device, the first message including the UID number and the provider sensitive data, verify that the UID number received in the first message matches the UID number stored in the memory circuit, and accept the provider sensitive data received. [0015] According to one aspect, the processing circuit is further adapted to enroll with the NFC target device according to a security configuration causing the processing circuit to transmit a first message to the NFC target device, the first message including the PID number and a hash of the PID number and a first random number r1, receive a second message from the NFC target device, the second message including the UID number, a first noisy response ke, and helper data he associated with the first noisy response ke, execute a reproduction function based on the first noisy response ke and the helper data he to reproduce a physical unclonable function (PUF) response k, and store the response k in the memory circuit, and wherein the processing circuit is further adapted to verify the NFC target device according to the security configuration causing the processing circuit to transmit a third message to the NFC target device, the third message including the PID number, the first random number r1, and the hash of the PID number and the first random number r1, receive a fourth message from the NFC target device, the fourth message including the UID number, a value u1, a second random number r2 different than the first random number r1, helper data hv associated with a target-generated second noisy response kv_t, and the provider sensitive data, obtain the PUF response k based on the UID number of
the NFC target device, execute the reproduction function based on the PUF response k and the helper data hv to reproduce a provider-generated second noisy response kv_p, compute a value u2 based on a hash of the UID number, the helper data hv, the provider-generated second noisy response kv_p, and the second random number r2, authenticate the NFC target device based on the value u1 received and the value u2 computed, and accept the provider sensitive data received. According to another aspect, the processing circuit is further adapted to enroll with the NFC target device according to a security configuration causing the processing circuit to transmit a first message to the NFC target device, the first message including the PID number and a hash of the PID number and a first random number r1, receive a second message from the NFC target device, the second message including the UID number, a first noisy response ke, and helper data he associated with the first noisy response ke, execute a reproduction function based on the first noisy response ke and the helper data he to reproduce a physical unclonable function (PUF) response k, and store the response k in the memory circuit, and wherein the processing circuit is further adapted to verify the NFC target device according to the security configuration causing the processing circuit to receive the UID number from the NFC target device, verify the UID number received from the NFC target device matches the UID number stored in the memory circuit, transmit a third message to the NFC target device, the third message including the PID number, the first random number r1, and the hash of the PID number and the first random number r1, receive a fourth message from the NFC target device, the fourth message including a second random number r2 different than the first random number r1 and helper data hv associated with a target-generated second noisy response kv_t, obtain the PUF response k based on the UID number of the NFC target device, execute the reproduction function based on the PUF response k and the helper data hv to reproduce a provider-generated second noisy response kv_p, compute a value u1 based on a hash of the UID number, the provider-generated second noisy response kv_p, the second random number r2, and a third random number r3 different than the first random number r1 and the second random number r2, transmit a fifth message to the NFC target device, the fifth message including the value u1 and the third random number r3, receive a sixth message from the NFC target device, the sixth message including the provider sensitive data and a value u2 based on a hash of the UID number, the target-generated second noisy response kv_t, and the third random number r3, compute a value u3 based on a hash of the UID number, the provider-generated second noisy response kv_p, and the third random number r3, authenticate the NFC target device based on the value u2 received and the value u3 computed, and accept the provider sensitive data received. According to another aspect, the provider sensitive data is encrypted by the target-generated second noisy response kv_t, and the processing circuit is further adapted to decrypt the provider sensitive data received using the provider-generated second noisy response kv_p. BRIEF DESCRIPTION OF THE DRAWINGS [0016] FIG. 1 illustrates a high-level schematic block diagram of a unified NFC infrastructure. [0017] FIG. 2 illustrates a high level block diagram of an NFC infrastructure depicting an enrollment process. [0018] FIG.
3 illustrates a high level block diagram of an NFC infrastructure depicting a verification process. [0019] FIG. 4 illustrates a high level schematic block diagram of an NFC infrastructure.[0020] FIG. 5 illustrates a schematic block diagram of the sensitive data stored at an NFC card.[0021] FIG. 6 illustrates a process flow diagram of a first exemplary security configuration for an NFC infrastructure.[0022] FIGS. 7 A and 7B illustrate a process flow diagram of a second exemplary security configuration for an NFC infrastructure.[0023] FIGS. 8 A and 8B illustrate a process flow diagram of a third exemplary security configuration for an NFC infrastructure.[0024] FIG. 9 illustrates a process flow diagram of a fourth exemplary security configuration for an NFC infrastructure.[0025] FIG. 10 illustrates a process flow diagram of an enrollment process taking place at the NFC target device based on the first security configuration.[0026] FIG. 11 illustrates a process flow diagram of a verification process taking place at the NFC target device based on the first security configuration.[0027] FIG. 12 illustrates a process flow diagram of an enrollment process taking place at the NFC target device based on the second security configuration.[0028] FIG. 13 illustrates a process flow diagram of a verification process taking place at the NFC target device based on the second security configuration.[0029] FIGS. 14A and 14B illustrate a process flow diagram of a verification process taking place at the NFC target device based on the third security configuration.[0030] FIG. 15 illustrates a process flow diagram of an enrollment process taking place at the server based on the first security configuration.[0031] FIG. 16 illustrates a process flow diagram of a verification process taking place at the server based on the first security configuration.[0032] FIG. 17 illustrates a process flow diagram of an enrollment process taking place at the server based on the second security configuration.[0033] FIGS. 18A and 18B illustrate a process flow diagram of a verification process taking place at the server based on the second security configuration.[0034] FIGS. 19A and 19B illustrate a process flow diagram of a verification process taking place at the server based on the third security configuration. [0035] FIG. 20 illustrates a schematic block diagram of one example of the NFC card's PUF.[0036] FIG. 21 illustrates a schematic block diagram of an NFC target device.[0037] FIG. 22 illustrates a schematic block diagram of a server.DETAILED DESCRIPTION[0038] In the following description, specific details are given to provide a thorough understanding of the various aspects of the disclosure. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For example, circuits and structures may be shown in block diagrams in order to avoid obscuring the aspects in unnecessary detail. In other instances, well- known circuits, structures and techniques may not be shown in detail in order not to obscure the aspects of the disclosure. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." 
Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure.Overview[0039] Some systems, methods, and apparatuses described herein pertain to an NFC target device that includes a memory circuit adapted to store sensitive data, an NFC interface adapted to transmit and receive information using NFC protocols, and a processing circuit. The processing circuit receives a plurality of provider identification (PID) numbers from a plurality of providers, where each PID number is associated with a different provider. The processing circuit also stores the PID numbers at the memory circuit, and assigns a privilege mask to each PID number received and stored. The NFC target device may also include a physical unclonable function (PUF) circuit. The processing circuit may additionally provide one or more PID numbers as input challenges to the PUF circuit, and receive one or more PUF output responses from the PUF circuit, where the PUF output responses are different from one another and are associated with different providers.Exemplary Apparatuses, Methods, and Systems for Unified NFC Infrastructures[0040] FIG. 1 illustrates a high-level schematic block diagram of a unified NFC infrastructure 100 (e.g., NFC system) according to one aspect of the disclosure. The infrastructure 100 may include an NFC target 102, a wireless communication device (WCD) 104, and a plurality of provider servers 106a, 106b ... 106n.[0041] Generally, the NFC target 102 may be a device with limited resources and computing power relative to the WCD 104. According to one example, the NFC target 102 may be a passive NFC target device such as, but not limited to, a tag, a card, a sticker, a key fob, etc. According to another example, the NFC target 102 may be an active device having its own power source. For purposes of clarity and simplicity the NFC target 102 may herein be referred to as an "NFC card." However, the NFC card 102 may be any other type of NFC target device not limited to a tag, a sticker, a key fob, etc.[0042] The WCD 104 may be any wireless communication device that is capable of NFC so as to allow the WCD 104 to interrogate and communicate with the NFC card 102 using NFC protocols. Generally, the WCD 104 has more resources and computing power than the NFC card 102. The WCD 104 may also communicate with one or more servers 106a, 106b ... 106n and other devices through wired communication protocols, long range wireless communication protocols, or other short range communication protocols. Some non-limiting examples of WCDs 104 may be, but are not limited to, laptops, smartphones, tablets, personal digital assistants, desktop terminals, wearable devices including smartwatches and head-mounted displays having NFC functionality.[0043] The servers 106a, 106b ... 106n may each be associated with a different provider 118 such as, but not limited to, a merchant or a provider of services. 
As some non-limiting, non-exclusive examples, a first server 106a may be associated with an establishment that serves alcoholic beverages to adults over a certain age, a second server 106b may be associated with a merchant, such as a supermarket, that provides membership cards to its customers to help facilitate a rewards program, and another server 106n may be a provider of services, such as a security company, that assigns security key cards and related account information to its customers.[0044] The NFC card 102 includes a semiconductor device (e.g., chip) that includes a physically unclonable function (PUF) 108. A PUF is a chip-unique challenge-response mechanism exploiting manufacturing process variations inside one or more integrated circuits (ICs) of the device having the PUF. When a physical stimulus (e.g., challenge) is applied to the PUF, the PUF generates a response in an unpredictable but repeatable way due to the complex interaction of the stimulus with the physical microstructure of the device employing the PUF. This exact microstructure depends on physical factors introduced during manufacture of the device employing the PUF, which are unpredictable. The PUF is difficult to clone in that each device employing the PUF has a unique and unpredictable way of mapping challenges to responses, even if one device is manufactured with the same process as another seemingly identical device. Thus, it is very impractical to construct a PUF with the same challenge-response behavior as another device's PUF because exact control over the manufacturing process is extremely difficult if not infeasible.[0045] The NFC device 102 further includes sensitive data 110. The sensitive data 110 may include data associated with the user 116 of the card 102 and/or data associated with one or more of the providers 118. For example, the sensitive data 110 may include the user's 116 birthdate, social security number, phone number, age, address, and credit card numbers. The sensitive data 110 may also include, for example, account numbers, customer numbers, and/or rewards program account data associated with the different providers 118. The sensitive data 110 may include many other types of data that the user 116 and/or the provider 118 wishes to keep secure.[0046] The WCD 104 includes an NFC reader 112 and a communication interface 114. The NFC reader 112 allows the WCD 104 to communicate with the NFC card 102 using NFC communication protocols. The communication interface 114 may be, for example, a wireless communication interface that allows the WCD 104 to communicate to the one or more servers 106a, 106b ... 106n through wireless wide area networks (WW AN), cellular networks, and/or short range communication protocols such as WiFi®, Zigbee®, etc.[0047] According to one aspect, the WCD 104 may belong to and/or otherwise be associated with the user 116. For example, the WCD 104 may be a smartphone, laptop, tablet, personal digital assistant, desktop terminal, wearable device including a smartwatch and a head-mounted display that the user 116 carries along with them in addition to their NFC card 102. This enables the user 116 to use their WCD 104 at their convenience to read and/or write data 110 from/to their NFC card 102. For example, the WCD 104 may include a software application that the user 116 can use to program (e.g., write data) their NFC card 102 to include certain data 110.[0048] According to another aspect, the WCD 104 may belong to and/or otherwise be associated with a provider 118. 
For example, the WCD 104 may be a smartphone, laptop, tablet, desktop terminal, kiosk, wearable device including a smartwatch and a head-mounted display, or a hand-held electronic device located at the provider 118. An employee or staff member of the provider 118 may use the WCD 104 to read and/or write data 110 from/to the user's 116 NFC card 102. [0049] FIG. 2 illustrates a high level block diagram of the NFC infrastructure 100 depicting an enrollment process according to one aspect. In the illustrated example, a user enrolls a provider into their NFC card 102 (e.g., NFC card A). Each provider enrolled onto the user's NFC card 102 is identified with a unique provider identification number (PID) so that the NFC card 102 can distinguish between providers and their levels of secure data 110 access. During enrollment, a server 106a associated with the provider may provide its PID to the NFC card 102. The server 106a may in some cases also provide additional data (e.g., Data1). This may be accomplished by transmitting 202 the PID + Data1 to the WCD 104, which in turn transmits 204 the PID + Data1 to the NFC card 102 through NFC protocols. In the case where the WCD 104 belongs to or is otherwise associated with the user, an application 206 resident on the WCD 104 may be utilized by the user to have the WCD 104 receive 202 the PID + Data1 from the server 106a and transmit 204 the PID + Data1 to the user's NFC card 102. In another case, the user may approach the WCD 104 located at and associated with the provider for their NFC card 102 to receive 204 the PID + Data1. [0050] Once the NFC card 102 has received 204 the PID + Data1, the NFC card 102 may store 207 this data. It then assigns and associates 208 a privilege mask (e.g., security mask) with the PID. In one aspect, the NFC card 102 and the provider's server 106a together agree upon the privilege mask to be assigned and associated 208 with the PID. The privilege mask helps define the specific data 110 that may be provided to the provider and its server 106a when, for example, the provider later attempts to access the data 110 stored on the NFC card 102. [0051] Depending on the privilege mask assigned 208 at the NFC card 102 for a given provider, the NFC card 102 generates and transmits 210 one or more responses of varying complexity and security back to the provider's server 106a. As will be described in greater detail below, in one aspect the response may simply provide a user identification (UID) number associated with the NFC card 102 back to the server 106a in response to reception 204 of the provider's PID. In other aspects, the security and complexity of the response (or a set of responses) may increase and incorporate, for example, responses to challenges generated by the NFC card's PUF 108. [0052] Notwithstanding the specific response protocol mandated by the privilege mask, the response may be transmitted 210, 212 to the provider's server 106a from the NFC card 102 via the WCD 104. As before, if the WCD 104 belongs to or is otherwise associated with the user, the user may utilize an application 206 on the WCD 104 to receive 210 and transmit 212 the NFC card's 102 response to the server 106a. The server 106a may then store 214 the response received from the NFC card 102 in a memory circuit 216. The server 106a may store a different response received from each and every NFC card it enrolls with. Thus, the server 106a stores 214 an entry that identifies the NFC card enrolled and its corresponding response.
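As a rough sketch of the kind of per-card entry the server 106a might keep, the following hypothetical Python fragment stores the identity of each enrolled NFC card together with its captured response and later checks a presented UID against those entries. The record layout and helper names are invented for illustration; in the simplest configuration the stored response is just the card's UID, while in the PUF-based configurations it would be a reproduced PUF-derived value.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class EnrollmentRecord:
    uid: str         # identifies the enrolled NFC card
    response: bytes  # response captured at enrollment (UID echo or PUF-derived value)

@dataclass
class ProviderServer:
    pid: str
    enrolled: Dict[str, EnrollmentRecord] = field(default_factory=dict)

    def store_enrollment(self, uid: str, response: bytes) -> None:
        """Keep one entry per enrolled card: its identity and its response."""
        self.enrolled[uid] = EnrollmentRecord(uid=uid, response=response)

    def verify_uid(self, uid: str) -> bool:
        """During later verification, check a presented UID against the
        entries captured at enrollment."""
        return uid in self.enrolled

# Example usage.
server = ProviderServer(pid="PID-A")
server.store_enrollment(uid="CARD-0001", response=b"\x12\x34")
assert server.verify_uid("CARD-0001") and not server.verify_uid("CARD-9999")
```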
The server 106a may also store the PID + Data1 sent 204 to the NFC card 102. According to one aspect, the memory circuit 216 may be any type of non-volatile storage medium including, but not limited to, flash memory, magnetic storage devices, solid state drives, optical storage devices, tapes, etc. [0053] FIG. 3 illustrates a high level block diagram of the NFC infrastructure 100 depicting a verification process according to one aspect. In the illustrated example, data 110 on an NFC card 102 is provided to a previously enrolled provider after optionally verifying the authenticity of the provider and/or the user's NFC card 102. During the verification process a server 106a associated with the provider may provide its PID in addition to some data (e.g., Data2) to the NFC card 102. This may be accomplished by transmitting 302 the PID + Data2 to the WCD 104, which in turn transmits 304 the PID + Data2 to the NFC card 102 through NFC protocols. In the case where the WCD 104 belongs to or is otherwise associated with the user, an application 206 resident on the WCD 104 may be utilized by the user to have the WCD 104 receive 302 the PID + Data2 from the server 106a and transmit 304 the PID + Data2 to the user's NFC card 102. In another case, the user may approach the WCD 104 located at and associated with the provider for their NFC card 102 to receive 304 the PID + Data2. [0054] Once the NFC card 102 has received 304 the PID + Data2, the NFC card 102 may optionally verify 306 the server 106a based on the PID and/or Data2. This may be accomplished by performing operations and making comparisons to previously stored data (e.g., PID and Data1) associated with the server 106a. Assuming the NFC card 102 verifies 306 the authenticity of the server 106a, the NFC card 102 applies 308 the privilege mask associated with the provider and/or server 106a based on the PID. Depending on the privilege mask applied 308, the NFC card 102 generates and transmits 310 one or more responses of varying complexity and security back to the provider's server 106a. As will be described in greater detail below, in one aspect the response may simply provide a user identification (UID) number associated with the NFC card 102 back to the server 106a in response to reception 304 of the provider's PID + Data2. In other aspects, the security and complexity of the one or more responses may increase and incorporate, for example, responses to challenges generated by the NFC card's PUF 108. [0055] Notwithstanding the specific response protocol mandated by the privilege mask, the response may be transmitted 310, 312 to the provider's server 106a from the NFC card 102 via the WCD 104. As before, if the WCD 104 belongs to or is otherwise associated with the user, the user may utilize an application 206 on the WCD 104 to receive 310 and transmit 312 the NFC card's 102 response to the server 106a. The server 106a may then optionally verify 314 the response received from the NFC card 102 by, for example, comparing it to previously acquired response(s) stored 214 in its memory circuit 216. Thus, if the response received 312 at the server 106a during verification matches previously stored 214 response data associated with the NFC card 102 (e.g., NFC card A), then the NFC card 102 may be authenticated and data 110 retrieved from the NFC card 102 may be trusted. [0056] FIG. 4 illustrates a high level schematic block diagram of the NFC infrastructure 100 according to one aspect.
The illustrated example shows how each of the plurality of providers' servers 106a, 106b ... 106n may transmit different PIDs (e.g., PIDA, PIDB, PIDn) that uniquely identify them in order to enroll and verify with the NFC card 102 (e.g., NFC card A). In this fashion multiple providers may enroll in the system using their unique PID. The NFC card 102 may store data 402, 404, 406 (e.g., PID information and other data) associated with each of the enrolled providers' servers 106a, 106b ... 106n. Each server 106a, 106b ... 106n may store the unique response 410, 412, 414 generated by the NFC card 102 in response to the PID and other data supplied to the NFC card 102. These responses 410, 412, 414 are stored so that the servers 106a, 106b ... 106n may later verify the authenticity of the NFC card 102. The servers 106a, 106b ... 106n may store the responses of each and every different NFC card they've enrolled with. The WCD 104 may act as a communication conduit between the servers 106a, 106b ... 106n and the NFC card 102.[0057] FIG. 5 illustrates a schematic block diagram of the sensitive data 110 stored at the NFC card 102 according to one aspect of the disclosure. The sensitive data 110 may be apportioned according to privilege masks such that only portions of it are designated as being available to and/or associated with one or more providers and/or their servers. For example, referring to FIGS. 1, 4, and 5, a first provider sensitive data 502 may represent at least a portion of the sensitive data 110 stored at the NFC card 102 that can be made available to (e.g., transmitted to) a first provider's (e.g., provider A) server 106a according to a first privilege mask. The first provider sensitive data 502 may include, for instance, the NFC card user's name, age, and a photo of the user 116. Thus, at most only this information may be made available to and/or transmitted to the first provider's server 106a, assuming successful authentication of the NFC card 102 and/or the server 106a when necessary.[0058] Similarly, a second provider sensitive data 504 may represent at least a portion of the sensitive data 110 stored at the NFC card 102 that can be made available to (e.g., transmitted to) a second provider's (e.g., provider B) server 106b according to a second privilege mask. The second provider sensitive data 504 may include, for instance, the NFC card user's 116 rewards program card/account number associated with the second provider. Thus, at most only this information may be made available to and/or transmitted to the second provider's server 106b, assuming successful authentication of the NFC card 102 and/or the server 106b when necessary. In this fashion, the sensitive data 110 may include N number of provider sensitive data 502, 504, 506 that is associated with N different providers and/or servers.[0059] FIG. 6 illustrates a process flow diagram of a first exemplary security configuration 600 for an NFC infrastructure according to one aspect. Referring to FIGS. 1, 5, and 6, the first security configuration 600 provides a first level of security for the NFC infrastructure 100 that includes simple identification of the NFC card 102 and the server 106a. The first security configuration 600 is simple in that it doesn't necessarily provide cryptographic authentication of either the NFC card 102 or the server 106a.[0060] The first security configuration's 600 enrollment process may begin with the provider's server 106a transmitting 602 its unique PID to the NFC card 102. 
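The privilege-mask partitioning of FIG. 5 can be pictured with a minimal sketch. The field names, the encoding of a privilege mask as a set of permitted fields, and the PID labels below are illustrative assumptions; the disclosure does not prescribe any particular representation of a privilege mask.

```python
# Minimal sketch of privilege-mask-based partitioning of sensitive data.
# All field names and mask values here are illustrative assumptions; the
# disclosure does not mandate any particular encoding for a privilege mask.

SENSITIVE_DATA = {
    "name": "Jane Doe",
    "age": 34,
    "photo": b"<jpeg bytes>",
    "rewards_account": "RW-12345",
}

# A privilege mask is modeled as the set of fields a provider may receive.
PRIVILEGE_MASKS = {
    "PID_A": {"name", "age", "photo"},   # e.g., first provider sensitive data 502
    "PID_B": {"rewards_account"},        # e.g., second provider sensitive data 504
}

def apply_privilege_mask(pid: str) -> dict:
    """Return only the provider sensitive data permitted for this PID."""
    allowed = PRIVILEGE_MASKS.get(pid, set())
    return {k: v for k, v in SENSITIVE_DATA.items() if k in allowed}

if __name__ == "__main__":
    print(apply_privilege_mask("PID_A"))   # name, age, photo only
    print(apply_privilege_mask("PID_B"))   # rewards account only
```

In this model the card remains the final arbiter: a provider may request a broader mask, but only the entry stored against its PID determines what is ever returned.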
The NFC card 102 may then store 604 the PID received and assign 606 a privilege mask associated with the PID. The privilege mask helps define the portion of the sensitive data 110 that is the provider sensitive data (e.g., the first provider sensitive data 502) that may be provided to the server 106a when, for example, the server 106a later attempts to access sensitive data 110 stored on the NFC card 102. According to one aspect, the NFC card 102 and the server 106a negotiate the privilege mask to be assigned before the NFC card 102 assigns it. This may include the server 106a transmitting a privilege mask request to the NFC card 102 that indicates a desired level of access to certain types of sensitive data 110 stored on the NFC card 102. The NFC card 102, however, may ultimately decide whether or not to grant the server's 106a request and which privilege mask to assign 606 to its PID. In this fashion, the NFC card 102 may control what sensitive data is included in the provider sensitive data 502 that the server 106a may later access after verification. After the privilege mask is assigned 606, the NFC card 102 transmits 608 the user identification (UID) number associated with the NFC card 102 to the server 106a. The UID is a unique number or value that uniquely identifies the NFC card 102. The server 106a then stores 610 the UID associated with the NFC card 102.[0061] The first security configuration's 600 verification process may begin with the provider's server 106a transmitting 652 its PID to the NFC card 102. The NFC card 102 may then look up the privilege mask settings associated with the PID received and apply 654 the privilege mask associated with the PID. The NFC card 102 then transmits 656 its user identification (UID) number and the first provider sensitive data 502 associated with the provider and/or server 106a to the server 106a. The server 106a then verifies 658 that it indeed has a UID value stored that matches the UID received from the NFC card 102. Assuming it does, the server 106a may then accept 660 the first provider sensitive data 502 received.[0062] FIGS. 7A and 7B illustrate a process flow diagram of a second exemplary security configuration 700 for an NFC infrastructure according to one aspect. Referring to FIGS. 1, 5, 7A, and 7B, the second security configuration 700 provides a second level of security for the NFC infrastructure 100. Specifically, the second security configuration 700 allows the server 106a to cryptographically authenticate the NFC card 102 to help determine the NFC card's 102 identity.[0063] The second security configuration's 700 enrollment process may begin with the provider's server 106a transmitting 702 a message to the NFC card 102 that includes its PID and a hash (the hash function is denoted herein as h()) of the PID and a first random number r1 (i.e., h(PID, r1)). The NFC card 102 may then store 704 the PID received and assign 706 a privilege mask associated with the PID. According to one aspect, the NFC card 102 and the server 106a negotiate the privilege mask assigned (e.g., server 106a transmits a privilege mask request indicating the desired level of access to sensitive data). After the privilege mask is assigned 706, the NFC card 102 executes 708 its PUF 108 with h(PID, r1) as the challenge thereby generating a noisy response ke. The response ke is noisy in that it is different to some degree from the expected, clean (i.e., non-noisy) response k due to noise (e.g., thermal, electrical, etc.) 
associated with the NFC card 102 and/or the PUF 108 device itself. Thus, ideally the PUF 108 would generate the response k to the challenge h(PID, r1), but due to noise the PUF actually generates the noisy response ke. However, as is known in the art, helper data may be generated using a helper data generation function (denoted herein as Gen()) to enable reproduction of the clean PUF response based on the noisy response.[0064] After the NFC card 102 generates the noisy response ke, the NFC card 102 executes 710 Gen(ke) to generate helper data he. The NFC card 102 may then transmit 712 a message to the server 106a that includes its UID, the noisy response ke, and the helper data he. The server 106a may then execute a reproduction function (denoted herein as Rep()) to reproduce the clean response k from the noisy response ke using the helper data he. Thus, the server 106a may execute 714 Rep(ke, he) to generate the clean response k. The server 106a may then store 716 the UID and the response k.[0065] The second security configuration's 700 verification process may begin with the provider's server 106a transmitting 752 a message to the NFC card 102 that includes PID, h(PID, r1), and r1. The NFC card 102 may then look up the privilege mask settings associated with the PID received and apply 754 the privilege mask associated with the PID. The NFC card 102 may then execute 756 its PUF 108 with h(PID, r1) as the challenge thereby generating an NFC target-generated second noisy response kv_t. The second noisy response kv_t is different than the first noisy response ke, even though the challenge (h(PID, r1)) is the same, due to the unpredictable nature of noise. The NFC card 102 then executes 758 Gen(kv_t) to generate helper data hv associated with the target-generated second noisy response kv_t.[0066] The NFC card 102 may also generate a value u1 that is equal to h(UID, hv, kv_t, r2), where r2 is a second random number different than r1. A message including the UID, u1, r2, hv, and the first provider sensitive data 502 associated with the provider and/or server 106a is then transmitted 762 from the NFC card 102 to the server 106a.[0067] The server 106a may then retrieve 764 the previously stored clean response k associated with the UID value received, and execute 766 Rep(k, hv) to reproduce a provider-generated second noisy response kv_p. The target-generated second noisy response kv_t and the provider-generated second noisy response kv_p should be the same if the NFC card 102 and the server 106a are both the same devices that took part in the enrollment process with each other (i.e., neither one is being impersonated by a different device). The server 106a then generates 768 u2 = h(UID, hv, kv_p, r2) using the provider-generated second noisy response kv_p that it reproduced 766 earlier to verify 770 whether the value u1 it received from the NFC card 102 matches (e.g., equals) the value u2 it generated. If u1 = u2, then the NFC card 102 is authenticated by the server 106a and the server 106a can reliably accept 772 the first provider sensitive data 502 received.
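The Gen()/Rep() helper-data mechanism above is not tied to any particular construction in the disclosure. The following is a minimal sketch of one well-known code-offset style approach using a simple repetition code; it only illustrates how helper data allows a stable value to be reproduced from a noisy PUF reading, and its Gen()/Rep() signatures are assumptions rather than the exact functions called out at steps 710, 714, 758, and 766.

```python
import secrets

R = 5  # repetition factor per secret bit (illustrative choice)

def gen(puf_reading: list[int]) -> tuple[list[int], list[int]]:
    """Gen(): derive helper data from a PUF reading. The reading length must
    be a multiple of R. Returns (helper, k), where k is the stable value that
    rep() can later reproduce from a noisy re-reading."""
    assert len(puf_reading) % R == 0
    k = [secrets.randbelow(2) for _ in range(len(puf_reading) // R)]
    codeword = [bit for bit in k for _ in range(R)]              # repetition encode
    helper = [r ^ c for r, c in zip(puf_reading, codeword)]      # code offset
    return helper, k

def rep(noisy_reading: list[int], helper: list[int]) -> list[int]:
    """Rep(): reproduce the stable value k from a noisy re-reading plus the
    helper data, by majority decoding each repetition block."""
    offset = [r ^ h for r, h in zip(noisy_reading, helper)]
    return [1 if sum(offset[i:i + R]) > R // 2 else 0
            for i in range(0, len(offset), R)]

# Usage: an enrollment reading versus a later reading with two bit flips.
enrolled = [1, 0, 1, 1, 0] * 4                  # 20-bit toy PUF response
helper, k = gen(enrolled)
later = enrolled[:]
later[3] ^= 1                                   # simulate noise
later[17] ^= 1
assert rep(later, helper) == k                  # k is reproduced exactly
```

The key property mirrored here is the one the disclosure relies on: the helper data alone does not reveal the response, but together with a sufficiently close reading it allows the same value to be recovered on both sides.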
[0068] FIGS. 8A and 8B illustrate a process flow diagram of a third exemplary security configuration 800 for an NFC infrastructure according to one aspect. Referring to FIGS. 1, 5, 8A, and 8B, the third security configuration 800 provides a third level of security for the NFC infrastructure 100. Specifically, the third security configuration 800 allows the server 106a and the NFC card 102 to cryptographically authenticate each other.[0069] The third security configuration's 800 enrollment process may begin with the provider's server 106a transmitting 802 a message to the NFC card 102 that includes its PID and h(PID, r1), where r1 is a first random number generated and stored at the server 106a. The NFC card 102 may then store 804 the PID received and assign 806 a privilege mask associated with the PID. According to one aspect, the NFC card 102 and the server 106a negotiate the privilege mask assigned (e.g., server 106a transmits a privilege mask request indicating the desired level of access to sensitive data). After the privilege mask is assigned 806, the NFC card 102 executes 808 its PUF 108 with h(PID, r1) as the challenge thereby generating a noisy response ke.[0070] After the NFC card 102 generates the noisy response ke, the NFC card 102 executes 810 Gen(ke) to generate helper data he. The NFC card 102 may then transmit 812 a message to the server 106a that includes its UID, the noisy response ke, and the helper data he. The server 106a may then execute 814 Rep(ke, he) to generate the clean response k. The server 106a stores 816 the UID and the response k. [0071] The third security configuration's 800 verification process may begin with the NFC card 102 transmitting 852 its UID to the server 106a. After verifying 854 that the UID exists (e.g., the server 106a has an enrollment entry corresponding to the UID), the server 106a transmits 856 a message to the NFC card 102 that includes PID, h(PID, r1), and r1. The NFC card 102 may then execute 858 its PUF 108 with h(PID, r1) as the challenge thereby generating a target-generated second noisy response kv_t. The target-generated second noisy response kv_t is different than the first noisy response ke. The NFC card 102 then executes 860 Gen(kv_t) to generate helper data hv associated with the second noisy response kv_t, and transmits 862 a message including the helper data hv and a second random number r2 (different than the first random number r1) to the server 106a.[0072] The server 106a may then retrieve 864 the previously stored clean response k associated with the UID value received, and execute 866 Rep(k, hv) to reproduce a provider-generated second noisy response kv_p. The server 106a then generates 868 u1 = h(UID, kv_p, r2, r3) using the provider-generated second noisy response kv_p that it reproduced 866 earlier. The third random number r3 is another random number different than the first and second random numbers r1, r2. A message including u1 and r3 is then transmitted 870 to the NFC card 102.[0073] The NFC card 102 then generates 872 u2 = h(UID, kv_t, r2, r3) using the target-generated second noisy response kv_t that it previously generated 858, and verifies 874 whether the value u1 it received from the server 106a matches (e.g., equals) the value u2 it generated. Assuming u1 = u2, the server 106a is authenticated and the NFC card 102 generates 876 u3 = h(UID, kv_t, r3) and applies 878 the privilege mask associated with the server's 106a PID. The NFC card 102 then transmits 880 the value u3 and the first provider sensitive data 502 associated with the provider and/or server 106a to the server 106a.[0074] The server 106a then generates 882 u4 = h(UID, kv_p, r3) using the provider-generated second noisy response kv_p, and verifies 884 whether the value u3 that it received from the NFC card 102 matches (e.g., equals) the value u4 it generated. Assuming u3 = u4, the NFC card 102 is authenticated and the server 106a may reliably accept 886 the first provider sensitive data 502 received.
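The mutual authentication checks of the third security configuration reduce to four hash comparisons. The sketch below assumes SHA-256 as the hash h() (the disclosure does not name one) and assumes that Rep() has already reproduced kv_p equal to kv_t; the variable names mirror the values u1 through u4 described above.

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """Stand-in for the disclosure's hash h(); SHA-256 over the concatenated
    inputs is an assumption, not something mandated by the text."""
    digest = hashlib.sha256()
    for p in parts:
        digest.update(p)
    return digest.digest()

UID = b"card-A"
kv_t = secrets.token_bytes(16)   # target-generated second noisy response
kv_p = kv_t                      # assume Rep(k, hv) reproduced the same value
r2, r3 = secrets.token_bytes(8), secrets.token_bytes(8)

# Server -> card: u1 lets the card authenticate the server (steps 868-874).
u1 = h(UID, kv_p, r2, r3)
u2 = h(UID, kv_t, r2, r3)
assert u1 == u2                  # card accepts the server as authentic

# Card -> server: u3 lets the server authenticate the card (steps 876-884).
u3 = h(UID, kv_t, r3)
u4 = h(UID, kv_p, r3)
assert u3 == u4                  # server accepts the card and the data
```

If either side was not the device that took part in enrollment, its copy of the response differs and the corresponding comparison fails, so the sensitive data is not accepted.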
[0075] FIG. 9 illustrates a process flow diagram of a fourth exemplary security configuration 900 for an NFC infrastructure according to one aspect. Referring to FIGS. 1, 5, 8A, 8B, and 9, the fourth security configuration 900 provides a fourth level of security for the NFC infrastructure 100 that builds upon the third security configuration 800. Like the third security configuration 800, the fourth security configuration also allows the server 106a and the NFC card 102 to cryptographically authenticate each other but has the added feature of encryption of the first provider sensitive data 502 transmitted to the server 106a from the NFC card 102.[0076] For example, the fourth security configuration 900 includes mutual authentication of the NFC card 102 and the server 106a based on processes and communications 802, 804, 806, 808, 810, 812, 814, 816, 852, 854, 856, 858, 860, 862, 864, 866, 868, 870, 872, 874, 876, 878 of the third security configuration 800. As an additional measure of security, however, after applying 878 the privilege mask the NFC card 102 transmits 902 a message including the value u3 and an encrypted (denoted by the function Enc()) version of the first provider sensitive data 502 associated with the provider and/or server 106a. The first provider sensitive data 502 is encrypted using the target-generated second noisy response kv_t as the key. After verifying 884 that the value u3 = u4, the server 106a accepts and decrypts 904 the sensitive data received using the provider-generated second noisy response kv_p that the server 106a previously generated 866. In one case, the NFC card 102 may encrypt the entire message transmitted 902 using the target-generated second noisy response kv_t as the key. In such a case the server 106a decrypts the received message first using its provider-generated second noisy response kv_p before generating u4 and verifying that u3 = u4.[0077] The examples illustrated in FIGS. 6-9 show communication exchanges between an NFC card 102 and a provider's server 106a. Although these communication exchanges are shown as occurring directly between the NFC card 102 and the server 106a, in some aspects a WCD 104 (see FIGS. 1-4) may act as a communication intermediary between the NFC card 102 and the server 106a (e.g., transmitting/receiving data to/from the NFC card 102 using NFC and transmitting/receiving data to/from the server 106a using long or short range communication protocols). In other aspects the server 106a may have an NFC interface itself and thus may communicate directly with the NFC card 102 using NFC.
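For the fourth security configuration, the disclosure states only that the first provider sensitive data is encrypted with kv_t as the key and decrypted with kv_p. The sketch below assumes an authenticated cipher (AES-GCM from the third-party cryptography package) and a nonce-prefix framing; both are illustrative choices rather than requirements of the text.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def enc(kv_t: bytes, plaintext: bytes) -> bytes:
    """Enc(): encrypt provider sensitive data under kv_t. AES-GCM and the
    nonce-prefix framing are assumptions; the disclosure only states that
    kv_t is used as the key."""
    nonce = os.urandom(12)
    return nonce + AESGCM(kv_t).encrypt(nonce, plaintext, None)

def dec(kv_p: bytes, blob: bytes) -> bytes:
    """Decrypt at the server using the reproduced response kv_p."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(kv_p).decrypt(nonce, ciphertext, None)

kv_t = os.urandom(16)        # card-side response (16 bytes -> AES-128 key)
kv_p = kv_t                  # server-side reproduction via Rep(k, hv)
blob = enc(kv_t, b"first provider sensitive data 502")
assert dec(kv_p, blob) == b"first provider sensitive data 502"
```

A real deployment would likely pass kv_t through a key-derivation step before using it as a cipher key; that refinement is outside what the text describes and is omitted here.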
[0078] FIG. 10 illustrates a process flow diagram 1000 of an enrollment process taking place at the NFC target device 102 based on the first security configuration 600 according to one aspect of the disclosure. First, the NFC target device receives a first PID number from a first provider, where the first PID number identifies the first provider 1002. Next, the NFC target device stores the first PID number at a memory circuit 1004. Then, the target device assigns a first privilege mask to the first PID number, where the first privilege mask designates at least a portion of the sensitive data as first provider sensitive data 1006. Next, the NFC target device transmits a user identification (UID) number associated with the NFC target device to the first provider, where the UID number identifies the NFC target device 1008.[0079] FIG. 11 illustrates a process flow diagram 1100 of a verification process taking place at the NFC target device 102 based on the first security configuration 600 according to one aspect of the disclosure. First, the NFC target device receives the first PID number from the first provider 1102. Next, the device applies the first privilege mask assigned to the first PID number to limit the sensitive data available to the first provider to the first provider sensitive data 1104. Then, the target device transmits the UID number and the first provider sensitive data to the first provider 1106.[0080] FIG. 12 illustrates a process flow diagram 1200 of an enrollment process taking place at the NFC target device 102 based on the second security configuration 700 according to one aspect of the disclosure. First, the NFC target device receives a first message from a first provider, where the first message includes a first PID number and a hash of the first PID number and a first random number r1, and the first PID number identifies the first provider 1202. Next, the NFC target device stores the first PID number at a memory circuit 1204. Then, the target device assigns a first privilege mask to the first PID number, where the first privilege mask designates at least a portion of the sensitive data as first provider sensitive data 1206. Next, the NFC target device provides the hash of the first PID number and the first random number r1 to a PUF circuit as an input challenge to obtain a first noisy response ke 1208. Then, the target device executes a helper data generation function to generate helper data he associated with the first noisy response ke 1210. Next, the target device transmits a second message to the first provider, the second message including a user identification (UID) number, the first noisy response ke, and the helper data he, where the UID number identifies the NFC target device 1212.[0081] FIG. 13 illustrates a process flow diagram 1300 of a verification process taking place at the NFC target device 102 based on the second security configuration 700 according to one aspect of the disclosure. First, the NFC target device receives a third message from the first provider, where the third message includes the first PID number, the first random number r1, and the hash of the first PID number and the first random number r1 1302. Then, the target device applies the first privilege mask assigned to the first PID number to limit the sensitive data available to the first provider to the first provider sensitive data 1304. Next, the target device provides the hash of the first PID number and the first random number r1 to the PUF circuit as an input challenge to obtain a target-generated second noisy response kv_t 1306. Then, the target device executes the helper data generation function to generate helper data hv associated with the target-generated second noisy response kv_t 1308. 
Next, the target device computes a value u1 that includes a hash of the UID number, the helper data hv, the target-generated second noisy response kv_t, and the second random number r2 1310. Then, the target device transmits a fourth message to the first provider, where the fourth message includes the UID number, the value u1, the second random number r2, the helper data hv, and the first provider sensitive data 1312.[0082] FIGS. 14A and 14B illustrate a process flow diagram 1400 of a verification process taking place at the NFC target device 102 based on the third security configuration 800 according to one aspect of the disclosure. First, the NFC target device transmits the UID number to the first provider 1402. Next, the target device receives a third message from the first provider, where the third message includes the first PID number, the first random number r1, and the hash of the first PID number and the first random number r1 1404. Then, the target device provides the hash of the first PID number and the first random number r1 to the PUF circuit as an input challenge to obtain a target-generated second noisy response kv_t 1406. Next, the target device executes the helper data generation function to generate helper data hv associated with the target-generated second noisy response kv_t 1408. Then, the target device transmits a fourth message to the first provider, where the fourth message includes a second random number r2 and the helper data hv, and the second random number r2 is different than the first random number r1 1410. Next, the target device receives a fifth message from the first provider, where the fifth message includes a third random number r3 and a value u1, the third random number being different than the first random number r1 and the second random number r2, and the value u1 is based on a hash of the UID number, a provider-generated second noisy response kv_p, the second random number r2, and the third random number r3 1412. [0083] Then, the target device computes a value u2 based on a hash of the UID number, the target-generated second noisy response kv_t, the second random number r2, and the third random number r3 1414. Next, the target device authenticates the first provider based on the value u1 received and the value u2 computed 1416. Then, the target device applies the first privilege mask assigned to the first PID number to limit the sensitive data available to the first provider to the first provider sensitive data 1418. Next, the target device transmits a sixth message to the first provider, where the sixth message includes the first provider sensitive data and a hash of the UID number, the target-generated second noisy response kv_t, and the third random number r3 1420.[0084] FIG. 15 illustrates a process flow diagram 1500 of an enrollment process taking place at the server 106a based on the first security configuration 600 according to one aspect of the disclosure. First, the server transmits a provider identification (PID) number to an NFC target device, where the PID number is associated with a provider and is different than PID numbers of other providers 1502. Next, the server transmits a privilege mask request to the NFC target device, where the privilege mask request indicates a desired privilege mask to be associated with the PID number, the desired privilege mask designating at least a portion of sensitive data stored at the NFC target device as provider sensitive data that is associated with the provider and accessible by the server 1504. 
Then, the server receives a user identification (UID) number associated with the NFC target device from the NFC target device, where the UID number identifies the NFC target device 1506. Next, the server stores the UID number in a memory circuit 1508.[0085] FIG. 16 illustrates a process flow diagram 1600 of a verification process taking place at the server 106a based on the first security configuration 600 according to one aspect of the disclosure. First, the server transmits the PID number to the NFC target device 1602. Next, the server receives a first message from the NFC target device, where the first message includes the UID number and the provider sensitive data 1604. Then, the server verifies that the UID number received in the first message matches the UID number stored in the memory circuit 1606. Next, the server accepts the provider sensitive data received 1608.[0086] FIG. 17 illustrates a process flow diagram 1700 of an enrollment process taking place at the server 106a based on the second security configuration 700 according to one aspect of the disclosure. First, the server transmits a first message to an NFC target device, where the first message includes a PID number and a hash of the PID number and a first random number r1 1702. Next, the server receives a second message from the NFC target device, where the second message includes the UID number, a first noisy response ke, and helper data he that is associated with the first noisy response ke 1704. Then, the server executes a reproduction function based on the received first noisy response ke and the helper data he to reproduce a physical unclonable function (PUF) response k 1706. Next, the server stores the response k in a memory circuit 1708.[0087] FIGS. 18A and 18B illustrate a process flow diagram 1800 of a verification process taking place at the server 106a based on the second security configuration 700 according to one aspect of the disclosure. First, the server transmits a third message to the NFC target device, where the third message includes the PID number, the first random number r1, and the hash of the PID number and the first random number r1 1802. Then, the server receives a fourth message from the NFC target device, where the fourth message includes the UID number, a value u1, a second random number r2 different than the first random number r1, helper data hv associated with a target-generated second noisy response kv_t, and the provider sensitive data 1804. Next, the server obtains the PUF response k based on the UID number of the NFC target device 1806. Then, the server executes the reproduction function based on the PUF response k and the helper data hv to reproduce a provider-generated second noisy response kv_p 1808. Next, the server computes a value u2 based on a hash of the UID number, the helper data hv, the provider-generated second noisy response kv_p, and the second random number r2 1810. Then, the server authenticates the NFC target device based on the value u1 received and the value u2 computed 1812. Next, the server accepts the provider sensitive data received 1814.[0088] FIGS. 19A and 19B illustrate a process flow diagram 1900 of a verification process taking place at the server 106a based on the third security configuration 800 according to one aspect of the disclosure. First, the server receives the UID number from the NFC target device 1902. Then, the server verifies the UID number received from the NFC target device matches the UID number stored in the memory circuit 1904. 
Next, the server transmits a third message to the NFC target device, where the third message includes the PID number, the first random number r1, and the hash of the PID number and the first random number r1 1906. Then, the server receives a fourth message from the NFC target device, where the fourth message includes a second random number r2 different than the first random number r1 and helper data hv associated with a target-generated second noisy response kv_t 1908. Next, the server obtains the PUF response k based on the UID number of the NFC target device 1910. Then, the server executes the reproduction function based on the PUF response k and the helper data hv to reproduce a provider-generated second noisy response kv_p 1912.[0089] Next, the server computes a value u1 based on a hash of the UID number, the provider-generated second noisy response kv_p, the second random number r2, and a third random number r3 different than the first random number r1 and the second random number r2 1914. Then, the server transmits a fifth message to the NFC target device, where the fifth message includes the value u1 and the third random number r3 1916. Next, the server receives a sixth message from the NFC target device, where the sixth message includes the provider sensitive data and a value u2 based on a hash of the UID number, the target-generated second noisy response kv_t, and the third random number r3 1918. Then, the server computes a value u3 based on a hash of the UID number, the provider-generated second noisy response kv_p, and the third random number r3 1920. Next, the server authenticates the NFC target device based on the value u2 received and the value u3 computed 1922. Then, the server accepts the provider sensitive data received 1924.[0090] FIG. 20 illustrates a schematic block diagram of one example of the NFC card's PUF 108 according to one aspect. As shown, the PUF 108 may be a ring oscillator based PUF. A plurality of ring oscillators (ROs) 2002 may be concurrently enabled and their outputs are sent to two or more switches (multiplexers) 2004, 2006. A challenge serves as input to each switch 2004, 2006, which causes each switch 2004, 2006 to then select a single RO from among the plurality of ROs 2002. The challenges sent to the switches 2004, 2006 are designed such that each switch 2004, 2006 selects a different RO. The selected ROs each have a slightly different resonating frequency associated with them due to slight semiconductor-level manufacturing variations, even though each may have been manufactured to be identical. The PUF output (response) is generated by a pair-wise comparison 2008 of these selected ring oscillators' frequencies as measured/stored by the counters 2010, 2012. For example, if the first counter 2010 detects a higher frequency than the second counter 2012, then a logical "1" may be generated, otherwise a logical "0" may be generated. In this fashion, the comparisons made represent a challenge/response mechanism where the chosen RO pair is the challenge and the RO frequency comparison result is the response. The same challenge issued to different yet (almost) identically manufactured NFC cards 102 having the PUF 108 will lead to different response values. This in turn helps identify one NFC card 102 from another even though the NFC cards 102 may have been manufactured to be the same.
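A toy software model of the FIG. 20 ring-oscillator PUF may help make the challenge/response behavior concrete. The simulated frequencies, the seed standing in for manufacturing variation, and the absence of measurement noise are all simplifying assumptions introduced for illustration.

```python
import random

class RingOscillatorPUF:
    """Toy model of the FIG. 20 ring-oscillator PUF. Real frequencies come
    from per-device manufacturing variation; here they are simulated."""

    def __init__(self, num_ros=64, seed=None):
        rng = random.Random(seed)   # seed stands in for process variation
        self.freqs = [1_000_000 + rng.gauss(0, 500) for _ in range(num_ros)]

    def respond(self, challenge):
        """Each challenge element picks two different ROs; the response bit
        is 1 if the first counter sees the higher frequency, else 0."""
        return [1 if self.freqs[a] > self.freqs[b] else 0 for a, b in challenge]

challenge = [(0, 1), (2, 3), (10, 20), (5, 40)]
card_a = RingOscillatorPUF(seed=1)   # two "identically manufactured" cards
card_b = RingOscillatorPUF(seed=2)
print(card_a.respond(challenge))     # device-specific bit pattern
print(card_b.respond(challenge))     # generally differs from card_a's pattern
```

In hardware the same challenge produces a slightly noisy response from reading to reading, which is precisely why the Gen()/Rep() helper-data step described earlier is needed before the response can be used in the hash comparisons.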
[0091] The example shown in FIG. 20 is just one example of a PUF that may be used for the NFC card's PUF 108. Many other PUF designs including, but not limited to, arbiter PUFs, SRAM PUFs, etc. may also be used as the NFC card's 102 PUF 108.[0092] FIG. 21 illustrates a schematic block diagram of an NFC target device 2100 according to one aspect of the disclosure. The NFC target device 2100 may include the PUF circuit 108, a processing circuit 2102, a memory circuit 2104, an input/output (I/O) interface 2106, and/or an NFC communication interface 2108. One or more of these components/circuits 108, 2102, 2104, 2106, 2108 may be communicatively coupled to each other through, for example, a communication bus 2110. The NFC target device 2100 may have its own power source (not shown) or instead may be passive in that it is powered by RF radiation provided by an NFC interrogator.[0093] The memory circuit 2104 may include, among other things, sensitive data associated with a user of the NFC target device 2100 and/or one or more providers. For example, the memory circuit 2104 may include the sensitive data 110 shown and described in FIGS. 1, 2, 3, 4, 5, 6, 7A, 7B, 8A, 8B, 9, 10, 11, 12, 13, 14A, 14B, 15, 16, 17, 18A, 18B, 19A, and/or 19B. The I/O interface 2106 may include one or more buttons, a keyboard, sensors, camera, touchscreen, etc. Alternatively, the NFC target device 2100 may not have an I/O interface 2106. The NFC communication interface 2108 allows the NFC target device 2100 to transmit and receive data through NFC protocols. For example, the NFC communication interface 2108 allows the NFC target device 2100 to communicate with the WCD 104 having an NFC interrogator and/or a server 106a equipped with an NFC interrogator.[0094] The processing circuit 2102 is generally adapted to execute processes and/or software instructions stored on the memory circuit 2104. The processing circuit 2102 may be adapted to: receive a plurality of provider identification (PID) numbers from a plurality of providers, where each PID number is associated with a different provider; store the PID numbers at the memory circuit 2104; assign a privilege mask of a plurality of privilege masks to each PID number received and stored; transmit to each provider the portion of the sensitive data associated with the provider based on the privilege mask assigned to the PID number associated with the provider; receive a privilege mask request from each provider indicating a desired level of sensitive data access before assigning the privilege mask to each PID number received and stored; provide one or more PID numbers of the plurality of PID numbers as input challenges to the PUF circuit 108; receive one or more PUF output responses from the PUF circuit 108 in response to providing the one or more PID numbers as input challenges, the PUF output responses being different from one another and associated with different providers; and/or authenticate one or more providers using, at least in part, the one or more PUF output responses associated with the different providers.[0095] FIG. 22 illustrates a schematic block diagram of a server 2200 according to one aspect of the disclosure. The server 2200 may include a processing circuit 2202, a memory circuit 2204, an input/output (I/O) interface 2206, and/or a communication interface 2208. One or more of these components/circuits 2202, 2204, 2206, 2208 may be communicatively coupled to each other through, for example, a communication bus 2210.[0096] The memory circuit 2204 may store, among other things, an NFC card's UID and PUF responses k. 
The I/O interface 2206 may include a keyboard, mouse, touchscreen, camera, sensors, etc. The communication interface 2208 allows the server 2200 to transmit and receive data using short or long range communication protocols and may also allow for NFC protocols. For example, the communication interface 2208 allows the server 2200 to communicate with the WCD 104 and may also allow the server 2200 to communicate with the NFC card 102 directly through NFC protocols.[0097] The processing circuit 2202 is generally adapted to execute processes and/or software instructions stored on the memory circuit 2204. The processing circuit 2202 may be adapted to: transmit a provider identification (PID) number to the NFC target device; transmit a privilege mask request to the NFC target device, the privilege mask request indicating a desired privilege mask to be associated with the PID number; receive a user identification (UID) number associated with the NFC target device from the NFC target device; and/or store the UID number in the memory circuit 2204.[0098] One or more of the components, steps, features, and/or functions illustrated in FIGS. 1, 2, 3, 4, 5, 6, 7A, 7B, 8A, 8B, 9, 10, 11, 12, 13, 14A, 14B, 15, 16, 17, 18A, 18B, 19A, 19B, 20, 21, and/or 22 may be rearranged and/or combined into a single component, step, feature or function or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from the invention. The apparatus, devices, and/or components illustrated in FIGS. 1, 2, 3, 4, 5, 20, 21, and/or 22 may be configured to perform one or more of the methods, features, or steps described in FIGS. 6, 7A, 7B, 8A, 8B, 9, 10, 11, 12, 13, 14A, 14B, 15, 16, 17, 18A, 18B, 19A, and/or 19B. The algorithms described herein may also be efficiently implemented in software and/or embedded in hardware.[0099] Moreover, in one aspect of the disclosure, the processing circuit 2102 illustrated in FIG. 21 may be a specialized processor (e.g., an application specific integrated circuit (ASIC)) that is specifically designed and/or hard-wired to perform the algorithms, methods, and/or steps described in FIGS. 6, 7A, 7B, 8A, 8B, 9, 10, 11, 12, 13, 14A, and/or 14B and related text. Thus, such a specialized processor (e.g., ASIC) may be one example of a means for executing the algorithms, methods, and/or steps described in FIGS. 6, 7A, 7B, 8A, 8B, 9, 10, 11, 12, 13, 14A, and/or 14B.[00100] Similarly, in another aspect of the disclosure, the processing circuit 2202 illustrated in FIG. 22 may be a specialized processor (e.g., an ASIC) that is specifically designed and/or hard-wired to perform the algorithms, methods, and/or steps described in FIGS. 6, 7A, 7B, 8A, 8B, 9, 15, 16, 17, 18A, 18B, 19A, and/or 19B and related text. Thus, such a specialized processor (e.g., ASIC) may be one example of a means for executing the algorithms, methods, and/or steps described in FIGS. 6, 7A, 7B, 8A, 8B, 9, 15, 16, 17, 18A, 18B, 19A, and/or 19B.[00101] Also, it is noted that the aspects of the present disclosure may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. 
A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.[00102] Moreover, a storage medium may represent one or more devices for storing data, including read-only memory (ROM), random access memory (RAM), magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other machine-readable mediums and, processor-readable mediums, and/or computer- readable mediums for storing information. The terms "machine-readable medium", "computer-readable medium", and/or "processor-readable medium" may include, but are not limited to non-transitory mediums such as portable or fixed storage devices, optical storage devices, and various other mediums capable of storing or containing instruction(s) and/or data. Thus, the various methods described herein may be fully or partially implemented by instructions and/or data that may be stored in a "machine- readable medium", "computer-readable medium", and/or "processor-readable medium" and executed by one or more processors, machines and/or devices.[00103] Furthermore, aspects of the disclosure may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium such as a storage medium or other storage(s). A processor may perform the necessary tasks. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.[00104] The various illustrative logical blocks, modules, circuits, elements, and/or components described in connection with the examples disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing components, e.g., a combination of a DSP and a microprocessor, a number of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. [00105] The methods or algorithms described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executable by a processor, or in a combination of both, in the form of processing unit, programming instructions, or other directions, and may be contained in a single device or distributed across multiple devices. 
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.[00106] Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.[00107] The various features of the invention described herein can be implemented in different systems without departing from the invention. It should be noted that the foregoing aspects of the disclosure are merely examples and are not to be construed as limiting the invention. The description of the aspects of the present disclosure is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses and many alternatives, modifications, and variations will be apparent to those skilled in the art. |
A processor-based system may be controlled by a remote control unit. An image of the remote control unit may be displayed on a display associated with the processor-based system. When a particular button on the remote control unit is depressed, a corresponding indication may be provided on the image of the remote control unit. |
What is claimed is: 1. A method comprising:determining the ambient light conditions of the room; selectively displaying an image of a remote control unit on a tv screen depending on said light conditions; receiving a control signal from a remote control unit; and altering the image of said remote control unit in response to said control signal. 2. The method of claim 1 wherein receiving the control signal includes receiving a wireless control signal.3. The method of claim 1 further including identifying a particular control unit, checking a database of control units and displaying an image of the identified control unit.4. The method of claim 1 wherein said remote control unit includes a plurality of operable buttons, said method further including recognizing a particular button operation on said remote control unit, and altering the image of said remote control unit in response to operation of said button.5. The method of claim 3 wherein altering the image includes highlighting the image of a selected button on the image of said remote control unit.6. The method of claim 1 wherein said remote control unit includes a plurality of buttons, said method further including determining the nature of the operation of one of said buttons.7. The method of claim 6 including determining whether said button was lightly depressed or fully depressed.8. The method of claim 7 including altering the image of said remote control unit in response to a light depression of said button without executing the function associated with said button.9. The method of claim 1 including identifying a remote control unit by an identifier provided from said remote control unit and determining whether said identifier exists in a database.10. The method of claim 9 wherein if said identifier does not exist in a database, accessing a remote network to obtain information about said identifier, upon obtaining said identifier, obtaining information about the configuration of said remote control unit, and displaying an image of said remote control unit.11. An article comprising a medium storing instructions that enable a processor-based system to:determine the ambient light conditions of the room; selectively display an image of a remote control unit on a tv screen depending on the light conditions; receive a control signal from a remote control unit; and alter the image of said remote control unit in response to said control signal. 12. The article of claim 11 further storing instructions that enable the processor-based system to receive a wireless control signal.13. The article of claim 11 further storing instructions that enable the processor-based system to identify a particular control unit, check a database of control units, and display an image of the identified control unit.14. The article of claim 11 further storing instructions that enable the processor-based system to recognize a particular button operation on said remote control unit and alter the image of said remote control unit in response to operation of said button.15. The article of claim 14 further storing instructions that enable the processor-based system to highlight the image of the selected button on the image of said control unit.16. The article of claim 14 further storing instructions that enable the processor-based system to determine the nature of the operation of a remote control unit button.17. 
The article of claim 16 further storing instructions that enable the processor-based system to determine whether said button was lightly depressed or fully depressed.18. The article of claim 17 further storing instructions that enable the processor-based system to alter the image of said remote control unit in response to a light depression of said button without executing the function associated with said button.19. The article of claim 11 further storing instructions that enable the processor-based system to identify a remote control unit by an identifier provided from said remote control unit and determine whether said identifier exists in a database.20. The article of claim 19 further storing instructions that enable the processor-based system to access a remote network to obtain information about said identifier, and upon obtaining said identifier, obtain information about the configuration of said remote control unit and display an image of said remote control unit.21. A system comprising:a processor; a light detector coupled to said processor; an interface to receive signals from a remote control unit, said interface coupled to said processor, and a storage coupled to said processor, said storage storing instructions that enable said processor to display an image of a remote control unit on a tv screen, receive a control signal from the remote control unit, alter the image of the remote control unit in response to said control signal and enable the processor to selectively display that image depending on said light conditions of the room. 22. The system of claim 21 wherein said interface is a wireless interface.23. The system of claim 21 wherein said system is a set-top box.24. The system of claim 21 wherein said storage further stores instructions that enable the processor to identify a particular remote control unit, check a database of remote control units, and display an image of the identified remote control unit.25. The system of claim 21 wherein said storage stores instructions that enable said system to determine the nature of an input from said remote control unit.26. The system of claim 25 including a remote control having a sensor and a button, said sensor determining whether the button was pressed in one of two ways.27. The system of claim 26 wherein if said button is pressed in a first way, the image of the corresponding button on said image of said remote control unit is altered but the function normally activated by operation of said button is not activated. |
BACKGROUNDThis invention relates generally to remotely controlling appliances or computer systems, including television receivers.Remote control units may be utilized to control television receivers and other devices including computer systems and appliances without the necessity to walk over to the device to alter its settings. For example, infrared-based remote controls are commonly used with television receivers. Similarly, infrared remote controls are used with entertainment systems.In a variety of circumstances, the remote control may be utilized in a room that is relatively dark. Even in the case where the user is watching television, there may be insufficient light to be able to view the various buttons on the remote control unit.In devices called set-top boxes, the user may use a remote control unit to enter text displayed on a television receiver. That is, computer functions may be actually implemented using a set-top box controlled by the remote control unit and associated with the television receiver. Thus, the user may input relatively complex textual input commands to the set-top box through the remote control unit. These commands may appear on the television display. However, in many cases, it is awkward for the user to look downwardly at the remote control unit at the same time the user should be looking upwardly at the display to see the entries as they are displayed. For example, if the user is entering text through the remote control unit, it may work better to watch the display rather than to watch the information being typed into the remote control unit in accordance with touch-typing principles associated with conventional keyboards.Thus, there is a need for a better way to provide input commands using remote control units that facilitates data entry in a low light environment and that further facilitates the entry of more complex text input commands.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a front elevational view of a processor-based system, in accordance with one embodiment of the present invention;FIG. 2 is a front elevational view of the television receiver shown in FIG. 1 in operation, in accordance with one embodiment of the present invention;FIG. 3 is a flow chart for software resident on the set-top box shown in FIG. 1, in accordance with one embodiment of the present invention;FIG. 3A is a flow chart for software resident on the set-top box shown in FIG. 1, in accordance with another embodiment of the present invention;FIG. 4 is a flow chart for software resident on the set-top box, in accordance with one embodiment of the present invention;FIG. 5 is flow chart for software also resident on the set-top box, in accordance with one embodiment of the present invention;FIG. 6 is a partial, greatly enlarged cross-sectional view of one of the buttons on the remote control unit shown in FIG. 1, in accordance with one embodiment of the present invention;FIG. 7 is a partial, top plan view of a portion of the contact pad under the button shown in FIG. 6, in accordance with one embodiment of the present invention; andFIG. 8 is a block diagram of the system shown in FIG. 1, in accordance with one embodiment of the present invention.DETAILED DESCRIPTIONA processor-based system 10, shown in FIG. 1, may be implemented as a set-top box, in accordance with one embodiment of the present invention. 
However, other processor-based systems including desktop computer systems, laptop computer systems, and appliances including processor-based systems such as television receivers may implement the present invention.In FIG. 1, a set-top box 12 sits atop a television receiver 14. Both the set-top box 12 and television receiver 14 are controlled by a remote control unit 22. In accordance with one embodiment of the present invention, an infrared interface may be implemented between a transceiver 26 on the remote control unit 22 and transceivers 18 and 16 associated with the television receiver 14 and set-top box 12 respectively. However, other wireless protocols may be utilized including radio frequency protocols and a Bluetooth protocol.As used herein, a remote control unit may be any wireless peripheral that operates a processor-based system including the type of remote control unit commonly associated with television receivers. A wireless keyboard, wireless mouse, and wireless tablet are additional examples of remote control units.The remote control unit 22 may include a variety of buttons indicated at 23, 24 and 27 for generating wireless signals from the transceiver 26. These signals are received by the transceivers 16 and 18 to control the operation of the set-top box 12 or the television receiver 14 respectively. The resulting control signals produced by the transceiver 26 may be simple television control signals, such as "adjust volume", "change channel", or "turn the television receiver 14 on" or "off". In addition, the commands may be complex textual inputs to control the set-top box 12 to implement conventional computer functions, including navigating the Internet, initiating a purchase transaction, or providing information to a variety of forms, as a few examples.Referring next to FIG. 2, a graphical user interface 28 may be displayed on the television screen 20 of the television receiver 14, in accordance with one embodiment of the present invention. In this case, an electronic programming guide software application may be implemented in which a variety of television channel indicators 38 and associated program information 36 may be displayed in a grid display on the television receiver 14. In a conventional electronic programming guide, a plurality of television programs (such as the Star Trek programs indicated) are associated with channel indications 38 and time indications 36. By mouse clicking on one of the program entries, the user may select a program for automatic television tuning. Conventionally, the input commands to select a particular program may be provided from the remote control unit 22, shown in FIG. 1, using the navigation keys 23. By pressing on the navigator button 23, the position of the highlighting may be moved, and by pressing the select button 27, a particular highlighted entry may be selected.An image 34 of the remote control unit 22 may be displayed in association with the graphical user interface 28. The image 34 may reflect the button arrangement of the actual remote control unit 22. When the user selects a button on the real remote control unit 22, such as the button 23, the corresponding button image 32 is highlighted as indicated in FIG. 2.In fact, before actually selecting a particular button on the remote control unit 22, the user may lightly depress a particular button such as the button 23 causing the associated button image 32 on the remote image 34 to be highlighted. 
This facilitates selecting the correct button on the remote control unit 22 by allowing the user to view the intended selection on the image 34 before it is finally entered. In the example shown in FIG. 2, the user selects the UP button 23 causing the button image 32 to be highlighted. Thus, the cursor selection may be moved from the highlighted Star Trek listing 39 on channel 4 upwardly to the Star Trek program on channel 3.Referring to FIG. 3, the software 40, in accordance with one embodiment of the present invention, for implementing the heads-up display using the remote control unit image 34 begins by pre-sensing a button press on the remote control unit 22 as indicated in block 42. A wide variety of sensors may be provided to detect a partial depression of a button, such as the button 23. In response to a depression of the button 23 for a short time, the user sees the button image 32, highlighted as indicated in block 44. In contrast, in response to a longer depression of a particular button, such as the button 23, the entry may be immediately implemented. However, the corresponding button image, such as the image 32, may also be highlighted at the same time, in some cases.The short depression of a button, such as the button 23, begins a pre-sense time out, as indicated in block 46. If the button remains depressed for a sufficient time as determined in diamond 48, the button is operated for its normal function as indicated in block 54. If the button has been depressed for only a short time period as determined in diamond 48, the button function is not executed in one embodiment and only the button image is highlighted.Referring to FIG. 3A, in accordance with another embodiment of the present invention, in response to a light touch of a button such as the button 23, the button image may be highlighted on the display and in response to a more forceful depression, the button function may be actually implemented. As indicated in block 52, if the button on a remote control unit is merely touched lightly, the button may be detected as a light touch as indicated in block 53. In such case, the button image may be highlighted on the display screen as indicated in block 55.A check at diamond 56 determines whether the button has actually been depressed fully. If so, the remote control unit action associated with the button is executed and the icon disappears as indicated in block 58. Otherwise, the image is highlighted on screen as indicated in block 55 but no other action may be implemented in one embodiment of the present invention.Thus, the indication of which button has been pressed may be determined in a number of different ways. In one embodiment, a light touch may actuate the button image without operating the function and in another embodiment, a short touch may implement the highlighting without executing the function. This allows the user to determine which button to press before actually implementing an operation.In addition to highlighting the selected button image on the remote control unit image 34, a sound may be made to indicate a light touch. A different sound may be made to indicate a full depression of a button, such as button 23, in some embodiments of the present invention.The initial registration of a particular remote control unit 22 with a particular processor-based system 10 may be implemented using software 60, illustrated in FIG. 4, in accordance with one embodiment of the present invention. 
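The two-stage button handling of FIGS. 3 and 3A described above can be summarized in a short sketch before turning to the registration flow of FIG. 4. The pre-sense timeout value, the full_contact flag (standing in for the center-pin contact of FIG. 6), and the callback names are assumptions introduced for illustration.

```python
import time

PRE_SENSE_TIMEOUT = 0.5   # seconds; illustrative threshold, not from the text

class RemoteImageUI:
    """Highlights a button on the on-screen remote image (34) and, only on a
    full press, executes the button's normal function."""

    def __init__(self, actions):
        self.actions = actions                 # e.g., {"UP": move_highlight_up}

    def highlight(self, button):
        print(f"highlighting image of '{button}' button")

    def on_button_event(self, button, pressed_at, released_at, full_contact):
        self.highlight(button)                 # always show which button it was
        held = released_at - pressed_at
        # FIG. 3A variant: the center pin only closes on a firm press.
        # FIG. 3 variant: a light tap does not outlast the pre-sense timeout.
        if full_contact or held >= PRE_SENSE_TIMEOUT:
            self.actions[button]()             # execute the normal function
        # otherwise: highlight only, no function is executed

ui = RemoteImageUI({"UP": lambda: print("cursor moved up")})
now = time.monotonic()
ui.on_button_event("UP", now, now + 0.1, full_contact=False)  # light tap: highlight only
ui.on_button_event("UP", now, now + 0.8, full_contact=True)   # full press: executes
```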
It is not necessary that a particular remote control unit 22 be pre-assigned to a particular processor-based system 10 in order to accurately display an image of the remote control unit 22. Instead, a database may be utilized on the processor-based system 10 to recognize a particular remote control unit 22 and to display its appropriate image from a database of remote control unit identifiers and corresponding images.Initially, a check at diamond 62 determines whether any button on the remote control unit 22 has been activated. If so, the processor-based system 10 receives a remote message including a remote control unit identifier as indicated in block 66. The processor-based system 10 checks the identifier of the remote control unit 22 against a database of remote control units (block 68). An identifier of the remote control unit 22 may be automatically transmitted from the remote control unit 22 with every depression of a button on the remote control unit 22, in one embodiment of the present invention. Alternatively, the identifier may only be transmitted one time after power up.A check at diamond 70 determines whether the received identifier is one of the known remote control units 22 that may be accounted for in a database on the processor-based system 10. If not, the processor-based system 10 may access the Internet for configuration data for the indicated identifier as indicated in block 76. A search may be implemented through known websites to identify the identifier of the remote control unit 22 as indicated in block 78. Alternatively, a single web site may be accessed that has information for a variety of remote control units. If the identifier is found over the Internet, as determined in diamond 80, the remote identifier configuration file is downloaded as indicated in block 82. Otherwise, an error message may be generated as indicated in block 84.If the identifier is one that is already in the existing database associated with the processor-based system 10 as determined in diamond 70, the information may be loaded from the internal configuration database as indicated in block 72. Once an image 34 has been displayed for the appropriate remote control unit 22, normal operation may be implemented as indicated in block 74.Referring next to FIG. 5, in accordance with one embodiment of the present invention, the generation of the image 34 may be selectively implemented based on the ambient light conditions. The light level software 120 begins by determining whether the light level has fallen below a programmed threshold, as indicated in diamond 122. This determination may be made by a light level indicator associated with the processor-based system 10. If the light level is above the programmed threshold, the operation proceeds as is traditionally associated with remote control unit 22 without the use of the image 34, as shown in block 128.If the light level is low, the low light level operations are implemented using the image 34 as indicated in block 124 and as described previously herein. A time check at diamond 126 determines whether a predetermined time period has passed. If so, the system 10 rechecks to determine what is the current light level, at diamond 122. Otherwise, a timer may be incremented at block 127 and the flow iterates.Referring to FIG. 6, each button, such as the button 23, of the remote control unit 22 may be a two-way acting button. 
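As a rough illustration of the registration flow of FIG. 4 just described, the sketch below looks up a received remote control unit identifier in a local database and falls back to an online lookup when the identifier is unknown. The database contents, function names, and return values are hypothetical placeholders, not part of the described system.

LOCAL_REMOTE_DB = {
    "remote-1234": {"image": "remote_1234.png", "buttons": ["up", "down", "select"]},
}

def fetch_config_from_internet(remote_id):
    # Placeholder for blocks 76-82: search known web sites for a configuration file.
    return None  # pretend nothing was found for this identifier

def load_remote_config(remote_id):
    # Diamond 70: is the identifier already in the internal configuration database?
    config = LOCAL_REMOTE_DB.get(remote_id)
    if config is not None:
        return config                        # block 72: load from the internal database
    config = fetch_config_from_internet(remote_id)
    if config is not None:
        LOCAL_REMOTE_DB[remote_id] = config  # block 82: keep the downloaded configuration file
        return config
    raise LookupError(f"no configuration found for {remote_id}")  # block 84: error message

# The identifier may arrive with every button press (block 66).
print(load_remote_config("remote-1234"))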
When the user presses on a dome 150, with a light pressure, the resulting distributed force is applied to a portion 152 around the periphery of the dome 150 via a connecting section 164. Thus, a peripheral area of a conductive layer 156 is pressed, compressing flexible separators 160 over a resistive layer 158. As a result, the peripheral areas, indicated at 172 in FIG. 7, around a central area 174 are exposed to a light pressure, changing the distance between layers and thereby causing a detectable characteristic change, such as a resistance change. This activation of the areas 172 may be recognized as a partial button depression for use in connection with the software 40 shown in FIG. 3.The natural resiliency of the section 164 operates as an effective absorbing spring that prevents the actuation of the central pin 154 in response to light finger pressure. Thus, the spring action of the section 164 is sufficient to prevent the pin 154 from contacting the conductive layer 156 until the finger pressure exceeds a predetermined threshold.If the pin 154 contacts the conductive layer 156, then the region 174 may be activated as well as the region 172. This in turn may be recognized as a full button depression by the software 40 illustrated for example in FIG. 3.Turning next to FIG. 8, the processor-based system 10, in an embodiment in which the system 10 is a set-top box 12, may include a processor 212 coupled to a north bridge 216 in one embodiment. The north bridge 216 may couple a system memory 220 and a decoder 234. The decoder 234 may receive demodulated, tuned signals from a demodulator and tuner 237. The tuner 237 for example, may be coupled to a source of television signals such as a cable connection, a satellite connection or the Internet. The decoder 234, in accordance with one embodiment of the present invention, may be a Motion Picture Experts Group (MPEG) compliant decoder. The decoder 234 may provide decoded television signals directly to the television receiver 14. It may also provide video signals and decoded audio signals to a bus 236.The bridge 216 may couple a south bridge 238. The south bridge may be coupled to a hard disk drive 242 and a compact disk drive 244. The software 40, 60 and 120 may be conventionally stored on the hard drive 242 for execution from the system memory 220.The south bridge 238 may couple audio signals to a coder-decoder (codec) 248 that drives amplifiers and speakers 250. The speakers 250 may be associated in some embodiments with the television receiver 14.A serial input/output (SIO) device 254 may also be coupled to the bus 236. It may in turn be coupled to a light level detector 256 and the wireless transceiver 16. The wireless transceiver 16 enables communications with the remote control unit 22. The remote control unit 22 may include the interface 26, a controller 260, and the keypad 262 that includes the buttons 23, 24 and 27.While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention. |
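A compact way to express the two-region sensing of FIGS. 6 and 7 described above is the following sketch, which maps the peripheral-area and central-area signals to partial and full depressions. The threshold and the normalized input are assumptions made for illustration; the specification does not give numeric values.

def classify_press(peripheral_delta, central_contact, threshold=0.05):
    # peripheral_delta: assumed normalized change in the sensed characteristic (e.g., resistance)
    #                   of the peripheral areas 172.
    # central_contact:  True if the pin 154 has reached the conductive layer 156,
    #                   activating the central area 174.
    if central_contact:
        return "full depression"       # regions 172 and 174 active
    if peripheral_delta >= threshold:
        return "partial depression"    # light pressure on the peripheral areas only
    return "no press"

assert classify_press(0.10, False) == "partial depression"
assert classify_press(0.12, True) == "full depression"
assert classify_press(0.01, False) == "no press"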
Embodiments may relate to a multi-chip microelectronic package that includes a first die and a second die coupled to a package substrate. The first and second dies may have respective radiative elements that are communicatively coupled with one another such that they may communicate via an electromagnetic signal with a frequency at or above approximately 20 gigahertz (GHz). Other embodiments may be described or claimed. |
A multi-chip microelectronic package comprising:a package substrate;a first die physically coupled with the package substrate, wherein the first die includes a first radiative element; anda second die coupled with the package substrate, wherein the second die has a second radiative element that is communicatively coupled with the first radiative element such that the first die may transmit an electromagnetic signal with a frequency of at least 20 gigahertz (GHz) from the first radiative element to the second radiative element.The multi-chip microelectronic package of claim 1, wherein the first die is directly physically coupled with the package substrate, and the second die is directly physically coupled with the first die.The multi-chip microelectronic package of claim 1 or 2, wherein the first radiative element is located at an outer surface of the first die.The multi-chip microelectronic package of claim 1, 2 or 3, wherein the first radiative element is located between two dielectric layers of the first die.The multi-chip microelectronic package of any of claims 1-4, wherein the first radiative element is not physically coupled with the second radiative element.The multi-chip microelectronic package of any of claims 1-5, wherein a face of the first die that is adjacent to the second die has a non-planar profile.The multi-chip microelectronic package of claim 6, wherein a face of the second die that is adjacent to the face of the first die has a non-planar profile.The multi-chip microelectronic package of claim 6 or 7, wherein the face of the first die includes one or more cavities; andoptionally, wherein the face of the second die includes one or more protrusions that are to mate with the one or more cavities of the first die.An electronic device comprising:a first die with a first radiative element that is to transmit an electromagnetic signal with a frequency of at least 20 gigahertz (GHz); anda second die with a second radiative element, wherein the second die is positioned adjacent to the first die and wherein the second radiative element is to receive the electromagnetic signal from the first die.The electronic device of claim 9, wherein the first die and the second die are coupled with a package substrate.The electronic device of claim 9 or 10, wherein the first die is coupled with the second die, and the second die is coupled with a package substrate.The electronic device of any of claims 9-11, wherein the first radiative element and the second radiative element are a directional coupler; oroptionally, wherein the first radiative element and the second radiative element are a capacitor or a transformer; oroptionally, wherein the first radiative element is a first antenna and the second radiative element is a second antenna.A method of manufacturing a multi-chip microelectronic package, the method comprising:coupling a first die to a package substrate, wherein the first die has a first radiative element that is to emit an electromagnetic signal with a frequency of at least 20 gigahertz (GHz); andcoupling a second die to the package substrate adjacent to the first die, wherein the second die has a second radiative element that is to receive the electromagnetic signal, and wherein the first radiative element and the second radiative element are not directly physically coupled with one another.The method of claim 13, wherein:coupling the first die to the package substrate includes physically coupling the first die directly to the package substrate; andcoupling the second die to the 
package substrate includes physically coupling the second die directly to the package substrate.The method of claim 13 or 14, wherein coupling the second die to the package substrate includes physically coupling the second die to the first die such that the first die is positioned between the package substrate and the second die; oroptionally, wherein the electromagnetic signal has a frequency of at least 300 GHz; oroptionally, wherein the electromagnetic signal has a frequency of at least 1 terahertz (THz). |
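Because the claims distinguish signals of at least 20 GHz, at least 300 GHz, and at least 1 THz, a small helper can make the approximate bands used in the description concrete. The labels below simply mirror the ranges stated in the text; the function itself is only an illustrative aid.

def classify_high_frequency(freq_hz):
    # Label a carrier frequency using the approximate bands described in the detailed description.
    ghz, thz = 1e9, 1e12
    if freq_hz < 20 * ghz:
        return "below the ~20 GHz high-frequency range"
    if freq_hz <= 300 * ghz:
        return "mmWave (~20 GHz to ~300 GHz)"
    if freq_hz <= 10 * thz:
        return "THz-wave (above ~300 GHz)"
    return "above the ~10 THz range generally discussed"

print(classify_high_frequency(60e9))   # mmWave
print(classify_high_frequency(1e12))   # THz-wave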
BackgroundFlip-chip packaging may be considered a powerful technology for constructing high-performance multi-chip packages (MCPs) with high die-to-die bandwidth density.Brief Description of the DrawingsFigure 1 illustrates an example MCP, in accordance with embodiments herein.Figure 2 illustrates an alternative example MCP, in accordance with embodiments herein.Figure 3 illustrates an alternative example MCP, in accordance with embodiments hereinFigure 4 illustrates a top-down cutaway portion of an example MCP with a directional coupler, in accordance with embodiments herein.Figure 5 illustrates an example cutaway view of a portion of an example MCP, in accordance with embodiments herein.Figure 6 illustrates an example technique for generating an MCP, in accordance with embodiments herein.Figure 7 illustrates an example device that may use various embodiments herein, in accordance with various embodiments.Detailed DescriptionIn the following detailed description, reference is made to the accompanying drawings which form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments in which the subject matter of the present disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.For the purposes of the present disclosure, the phrase "A or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).The description may use perspective-based descriptions such as top/bottom, in/out, over/under, and the like. Such descriptions are merely used to facilitate the discussion and are not intended to restrict the application of embodiments described herein to any particular orientation.The description may use the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.The term "coupled with," along with its derivatives, may be used herein. "Coupled" may mean one or more of the following. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements indirectly contact each other, but yet still cooperate or interact with each other, and may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. 
The term "directly coupled" may mean that two or more elements are in direct contact. In various embodiments, the phrase "a first feature formed, deposited, or otherwise disposed on a second feature," may mean that the first feature is formed, deposited, or disposed over the second feature, and at least a part of the first feature may be in direct contact (e.g., direct physical or electrical contact) or indirect contact (e.g., having one or more other features between the first feature and the second feature) with at least a part of the second feature. Various operations may be described as multiple discrete operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed to imply that these operations are necessarily order dependent. Embodiments herein may be described with respect to various Figures. Unless explicitly stated, the dimensions of the Figures are intended to be simplified illustrative examples, rather than depictions of relative dimensions. For example, various lengths/widths/heights of elements in the Figures may not be drawn to scale unless indicated otherwise. Additionally, some schematic illustrations of example structures of various devices and assemblies described herein may be shown with precise right angles and straight lines, but it is to be understood that such schematic illustrations may not reflect real-life process limitations, which may cause the features to not look so "ideal" when any of the structures described herein are examined, e.g., using scanning electron microscopy (SEM) images or transmission electron microscope (TEM) images. In such images of real structures, possible processing defects could also be visible, e.g., not-perfectly straight edges of materials, tapered vias or other openings, inadvertent rounding of corners or variations in thicknesses of different material layers, occasional screw, edge, or combination dislocations within the crystalline region, and/or occasional dislocation defects of single atoms or clusters of atoms. There may be other defects not listed here but that are common within the field of device fabrication. As noted above, flip-chip packaging may be considered a powerful technology for constructing high-performance MCPs with high die-to-die bandwidth density. Embodiments herein may relate to leveraging emerging high-frequency transceiver circuitry to increase MCP die-to-die bandwidth density without adding routing layers to the package substrate. As used herein, high-frequency signals may relate to signals with a frequency of approximately 20 gigahertz (GHz) or above. For example, some high-frequency signals may have a frequency between approximately 20 GHz and approximately 300 GHz, and be considered millimeter-wave (mmWave) signals. Other high-frequency signals may have a frequency greater than approximately 300 GHz, for example on the order of 1 terahertz (THz) or above, and be considered THz-wave signals. 
Some high-frequency signals may generally have a frequency between approximately 20 GHz and approximately 10 THz, though other high-frequency signals may have a greater or lower frequency in some embodiments.More specifically, in some embodiments herein, flip-chip dies on an MCP may be connected edge-to-edge by placing a radiative element on each die, and bringing the die edges in close proximity to one another so that the radiative elements can communicate with one another without the dies (or the radiative elements) physically touching. Examples of radiative elements may include putting a half of a capacitor, transformer, directional coupler, antennas, wave launchers, or some other type of radiative element pairing. In this manner, without making galvanic contact, signaling may be accomplished across the gap between the dies by using high-frequency transceiver circuits on the dies that are able to transmit the high-frequency signal through a radiative element of one die, where it may be received by the radiative element of another die. In some embodiments, non-rectangular dies may be used to increase available die periphery and isolation between signaling lanes. Some embodiments may be used in side-by-side dies, whereas other embodiments may be used in stacked dies.Generally, embodiments may open up new three-dimensional (3-D) pathways for die-to-die signaling in addition to what may be available in conventional flip-chip packaging. Thus, die-to-die bandwidth density may be increased without adding routing layers to the package substrate. Additionally, not making galvanic contact edge-to-edge (e.g., such that the dies directly contact one another in the MCP) may facilitate assembly of the MCP. Additionally, some embodiments may include non-rectangular dies with meandering edges, as will be described in greater detail below, which may enable increased isolation between signaling lanes.Figure 1 illustrates an example MCP, in accordance with embodiments herein. Specifically, Figure 1 may depict an MCP with two flip-chip dies that have edges that are in close proximity to one another, but are not touching. Waveguides such as striplines may be routed on each die to form an electromagnetic coupling region that extends from one die to the other. In other words, the striplines may act as radiative elements that may, together, form a directional coupler. However, as will be discussed below, in other embodiments other radiative elements may be used.More specifically, Figure 1 depicts an MCP 100. The MCP 100 may include two dies 105. The dies 105 may be considered to be "flip-chip" type dies based on their method of attachment to a package substrate 130, however in other embodiments a different type of attachment mechanism for the die may be used rather than the flip-chip mechanism used for one or both of the dies 105. In embodiments, the one or both of the dies 105 may be a processor such as a central processing unit (CPU), a general processing unit (GPU), a core of a distributed processor, or some other type of processor. Additionally or alternatively, one or both of the dies 105 may be a memory such as a non-volatile memory (NVM), a flash memory, a double data rate (DDR) memory, a random access memory (RAM), or some other type of memory. 
Additionally or alternatively, one or both of the dies 105 may be or include RF circuitry designed to generate or process one or more signals in accordance with a wireless standard such as a second generation (2G) standard, a third generation (3G) standard, a fourth generation (4G) standard, a fifth generation (5G) standard, a Wi-Fi standard, a Bluetooth standard, a WiGig standard, or some other wireless standard known or hereinafter developed. In other embodiments, one or both of the dies 105 may be some other type of die.The package substrate 130 may be a cored or coreless substrate, and may include one or more dielectric layers of an organic or inorganic material. For example, the package substrate 130 may be made of, or comprise, one or more layers of a material such as a build-up film (BUF). In some embodiments, the package substrate 130 may also include one or more conductive elements such as vias, traces, pads, etc. which may not be shown in Figure 1 for the sake of clarity of the Figure. Specifically, the conductive elements may route one or more data or power signals between different parts of the package substrate 130, different elements coupled with the package substrate 130, or elements within the package substrate 130. In some embodiments, the package substrate 130 may include one or more additional elements such as a die, passive elements like a resistor or capacitor, or some other element either coupled with or within the package substrate 130. These additional elements are likewise not shown for the sake of clarity of Figure 1 .The dies 105 may be coupled with the package substrate 130 by interconnects which may include pads 120 and solder bumps 125. Specifically, as shown, both the dies 105 and the package substrate 130 may include pads 120. The pads 120 may be formed of a conductive material such as copper, gold, or some other conductive material or combination of conductive materials. Although the pads 120 are depicted as being generally flush with the surface of the dies 105 and the package substrate 130, in some embodiments the pads 120 may not be flush with the surface of one or both of the dies 105, the package substrate 130, or some combination thereof. For example, in some embodiments the pads 120 may at least partially protrude from the face of a die 105 or the package substrate 130.Similarly, the solder bumps 125 may be formed of a solder material which may both physically and communicatively couple the pads 120 to one another. It will be understood, however, that in some embodiments one or more of the pads 120, the solder bumps 125, or some combination thereof may be replaced by a different type of interconnect. For example, in some embodiments the solder bumps 125 may be elements of a ball grid array (BGA). However, in other embodiments the solder bumps 125 may be replaced by pins of a pin grid array (PGA), elements of a land grid array (LGA), a socket mechanism, or some other type of interconnect.The dies 105 may each include a radiative element 115 which may establish a communication path 135. Specifically, the radiative element 115 may be configured to electromagnetically transmit a high-frequency signal from one radiative element 115 of one die 105 to another radiative element 115 of another die. 
In some embodiments, the radiative elements may each be a plate or other element of a capacitor or portions of a transformer such that a charge supplied to one radiative element of one die 105 creates a corresponding change in charge of the radiative element of the other die 105. In some embodiments, the radiative elements 115 may be a stripline as depicted in Figure 1 . Specifically, the radiative elements 115 may be a trace that is nestled between two ground traces 110 of the die. In this embodiment, the radiative elements 115 may, together, form a directional coupler as will be described in greater detail below.Figure 2 illustrates an alternative example MCP 200, in accordance with embodiments herein. Generally, the MCP 200 may have elements similar to those of MCP 100. Specifically, the MCP 200 may include dies 205, package substrate 230, pads 220, and solder bumps 225, which may be respectively similar to, and share characteristics of, dies 105, package substrate 130, pads 120, and solder bumps 125.The dies 205 may further have ground planes 210 and radiative elements 215, which may be similar to, and share one or more characteristics of, ground planes 110 and radiative elements 115. However, as can be seen, the radiative elements 215 may be located at an external portion of the dies 205. In these embodiments, the radiative elements 215 may be microstrips rather than striplines, and the microstrips may together form a directional coupler. However, it will be understood that in other embodiments the radiative elements 215 may be halves of a capacitor, elements of a transformer, antennae, etc. Generally, the radiative elements 215 may establish a communication path 235, which may be similar to, and share one or more characteristics of, communication path 135.Figure 3 illustrates an alternative example MCP 300, in accordance with embodiments herein. Specifically, the MCP 300 may include a die 305, package substrate 330, pads 320, and solder bumps 325 which may be respectively similar to, and share one or more characteristics of, die 105, package substrate 130, pads 120, and solder bumps 125. Die 305 may additionally include a ground plane 310 and a radiative element 315 which may be similar to, and share one or more characteristics of, ground plane 110 and radiative element 115.The MCP 300 may include a second die 307 which may be generally similar to, and share one or more characteristics of, die 105. The die 307 may also include a ground plane 310 and a radiative element 315. The radiative elements 315 may form a communication path 335 which may be similar to, and share one or more characteristics of, communication path 135.As can be seen in Figure 3 , rather than both dies 305 and 307 being coupled directly with the package substrate 330, die 307 may be coupled with die 305 such that die 305 is positioned at least partially between die 307 and the package substrate 330. Specifically, dies 305 and 307 may include pads 322, which may be similar to, and share one or more characteristics of, pads 120. Similarly, the dies 305 and 307 may be coupled by solder bumps 327, which may be similar to, and share one or more characteristics of, solder bumps 125.It will be understood that the various embodiments of Figures 1-3 are intended as examples of concepts, and other embodiments may include one or more variations from those shown in Figures 1-3 . 
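For the capacitor-plate style of radiative element mentioned above, a rough first-order feel for the coupling is the parallel-plate capacitance across the die-to-die gap. The geometry, dielectric constant, and carrier frequency below are assumed purely for illustration; fringing fields are ignored, and a real design would rely on full-wave simulation.

import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_coupling(area_m2, gap_m, eps_r=1.0, freq_hz=60e9):
    # Parallel-plate estimate: C = eps0 * eps_r * A / d, |Zc| = 1 / (2 * pi * f * C).
    c = EPS0 * eps_r * area_m2 / gap_m
    z = 1.0 / (2.0 * math.pi * freq_hz * c)
    return c, z

# Assumed example: 100 um x 100 um facing plates, 20 um gap, 60 GHz carrier.
c, z = plate_coupling(area_m2=100e-6 * 100e-6, gap_m=20e-6)
print(f"C ~ {c * 1e15:.1f} fF, |Zc| ~ {z:.0f} ohm at 60 GHz")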
For example, with respect to Figure 3 , although the solder bumps 327 are depicted as having a similar size and pitch to solder bumps 325, in other embodiments the solder bumps 327 may be larger or smaller, or have a greater or smaller pitch, than solder bumps 325. Additionally, in some embodiments the die 307 may be offset from die 305 rather than directly stacked on top of die 305 as shown. In some embodiments, the die 307 may be physically or communicatively coupled directly to both the die 305 and the package substrate 330 (for example by having a non-rectangular shape, extended interconnects, etc.)Similarly, although certain elements may be shown as generally rectangular or flush with other elements (e.g., how various of the pads 120/220/etc. are shown as flush with the sides of the various dies 105/205/etc.), in other embodiments elements may at least partially protrude from, or be located fully on the exterior of, the elements in which they are shown as embedded. As another example, in some embodiments the radiative elements 115/215/etc. or the ground planes 110/210/etc. may at least partially protrude from, or be located fully on the exterior of, dies 105/205/etc.Additionally, it will be understood that the number, location, sizes, etc. of certain elements depicted in Figures 1-3 is intended as an example. For example, there may be more or fewer interconnects, radiative elements, ground planes, etc. than shown. In some embodiments different elements may be different sizes than depicted with respect to other elements of the Figures. The locations of certain elements such as the interconnects may be different in different embodiments. In some embodiments the microstrips such as those depicted in Figure 2 may be located at a different face of the die than is depicted in Figure 2 (e.g., at a face of the die 205 that is not adjacent to the package substrate 230). Other variations may be present in other embodiments.Figure 4 depicts an example of how a directional coupler may be used for contactless die-to-die signaling. One port on each die may be connected to a mmWave or a THz-wave transceiver, transmitter, or receiver. Data may originate from a transmitter (or a transmitter element of a transceiver), and arrive at a receiver (or a receiver element of a transceiver). The other port on each die may be terminated on-die, e.g., using a thin-film resistor made of polysilicon.Generally, Figure 4 may be considered to illustrate a top-down cutaway portion of an example MCP 400 with a directional coupler, in accordance with embodiments herein. Such an MCP 400 may be similar to, for example, MCP 100, and the view of Figure 4 may be along line A-A of Figure 1 .The MCP 400 may include two dies 405, which may be similar to, and share one or more characteristics of, dies 105. The dies 405 may have radiative elements 415, which may be similar to, and share one or more characteristics of, radiative elements 115. As depicted in Figure 4 , the radiative elements 415 may be striplines. Specifically, the radiative elements 415 may be waveguides that are embedded between two layers of a dielectric material of the dies 405. It will be understood, however, that in other embodiments one or both of the radiative elements 415 may be a microstrip such that the radiative element 415 is only coupled to the dielectric material of the dies 405 on one side of the radiative element 415.The radiative elements 415 may be coupled with a transceiver 408. 
In embodiments, the transceiver 408 may have transmitter functionality that is to generate and transmit a high-frequency signal along the radiative element 415. Similarly, the transceiver 408 may have receiver functionality that is to identify and process a high-frequency signal received from a radiative element 415. In some embodiments, the transceiver 408 may have both transmitter and receiver functionality, whereas in other embodiments the transceiver 408 may not have transmitter functionality (i.e., it may be a "receiver") or it may not have receiver functionality (i.e., it may be a "transmitter"). The radiative elements 415 may also be coupled with a termination 406. The termination 406 may be, for example, a thin-film resistor or some other termination. In some embodiments, the termination 406 may be made of polysilicon or some other material. In operation, a directional coupler may operate such that a signal generated in one arm of the directional coupler (e.g., a radiative element 415 of one die 405) may cause a similar signal in the other arm of the directional coupler (e.g., the radiative element 415 of the other die 405). Therefore, a transceiver 408 may transmit a high-frequency signal along one radiative element 415. The radiative elements 415 may together form a communication path 435, which may be similar to, and share one or more characteristics of, communication path 135. Therefore, the high-frequency signal may be picked up by the radiative element 415 of the other die 405, and then be communicated to a transceiver 408 for identification and processing. In this manner, the dies 405 may be able to communicate with one another using high-frequency signals, even if the dies 405 are adjacent to one another but not directly touching. It will be noted that the embodiment of Figure 4 is intended as one example embodiment, and, as described above with respect to Figures 1-3, other embodiments may have more or fewer elements than depicted in Figure 4, or the elements may be different sizes, shapes, etc. Some embodiments may improve contactless signaling across die edges by increasing isolation between signaling lanes, and thereby reducing crosstalk. Mating meandering die edges obtained by, for example, laser dicing (including stealth dicing) or plasma dicing before or after grind may be used to increase the distance between neighboring coupling regions. This mating may increase isolation, reduce crosstalk, and lead to higher bandwidth density or better power efficiency. Figure 5 depicts an example cutaway view of a portion of an example MCP 500, in accordance with embodiments herein. Similarly to Figure 4, the cutaway view of Figure 5 may be along line A-A of Figure 1. Not every element of Figure 5 may be explicitly numbered, for the sake of clarity of the Figure. However, it will be understood that certain unnumbered elements may share characteristics with similar numbered elements. The MCP 500 may be generally similar to MCP 100. Specifically, the MCP 500 may include dies 505, which may be similar to, and share one or more characteristics of, dies 105. The dies 505 may have a plurality of radiative elements 515 which may be similar to, and share one or more characteristics of, radiative elements 115. The radiative elements 515 of one die 505 may form a communication path 535 with the radiative elements 515 of the other die 505. 
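Because the striplines of FIG. 4 together behave as a coupled-line directional coupler, the standard even-/odd-mode relations give a first-order sense of the coupling. The formulas below are textbook coupled-line theory rather than anything taken from the specification, and the example impedances are assumed.

import math

def coupled_line_coupling(z0e, z0o):
    # Midband voltage coupling factor of an ideal coupled-line coupler from its
    # even-mode (z0e) and odd-mode (z0o) impedances.
    k = (z0e - z0o) / (z0e + z0o)          # voltage coupling coefficient
    coupling_db = -20.0 * math.log10(k)    # e.g., k = 0.1 corresponds to a 20 dB coupler
    z0_system = math.sqrt(z0e * z0o)       # system impedance for a matched coupler
    return k, coupling_db, z0_system

# Assumed example values for illustration only.
k, c_db, z0 = coupled_line_coupling(z0e=62.0, z0o=40.0)
print(f"k ~ {k:.3f}, coupling ~ {c_db:.1f} dB, matched Z0 ~ {z0:.1f} ohm")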
The communication path 535 may be similar to, and share one or more characteristics of, communication path 135.However, as can be seen in Figure 5 , facing sides of the dies 505 may have "meandering" die edges. Specifically, each die may have one or more protrusions 555 and one or more cavities 560. The protrusion(s) 555 of one die 505 may be positioned within the cavity or cavities 560 of the other die 505. In this manner, the dielectric material of the dies 505 may help to isolate the communication paths 535 from one another, which may increase isolation and reduce crosstalk between the communication paths 535, thereby increasing bandwidth density or increasing power efficiency.Figure 6 illustrates an example technique for generating an MCP, in accordance with embodiments herein. Generally, Figure 6 may be described with respect to the MCP 100 of Figure 1 , however it will be understood that the description may be adapted, in whole or in part, with or without modification, to other embodiments of this disclosure.The technique may include coupling, at 605, a first die to a package substrate. The die may be similar to die 105, and the package substrate may be similar to package substrate 130. The die may have a first radiative element, which may be similar to radiative element 115. The radiative element may be configured to emit an electromagnetic signal with a frequency of at least 20 GHz. More generally, the electromagnetic signal may be a high-frequency electromagnetic signal as described above.The technique may further include coupling, at 610, a second die to the package substrate adjacent to the first die. The second die may likewise be similar to die 105. The second die may have a second radiative element which may be similar to radiative element 115. The second radiative element may receive the electromagnetic signal as described above. As can be seen in the Figures, the first radiative element may not be directly physically coupled with the second radiative element.It will be noted that, in other embodiments, the first die may be similar to die 305, and the second die may be similar to die 307 (or vice versa). In this embodiment, the dies may still be considered to be adjacent to one another, however one die may be located between the package substrate and the other die. It will also be understood that in some embodiments element 610 may occur prior to element 605, or elements 605 and 610 may occur concurrently with one another.Figure 7 illustrates an example computing device 1500 suitable for use with MCPs 100, 200, 300, 400, 500, or some other MCP that is in accordance with this disclosure. Specifically, in some embodiments, the computing device 1500 may include one or more of the MCPs therein.As shown, computing device 1500 may include one or more processors or processor cores 1502 and system memory 1504. For the purpose of this application, including the claims, the terms "processor" and "processor cores" may be considered synonymous, unless the context clearly requires otherwise. The processor 1502 may include any type of processors, such as a CPU, a microprocessor, and the like. The processor 1502 may be implemented as an integrated circuit having multi-cores, e.g., a multi-core microprocessor. The computing device 1500 may include mass storage devices 1506 (such as diskette, hard drive, volatile memory (e.g., DRAM, compact disc read-only memory (CD-ROM), digital versatile disk (DVD), and so forth)). 
In general, system memory 1504 and/or mass storage devices 1506 may be temporal and/or persistent storage of any type, including, but not limited to, volatile and non-volatile memory, optical, magnetic, and/or solid state mass storage, and so forth. Volatile memory may include, but is not limited to, static and/or DRAM. Non-volatile memory may include, but is not limited to, electrically erasable programmable read-only memory, phase change memory, resistive memory, and so forth. In some embodiments, one or both of the system memory 1504 or the mass storage device 1506 may include computational logic 1522, which may be configured to implement or perform, in whole or in part, one or more instructions that may be stored in the system memory 1504 or the mass storage device 1506. In other embodiments, the computational logic 1522 may be configured to perform a memory-related command such as a read or write command on the system memory 1504 or the mass storage device 1506.The computing device 1500 may further include input/output (I/O) devices 1508 (such as a display (e.g., a touchscreen display), keyboard, cursor control, remote control, gaming controller, image capture device, and so forth) and communication interfaces 1510 (such as network interface cards, modems, infrared receivers, radio receivers (e.g., Bluetooth), and so forth).The communication interfaces 1510 may include communication chips (not shown) that may be configured to operate the device 1500 in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or Long-Term Evolution (LTE) network. The communication chips may also be configured to operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chips may be configured to operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The communication interfaces 1510 may operate in accordance with other wireless protocols in other embodiments.The computing device 1500 may further include or be coupled with a power supply. The power supply may, for example, be a power supply that is internal to the computing device 1500 such as a battery. In other embodiments the power supply may be external to the computing device 1500. For example, the power supply may be an electrical source such as an electrical outlet, an external battery, or some other type of power supply. The power supply may be, for example alternating current (AC), direct current (DC) or some other type of power supply. The power supply may in some embodiments include one or more additional components such as an AC to DC convertor, one or more downconverters, one or more upconverters, transistors, resistors, capacitors, etc. that may be used, for example, to tune or alter the current or voltage of the power supply from one level to another level. 
In some embodiments the power supply may be configured to provide power to the computing device 1500 or one or more discrete components of the computing device 1500 such as the processor(s) 1502, mass storage 1506, I/O devices 1508, etc.The above-described computing device 1500 elements may be coupled to each other via system bus 1512, which may represent one or more buses. In the case of multiple buses, they may be bridged by one or more bus bridges (not shown). Each of these elements may perform its conventional functions known in the art. The various elements may be implemented by assembler instructions supported by processor(s) 1502 or high-level languages that may be compiled into such instructions.The permanent copy of the programming instructions may be placed into mass storage devices 1506 in the factory, or in the field, through, for example, a distribution medium (not shown), such as a compact disc (CD), or through communication interface 1510 (from a distribution server (not shown)). That is, one or more distribution media having an implementation of the agent program may be employed to distribute the agent and to program various computing devices.The number, capability, and/or capacity of the elements 1508, 1510, 1512 may vary, depending on whether computing device 1500 is used as a stationary computing device, such as a set-top box or desktop computer, or a mobile computing device, such as a tablet computing device, laptop computer, game console, or smartphone. Their constitutions are otherwise known, and accordingly will not be further described.In various implementations, the computing device 1500 may comprise one or more components of a data center, a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, or a digital camera. In further implementations, the computing device 1500 may be any other electronic device that processes data.In some embodiments, as noted above, computing device 1500 may include one or more of the various MCPs 100, 200, 300, 400, 500, or some other MCP that is in accordance with this disclosure. 
For example, a die such as dies 105, 205, 305, 405, 505, or some other die in accordance with this disclosure may, in some embodiments, be a processor 1502, memory 1504, or some other component of the computing device 1500.EXAMPLES OF VARIOUS EMBODIMENTSExample 1 includes a multi-chip microelectronic package comprising: a package substrate; a first die physically coupled with the package substrate, wherein the first die includes a first radiative element; and a second die coupled with the package substrate, wherein the second die has a second radiative element that is communicatively coupled with the first radiative element such that the first die may transmit an electromagnetic signal with a frequency of at least 20 gigahertz (GHz) from the first radiative element to the second radiative element.Example 2 includes the multi-chip microelectronic package of example 1, wherein the first die is directly physically coupled with the package substrate, and the second die is directly physically coupled with the first die.Example 3 includes the multi-chip microelectronic package of example 1, wherein the first radiative element is located at an outer surface of the first die.Example 4 includes the multi-chip microelectronic package of example 1, wherein the first radiative element is located between two dielectric layers of the first die.Example 5 includes the multi-chip microelectronic package of any of examples 1-4, wherein the first radiative element is not physically coupled with the second radiative element.Example 6 includes the multi-chip microelectronic package of any of examples 1-4, wherein a face of the first die that is adjacent to the second die has a non-planar profile.Example 7 includes the multi-chip microelectronic package of example 6, wherein a face of the second die that is adjacent to the face of the first die has a non-planar profile.Example 8 includes the multi-chip microelectronic package of example 6, wherein the face of the first die includes one or more cavities.Example 9 includes the multi-chip microelectronic package of example 8, wherein the face of the second die includes one or more protrusions that are to mate with the one or more cavities of the first die.Example 10 includes an electronic device comprising: a first die with a first radiative element that is to transmit an electromagnetic signal with a frequency of at least 20 gigahertz (GHz); and a second die with a second radiative element, wherein the second die is positioned adjacent to the first die and wherein the second radiative element is to receive the electromagnetic signal from the first die.Example 11 includes the electronic device of example 10, wherein the first die and the second die are coupled with a package substrate.Example 12 includes the electronic device of example 10, wherein the first die is coupled with the second die, and the second die is coupled with a package substrate.Example 13 includes the electronic device of any of examples 10-12, wherein the first radiative element and the second radiative element are a directional coupler.Example 14 includes the electronic device of any of examples 10-12, wherein the first radiative element and the second radiative element are a capacitor or a transformer.Example 15 includes the electronic device of any of examples 10-12, wherein the first radiative element is a first antenna and the second radiative element is a second antenna.Example 16 includes a method of manufacturing a multi-chip microelectronic package, the method comprising: coupling a 
first die to a package substrate, wherein the first die has a first radiative element that is to emit an electromagnetic signal with a frequency of at least 20 gigahertz (GHz); and coupling a second die to the package substrate adjacent to the first die, wherein the second die has a second radiative element that is to receive the electromagnetic signal, and wherein the first radiative element and the second radiative element are not directly physically coupled with one another.Example 17 includes the method of example 16, wherein: coupling the first die to the package substrate includes physically coupling the first die directly to the package substrate; and coupling the second die to the package substrate includes physically coupling the second die directly to the package substrate.Example 18 includes the method of example 16, wherein coupling the second die to the package substrate includes physically coupling the second die to the first die such that the first die is positioned between the package substrate and the second die.Example 19 includes the method of any of examples 16-18, wherein the electromagnetic signal has a frequency of at least 300 GHz.Example 20 includes the method of any of examples 16-18, wherein the electromagnetic signal has a frequency of at least 1 terahertz (THz).Various embodiments may include any suitable combination of the above-described embodiments including alternative (or) embodiments of embodiments that are described in conjunctive form (and) above (e.g., the "and" may be "and/or"). Furthermore, some embodiments may include one or more articles of manufacture (e.g., non-transitory computer-readable media) having instructions, stored thereon, that when executed result in actions of any of the above-described embodiments. Moreover, some embodiments may include apparatuses or systems having any suitable means for carrying out the various operations of the above-described embodiments.The above description of illustrated embodiments, including what is described in the Abstract, is not intended to be exhaustive or limiting as to the precise forms disclosed. While specific implementations of, and examples for, various embodiments or concepts are described herein for illustrative purposes, various equivalent modifications may be possible, as those skilled in the relevant art will recognize. These modifications may be made in light of the above detailed description, the Abstract, the Figures, or the claims. |
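As a back-of-the-envelope complement to the examples above, the sketch below checks whether a single contactless die-to-die lane closes with margin for an assumed transmit power, coupling loss, and receiver sensitivity. None of these numbers come from the disclosure; they are placeholders that only show the bookkeeping.

def lane_margin_db(tx_power_dbm, coupling_loss_db, other_losses_db, rx_sensitivity_dbm):
    # Received power minus receiver sensitivity for one die-to-die lane (all values in dB or dBm).
    rx_power_dbm = tx_power_dbm - coupling_loss_db - other_losses_db
    return rx_power_dbm - rx_sensitivity_dbm

# Assumed example: 0 dBm transmitter, 15 dB coupling loss across the gap,
# 3 dB of routing/termination losses, and a -30 dBm receiver sensitivity.
margin = lane_margin_db(0.0, 15.0, 3.0, -30.0)
print(f"link margin ~ {margin:.1f} dB ({'closes' if margin > 0 else 'does not close'})")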
In an aspect, a user equipment receives, via a microphone, an utterance from a user and determines, using radio frequency sensing, that the user performed a gesture while making the utterance. The user equipment determines an object associated with the gesture and transmits an enhanced directive to an application programming interface (API) of a smart assistant device. The enhanced directive is determined based on the object, the gesture, and the utterance. The enhanced directive causes the smart assistant device to perform an action. |
CLAIMSWhat is claimed is:1. A method for instructing a smart assistant device to perform an action, the method comprising: receiving, by a microphone, an utterance from a user; determining, using radio frequency sensing, that the user performed a gesture while making the utterance; determining an object associated with the gesture; and transmitting an enhanced directive to an application programming interface (API) of a smart assistance device, the enhanced directive based on the object, the gesture, and the utterance, wherein the enhanced directive causes the smart assistant device to perform an action.2. The method of claim 1, further comprising: determining that the utterance includes a trigger word.3. The method of claim 1, further comprising: determining a motion associated with the gesture; determining a direction of the motion; and identifying the object associated with the gesture based on the direction of the motion.4. The method of claim 1, further comprising: determining a motion associated with the gesture; determining a relative amount associated with the motion; converting the relative amount to an amount that is understood by the object; and including the amount in the enhanced directive.5. The method of claim 4, determining the relative amount associated with the motion comprises one of:
determining a first distance between a thumb and a forefinger of a hand of the user; determining a second distance between a left palm and a right palm of the user; or determining a third distance between a starting position of the gesture and an ending position of the gesture.6. The method of claim 1, further comprising: creating a link between a device and the smart assistant device.7. The method of claim 1, wherein: the gesture comprises pointing or gesturing towards the object; and the utterance comprises the action associated with the object.8. The method of claim 7, wherein the action comprises on, off, dim, brighten, increase, decrease, play, stop, pause, positioning of an audio object, or any combination thereof.9. The method of claim 1, wherein the object comprises: a light source, a media playback device, a set of blinds or shutters, a controllable object, a heating ventilation air conditioning (HVAC) controller, or any combination thereof.10. A device comprising: a memory; at least one transceiver; and at least one processor communicatively coupled to the memory and the at least one transceiver, the at least one processor configured to: receive, by a microphone, an utterance from a user; determine, using radio frequency sensing, that the user performed a gesture while making the utterance; determine an object associated with the gesture; and
transmit an enhanced directive to an application programming interface (API) of a smart assistance device, the enhanced directive based on the object, the gesture, and the utterance, wherein the enhanced directive causes the smart assistant device to perform an action.11. The device of claim 10, further comprising: determining that the utterance includes a trigger word.12. The device of claim 10, further comprising: determining a motion associated with the gesture; determining a direction of the motion; and identifying the object associated with the gesture based on the direction of the motion.13. The device of claim 10, further comprising: determining a motion associated with the gesture; determining a relative amount associated with the motion; converting the relative amount to an amount that is understood by the object; and including the amount in the enhanced directive.14. The device of claim 13, determining the relative amount associated with the motion comprises one of: determining a first distance between a thumb and a forefinger of a hand of the user; determining a second distance between a left palm and a right palm of the user; or determining a third distance between a starting position of the gesture and an ending position of the gesture.15. The device of claim 10, further comprising: creating a link between the device and the smart assistant device.16. The device of claim 10, wherein:
the gesture comprises pointing or gesturing towards the object; and the utterance comprises the action associated with the object.17. The device of claim 16, wherein the action comprises on, off, dim, brighten, increase, decrease, play, stop, pause, positioning of an audio object, or any combination thereof.18. The device of claim 10, wherein the object comprises: a light source, a media playback device, a set of blinds or shutters, a controllable object, a heating ventilation air conditioning (HVAC) controller, or any combination thereof.19. An apparatus comprising: means for receiving an utterance from a user; means for determining that the user performed a gesture while making the utterance; means for determining an object associated with the gesture; and means for transmitting an enhanced directive to an application programming interface (API) of a smart assistance device, the enhanced directive based on the object, the gesture, and the utterance, wherein the enhanced directive causes the smart assistant device to perform an action.20. The apparatus of claim 19, further comprising: means for determining that the utterance includes a trigger word.21. The apparatus of claim 19, further comprising: means for determining a motion associated with the gesture; means for determining a direction of the motion; and means for identifying the object associated with the gesture based on the direction of the motion.22. The apparatus of claim 19, further comprising:
means for determining a motion associated with the gesture; means for determining a relative amount associated with the motion; means for converting the relative amount to an amount that is understood by the object; and means for including the amount in the enhanced directive.23. The apparatus of claim 22, means for determining the relative amount associated with the motion comprises one of: means for determining a first distance between a thumb and a forefinger of a hand of the user; means for determining a second distance between a left palm and a right palm of the user; or means for determining a third distance between a starting position of the gesture and an ending position of the gesture.24. The apparatus of claim 19, further comprising: means for creating a link between a device and the smart assistant device.25. The apparatus of claim 19, wherein: the gesture comprises pointing or gesturing towards the object; and the utterance comprises the action associated with the object.26. The apparatus of claim 25, wherein the action comprises on, off, dim, brighten, increase, decrease, play, stop, pause, positioning of an audio object, or any combination thereof.27. The apparatus of claim 19, wherein the object comprises: a light source, a media playback device, a set of blinds or shutters, a controllable object, a heating ventilation air conditioning (HVAC) controller, or any combination thereof.
28. A non-transitory computer-readable storage medium to store instructions executable by one or more processors to: receive, by a microphone, an utterance from a user; determine, using radio frequency sensing, that the user performed a gesture while making the utterance; determine an object associated with the gesture; and transmit an enhanced directive to an application programming interface (API) of a smart assistance device, the enhanced directive based on the object, the gesture, and the utterance, wherein the enhanced directive causes the smart assistant device to perform an action.29. The non-transitory computer-readable storage medium of claim 28, further comprising: determining a motion associated with the gesture; determining a direction of the motion; and identifying the object associated with the gesture based on the direction of the motion.30. The non-transitory computer-readable storage medium of claim 28, wherein: the gesture comprises pointing or gesturing towards the object; and the utterance comprises the action associated with the object. |
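Claims 4 and 5 describe converting a relative amount sensed from the gesture (for example, the thumb-to-forefinger distance) into an amount the target object understands, and claim 1 describes transmitting an enhanced directive built from the object, the gesture, and the utterance. The sketch below shows one hypothetical way to assemble such a directive; the scaling bounds, field names, and JSON shape are assumptions and are not defined by the claims.

import json

def relative_amount_from_pinch(thumb_forefinger_distance_m, min_m=0.01, max_m=0.10):
    # Map an RF-sensed thumb-to-forefinger distance to a 0-100% amount
    # (assumed linear scaling between assumed minimum and maximum separations).
    fraction = (thumb_forefinger_distance_m - min_m) / (max_m - min_m)
    return round(100 * min(max(fraction, 0.0), 1.0))

def build_enhanced_directive(utterance, gesture, target_object, action, amount_percent=None):
    # Assemble an enhanced directive from the object, the gesture, and the utterance;
    # in a real system this payload would be sent to the smart assistant device's API.
    directive = {
        "object": target_object,   # e.g., the lamp the user pointed at
        "gesture": gesture,        # e.g., "point", "pinch"
        "utterance": utterance,    # e.g., "dim that light"
        "action": action,          # e.g., parsed from the utterance
    }
    if amount_percent is not None:
        directive["amount_percent"] = amount_percent
    return json.dumps(directive)

amount = relative_amount_from_pinch(0.04)  # a 4 cm pinch maps to roughly 33%
print(build_enhanced_directive("dim that light", "pinch", "living_room_lamp", "dim", amount))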
ENABLING A GESTURE INTERFACE FOR VOICE ASSISTANTS USING RADIO FREQUENCY (RF) SENSING

BACKGROUND OF THE DISCLOSURE

1. Field of the Disclosure

[0001] Aspects of the disclosure relate generally to augmenting voice assistant devices.

2. Description of the Related Art

[0002] Wireless communication systems have developed through various generations, including a first-generation analog wireless phone service (1G), a second-generation (2G) digital wireless phone service (including interim 2.5G and 2.75G networks), a third-generation (3G) high speed data, Internet-capable wireless service, and a fourth-generation (4G) service (e.g., Long Term Evolution (LTE) or WiMax). There are presently many different types of wireless communication systems in use, including cellular and personal communications service (PCS) systems. Examples of known cellular systems include the cellular analog advanced mobile phone system (AMPS), and digital cellular systems based on code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), the Global System for Mobile communications (GSM), etc. A fifth generation (5G) wireless standard, referred to as New Radio (NR), calls for higher data transfer speeds, greater numbers of connections, and better coverage, among other improvements.

[0003] Voice assistants receive voice commands to control objects. In addition, the voice assistants require that a user verbally specify the object that the user desires to control.

SUMMARY

[0004] The following presents a simplified summary relating to one or more aspects disclosed herein. Thus, the following summary should not be considered an extensive overview relating to all contemplated aspects, nor should the following summary be considered to identify key or critical elements relating to all contemplated aspects or to delineate the scope associated with any particular aspect. Accordingly, the following summary has the sole purpose of presenting certain concepts relating to one or more aspects relating to the mechanisms disclosed herein in a simplified form to precede the detailed description presented below.

[0005] In an aspect, a method for instructing a smart assistant device to perform an action includes receiving, by a microphone, an utterance from a user. The method includes determining, using radio frequency sensing, that the user performed a gesture while making the utterance, determining an object associated with the gesture, and transmitting an enhanced directive to an application programming interface (API) of a smart assistant device. The enhanced directive is based on the object, the gesture, and the utterance. The enhanced directive causes the smart assistant device to perform an action.

[0006] In an aspect, a device includes a memory, at least one transceiver, and at least one processor communicatively coupled to the memory and the at least one transceiver. The at least one processor is configured to receive, by a microphone, an utterance from a user. The at least one processor is configured to determine, using radio frequency sensing, that the user performed a gesture while making the utterance, determine an object associated with the gesture, and transmit an enhanced directive to an application programming interface (API) of a smart assistant device. The enhanced directive is based on the object, the gesture, and the utterance. The enhanced directive causes the smart assistant device to perform an action.

[0007] In an aspect, an apparatus comprises means for receiving an utterance from a user, means for determining that the user performed a gesture while making the utterance, means for determining an object associated with the gesture, and means for transmitting an enhanced directive to an application programming interface (API) of a smart assistant device. The enhanced directive is based on the object, the gesture, and the utterance. The enhanced directive causes the smart assistant device to perform an action.

[0008] In an aspect, a non-transitory computer-readable storage medium is used to store instructions executable by one or more processors to receive, by a microphone, an utterance from a user. The instructions are executable by the one or more processors to determine, using radio frequency sensing, that the user performed a gesture while making the utterance. The instructions are executable by the one or more processors to determine an object associated with the gesture. The instructions are executable by the one or more processors to transmit an enhanced directive to an application programming interface (API) of a smart assistant device. The enhanced directive is based on the object, the gesture, and the utterance. The enhanced directive causes the smart assistant device to perform an action.

[0009] Other objects and advantages associated with the aspects disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] The accompanying drawings are presented to aid in the description of various aspects of the disclosure and are provided solely for illustration of the aspects and not limitation thereof.

[0011] FIG. 1 illustrates an example wireless communications system, according to aspects of the disclosure.

[0012] FIGS. 2A and 2B illustrate example wireless network structures, according to aspects of the disclosure.

[0013] FIGS. 3A, 3B, and 3C are simplified block diagrams of several sample aspects of components that may be employed in a user equipment (UE), a base station, and a network entity, respectively, and configured to support communications as taught herein.

[0014] FIG. 4 is a block diagram illustrating a system to detect a user gesture, according to aspects of the disclosure.

[0015] FIG. 5 illustrates a process that includes transmitting an enhanced directive to an application programming interface (API) of a voice assistant device, according to aspects of the disclosure.

[0016] FIG. 6 illustrates a process that includes interaction between a Wi-Fi device and a voice assistant device, according to aspects of the disclosure.

DETAILED DESCRIPTION

[0017] Aspects of the disclosure are provided in the following description and related drawings directed to various examples provided for illustration purposes. Alternate aspects may be devised without departing from the scope of the disclosure. Additionally, well-known elements of the disclosure will not be described in detail or will be omitted so as not to obscure the relevant details of the disclosure.

[0018] The words "exemplary" and/or "example" are used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" and/or "example" is not necessarily to be construed as preferred or advantageous over other aspects. Likewise, the term "aspects of the disclosure" does not require that all aspects of the disclosure include the discussed feature, advantage, or mode of operation.

[0019] The systems and techniques described herein illustrate how a Wi-Fi device can use radio frequency (RF) sensing to detect when a user has performed a gesture and link to a voice assistant device to determine whether the user made an utterance (e.g., one or more words) at substantially the same time as the user performed the gesture. RF sensing may include Wi-Fi sensing, millimeter (mm) wave sensing, 5G NR sensing, or another type of RF-based sensing. If the utterance includes a trigger word (e.g., "this", "that", "here", "there", or the like), then the Wi-Fi device may determine a direction of the gesture and determine, based on the direction, an object. The object may be (i) a physical object, such as a light source, a media playback device, blinds/shutters, a heating ventilation air conditioning (HVAC) controller such as a thermostat, or (ii) a more abstract type of object, such as a process, software, or the like. For example, the user may gesture towards a light source and utter "Turn this light on." As another example, the user may gesture towards a thermostat and utter "Turn the temperature down." As yet another example, the user may gesture towards a set of blinds and utter "Open these blinds."

[0020] The Wi-Fi device may use the gesture and the utterance to create an enhanced directive and send the enhanced directive to the voice assistant device. After receiving the enhanced directive, the voice assistant device causes the object to perform an action, such as turning on or off a light source, initiating or stopping media playback, adjusting an audio stream associated with the media playback, adjusting a video stream associated with the media playback, adjusting a temperature of a thermostat, or the like. Adjusting the audio stream may include increasing or decreasing volume, adjusting frequency equalization, routing the audio stream to one or more outputs, and the like. In this way, the user can use gestures along with utterances to control objects in an intuitive manner. A minimal code sketch of how such an enhanced directive might be composed is provided later in the description, following paragraph [0053].

[0021] Those of skill in the art will appreciate that the information and signals described below may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description below may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof, depending in part on the particular application, in part on the desired design, in part on the corresponding technology, etc.

[0022] Further, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, the sequence(s) of actions described herein can be considered to be embodied entirely within any form of non-transitory computer-readable storage medium having stored therein a corresponding set of computer instructions that, upon execution, would cause or instruct an associated processor of a device to perform the functionality described herein. Thus, the various aspects of the disclosure may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the aspects described herein, the corresponding form of any such aspects may be described herein as, for example, "logic configured to" perform the described action.

[0023] As used herein, the terms "user equipment" (UE) and "base station" are not intended to be specific or otherwise limited to any particular radio access technology (RAT), unless otherwise noted. In general, a UE may be any wireless communication device (e.g., a mobile phone, router, tablet computer, laptop computer, consumer asset locating device, wearable (e.g., smartwatch, glasses, augmented reality (AR) / virtual reality (VR) headset, etc.), vehicle (e.g., automobile, motorcycle, bicycle, etc.), Internet of Things (IoT) device, etc.) used by a user to communicate over a wireless communications network. A UE may be mobile or may (e.g., at certain times) be stationary, and may communicate with a radio access network (RAN). As used herein, the term "UE" may be referred to interchangeably as an "access terminal" or "AT," a "client device," a "wireless device," a "subscriber device," a "subscriber terminal," a "subscriber station," a "user terminal" or "UT," a "mobile device," a "mobile terminal," a "mobile station," or variations thereof. Generally, UEs can communicate with a core network via a RAN, and through the core network the UEs can be connected with external networks such as the Internet and with other UEs. Of course, other mechanisms of connecting to the core network and/or the Internet are also possible for the UEs, such as over wired access networks, wireless local
area network (WLAN) networks (e.g., based on the Institute of Electrical and Electronics Engineers (IEEE) 802.11 specification, etc.) and so on.[0024] A base station may operate according to one of several RATs in communication with UEs depending on the network in which it is deployed, and may be alternatively referred to as an access point (AP), a network node, a NodeB, an evolved NodeB (eNB), a next generation eNB (ng-eNB), a New Radio (NR) Node B (also referred to as a gNB or gNodeB), etc. A base station may be used primarily to support wireless access by UEs, including supporting data, voice, and/or signaling connections for the supported UEs. In some systems a base station may provide purely edge node signaling functions while in other systems it may provide additional control and/or network management functions. A communication link through which UEs can send signals to a base station is called an uplink (UL) channel (e.g., a reverse traffic channel, a reverse control channel, an access channel, etc.). A communication link through which the base station can send signals to UEs is called a downlink (DL) or forward link channel (e.g., a paging channel, a control channel, a broadcast channel, a forward traffic channel, etc.). As used herein the term traffic channel (TCH) can refer to either an uplink / reverse or downlink / forward traffic channel.[0025] The term “base station” may refer to a single physical transmission-reception point (TRP) or to multiple physical TRPs that may or may not be co-located. For example, where the term “base station” refers to a single physical TRP, the physical TRP may be an antenna of the base station corresponding to a cell (or several cell sectors) of the base station. Where the term “base station” refers to multiple co-located physical TRPs, the physical TRPs may be an array of antennas (e.g., as in a multiple-input multiple-output (MIMO) system or where the base station employs beamforming) of the base station. Where the term “base station” refers to multiple non-co-located physical TRPs, the physical TRPs may be a distributed antenna system (DAS) (a network of spatially separated antennas connected to a common source via a transport medium) or a remote radio head (RRH) (a remote base station connected to a serving base station). Alternatively, the non-co-located physical TRPs may be the serving base station receiving the measurement report from the UE and a neighbor base station whose reference radio frequency (RF) signals the UE is measuring. Because a TRP is the point from which a base station transmits and receives
wireless signals, as used herein, references to transmission from or reception at a base station are to be understood as referring to a particular TRP of the base station.

[0026] In some implementations that support positioning of UEs, a base station may not support wireless access by UEs (e.g., may not support data, voice, and/or signaling connections for UEs), but may instead transmit reference signals to UEs to be measured by the UEs, and/or may receive and measure signals transmitted by the UEs. Such a base station may be referred to as a positioning beacon (e.g., when transmitting signals to UEs) and/or as a location measurement unit (e.g., when receiving and measuring signals from UEs).

[0027] An "RF signal" comprises an electromagnetic wave of a given frequency that transports information through the space between a transmitter and a receiver. As used herein, a transmitter may transmit a single "RF signal" or multiple "RF signals" to a receiver. However, the receiver may receive multiple "RF signals" corresponding to each transmitted RF signal due to the propagation characteristics of RF signals through multipath channels. The same transmitted RF signal on different paths between the transmitter and receiver may be referred to as a "multipath" RF signal. As used herein, an RF signal may also be referred to as a "wireless signal" or simply a "signal" where it is clear from the context that the term "signal" refers to a wireless signal or an RF signal.

[0028] FIG. 1 illustrates an example wireless communications system 100, according to aspects of the disclosure. The wireless communications system 100 (which may also be referred to as a wireless wide area network (WWAN)) may include various base stations 102 (labeled "BS") and various UEs 104. The base stations 102 may include macro cell base stations (high power cellular base stations) and/or small cell base stations (low power cellular base stations). In an aspect, the macro cell base stations may include eNBs and/or ng-eNBs where the wireless communications system 100 corresponds to an LTE network, or gNBs where the wireless communications system 100 corresponds to a NR network, or a combination of both, and the small cell base stations may include femtocells, picocells, microcells, etc.

[0029] The base stations 102 may collectively form a RAN and interface with a core network 170 (e.g., an evolved packet core (EPC) or a 5G core (5GC)) through backhaul links 122, and through the core network 170 to one or more location servers 172 (e.g., a location management function (LMF) or a secure user plane location (SUPL) location platform (SLP)). The location server(s) 172 may be part of core network 170 or may be external
to core network 170. In addition to other functions, the base stations 102 may perform functions that relate to one or more of transferring user data, radio channel ciphering and deciphering, integrity protection, header compression, mobility control functions (e.g., handover, dual connectivity), inter-cell interference coordination, connection setup and release, load balancing, distribution for non-access stratum (NAS) messages, NAS node selection, synchronization, RAN sharing, multimedia broadcast multicast service (MBMS), subscriber and equipment trace, RAN information management (RIM), paging, positioning, and delivery of warning messages. The base stations 102 may communicate with each other directly or indirectly (e.g., through the EPC / 5GC) over backhaul links 134, which may be wired or wireless.[0030] The base stations 102 may wirelessly communicate with the UEs 104. Each of the base stations 102 may provide communication coverage for a respective geographic coverage area 110. In an aspect, one or more cells may be supported by a base station 102 in each geographic coverage area 110. A “cell” is a logical communication entity used for communication with a base station (e.g., over some frequency resource, referred to as a carrier frequency, component carrier, carrier, band, or the like), and may be associated with an identifier (e.g., a physical cell identifier (PCI), an enhanced cell identifier (ECI), a virtual cell identifier (VCI), a cell global identifier (CGI), etc.) for distinguishing cells operating via the same or a different carrier frequency. In some cases, different cells may be configured according to different protocol types (e.g., machine-type communication (MTC), narrowband IoT (NB-IoT), enhanced mobile broadband (eMBB), or others) that may provide access for different types of UEs. Because a cell is supported by a specific base station, the term “cell” may refer to either or both of the logical communication entity and the base station that supports it, depending on the context. In addition, because a TRP is typically the physical transmission point of a cell, the terms “cell” and “TRP” may be used interchangeably. In some cases, the term “cell” may also refer to a geographic coverage area of a base station (e.g., a sector), insofar as a carrier frequency can be detected and used for communication within some portion of geographic coverage areas 110.[0031] While neighboring macro cell base station 102 geographic coverage areas 110 may partially overlap (e.g., in a handover region), some of the geographic coverage areas 110 may be substantially overlapped by a larger geographic coverage area 110. For example,
a small cell base station 102' (labeled “SC” for “small cell”) may have a geographic coverage area 110' that substantially overlaps with the geographic coverage area 110 of one or more macro cell base stations 102. A network that includes both small cell and macro cell base stations may be known as a heterogeneous network. A heterogeneous network may also include home eNBs (HeNBs), which may provide service to a restricted group known as a closed subscriber group (CSG).[0032] The communication links 120 between the base stations 102 and the UEs 104 may include uplink (also referred to as reverse link) transmissions from a UE 104 to a base station 102 and/or downlink (DL) (also referred to as forward link) transmissions from a base station 102 to a UE 104. The communication links 120 may use MIMO antenna technology, including spatial multiplexing, beamforming, and/or transmit diversity. The communication links 120 may be through one or more carrier frequencies. Allocation of carriers may be asymmetric with respect to downlink and uplink (e.g., more or less carriers may be allocated for downlink than for uplink).[0033] The wireless communications system 100 may further include a wireless local area network (WLAN) access point (AP) 150 in communication with WLAN stations (STAs) 152 via communication links 154 in an unlicensed frequency spectrum (e.g., 5 GHz). When communicating in an unlicensed frequency spectrum, the WLAN STAs 152 and/or the WLAN AP 150 may perform a clear channel assessment (CCA) or listen before talk (LBT) procedure prior to communicating in order to determine whether the channel is available.[0034] The small cell base station 102' may operate in a licensed and/or an unlicensed frequency spectrum. When operating in an unlicensed frequency spectrum, the small cell base station 102' may employ LTE or NR technology and use the same 5 GHz unlicensed frequency spectrum as used by the WLAN AP 150. The small cell base station 102', employing LTE / 5G in an unlicensed frequency spectrum, may boost coverage to and/or increase capacity of the access network. NR in unlicensed spectrum may be referred to as NR-U. LTE in an unlicensed spectrum may be referred to as LTE-U, licensed assisted access (LAA), or MulteFire.[0035] The wireless communications system 100 may further include a millimeter wave (mmW) base station 180 that may operate in mmW frequencies and/or near mmW frequencies in communication with a UE 182. Extremely high frequency (EHF) is part of the RF in the
electromagnetic spectrum. EHF has a range of 30 GHz to 300 GHz and a wavelength between 1 millimeter and 10 millimeters. Radio waves in this band may be referred to as a millimeter wave. Near mmW may extend down to a frequency of 3 GHz with a wavelength of 100 millimeters. The super high frequency (SHF) band extends between 3 GHz and 30 GHz, also referred to as centimeter wave. Communications using the mmW/near mmW radio frequency band have high path loss and a relatively short range. The mmW base station 180 and the UE 182 may utilize beamforming (transmit and/or receive) over a mmW communication link 184 to compensate for the extremely high path loss and short range. Further, it will be appreciated that in alternative configurations, one or more base stations 102 may also transmit using mmW or near mmW and beamforming. Accordingly, it will be appreciated that the foregoing illustrations are merely examples and should not be construed to limit the various aspects disclosed herein.[0036] Transmit beamforming is a technique for focusing an RF signal in a specific direction. Traditionally, when a network node (e.g., a base station) broadcasts an RF signal, it broadcasts the signal in all directions (omni-directionally). With transmit beamforming, the network node determines where a given target device (e.g., a UE) is located (relative to the transmitting network node) and projects a stronger downlink RF signal in that specific direction, thereby providing a faster (in terms of data rate) and stronger RF signal for the receiving device(s). To change the directionality of the RF signal when transmitting, a network node can control the phase and relative amplitude of the RF signal at each of the one or more transmitters that are broadcasting the RF signal. For example, a network node may use an array of antennas (referred to as a “phased array” or an “antenna array”) that creates abeam of RF waves that can be “steered” to point in different directions, without actually moving the antennas. Specifically, the RF current from the transmitter is fed to the individual antennas with the correct phase relationship so that the radio waves from the separate antennas add together to increase the radiation in a desired direction, while cancelling to suppress radiation in undesired directions.[0037] Transmit beams may be quasi-co-located, meaning that they appear to the receiver (e.g., a UE) as having the same parameters, regardless of whether or not the transmitting antennas of the network node themselves are physically co-located. In NR, there are four types of quasi-co-location (QCL) relations. Specifically, a QCL relation of a given type means that certain parameters about a second reference RF signal on a second beam can
be derived from information about a source reference RF signal on a source beam. Thus, if the source reference RF signal is QCL Type A, the receiver can use the source reference RF signal to estimate the Doppler shift, Doppler spread, average delay, and delay spread of a second reference RF signal transmitted on the same channel. If the source reference RF signal is QCL Type B, the receiver can use the source reference RF signal to estimate the Doppler shift and Doppler spread of a second reference RF signal transmitted on the same channel. If the source reference RF signal is QCL Type C, the receiver can use the source reference RF signal to estimate the Doppler shift and average delay of a second reference RF signal transmitted on the same channel. If the source reference RF signal is QCL Type D, the receiver can use the source reference RF signal to estimate the spatial receive parameter of a second reference RF signal transmitted on the same channel.[0038] In receive beamforming, the receiver uses a receive beam to amplify RF signals detected on a given channel. For example, the receiver can increase the gain setting and/or adjust the phase setting of an array of antennas in a particular direction to amplify (e.g., to increase the gain level of) the RF signals received from that direction. Thus, when a receiver is said to beamform in a certain direction, it means the beam gain in that direction is high relative to the beam gain along other directions, or the beam gain in that direction is the highest compared to the beam gain in that direction of all other receive beams available to the receiver. This results in a stronger received signal strength (e.g., reference signal received power (RSRP), reference signal received quality (RSRQ), signal-to- interference-plus-noise ratio (SINR), etc.) of the RF signals received from that direction.[0039] Transmit and receive beams may be spatially related. A spatial relation means that parameters for a second beam (e.g., a transmit or receive beam) for a second reference signal can be derived from information about a first beam (e.g., a receive beam or a transmit beam) for a first reference signal. For example, a UE may use a particular receive beam to receive a reference downlink reference signal (e.g., synchronization signal block (SSB)) from a base station. The UE can then form a transmit beam for sending an uplink reference signal (e.g., sounding reference signal (SRS)) to that base station based on the parameters of the receive beam.[0040] Note that a “downlink” beam may be either a transmit beam or a receive beam, depending on the entity forming it. For example, if a base station is forming the downlink beam to transmit a reference signal to a UE, the downlink beam is a transmit beam. If the UE is
forming the downlink beam, however, it is a receive beam to receive the downlink reference signal. Similarly, an “uplink” beam may be either a transmit beam or a receive beam, depending on the entity forming it. For example, if a base station is forming the uplink beam, it is an uplink receive beam, and if a UE is forming the uplink beam, it is an uplink transmit beam.[0041] In 5G, the frequency spectrum in which wireless nodes (e.g., base stations 102/180, UEs 104/182) operate is divided into multiple frequency ranges, FR1 (from 450 to 6000 MHz), FR2 (from 24250 to 52600 MHz), FR3 (above 52600 MHz), and FR4 (between FR1 and FR2). mmW frequency bands generally include the FR2, FR3, and FR4 frequency ranges. As such, the terms “mmW” and “FR2” or “FR3” or “FR4” may generally be used interchangeably.[0042] In a multi-carrier system, such as 5G, one of the carrier frequencies is referred to as the “primary carrier” or “anchor carrier” or “primary serving cell” or “PCell,” and the remaining carrier frequencies are referred to as “secondary carriers” or “secondary serving cells” or “SCells.” In carrier aggregation, the anchor carrier is the carrier operating on the primary frequency (e.g., FR1) utilized by a UE 104/182 and the cell in which the UE 104/182 either performs the initial radio resource control (RRC) connection establishment procedure or initiates the RRC connection re-establishment procedure. The primary carrier carries all common and UE-specific control channels, and may be a carrier in a licensed frequency (however, this is not always the case). A secondary carrier is a carrier operating on a second frequency (e.g., FR2) that may be configured once the RRC connection is established between the UE 104 and the anchor carrier and that may be used to provide additional radio resources. In some cases, the secondary carrier may be a carrier in an unlicensed frequency. The secondary carrier may contain only necessary signaling information and signals, for example, those that are UE-specific may not be present in the secondary carrier, since both primary uplink and downlink carriers are typically UE-specific. This means that different UEs 104/182 in a cell may have different downlink primary carriers. The same is true for the uplink primary carriers. The network is able to change the primary carrier of any UE 104/182 at any time. This is done, for example, to balance the load on different carriers. Because a “serving cell” (whether a PCell or an SCell) corresponds to a carrier frequency / component carrier over which
some base station is communicating, the term “cell,” “serving cell,” “component carrier,” “carrier frequency,” and the like can be used interchangeably.[0043] For example, still referring to FIG. 1, one of the frequencies utilized by the macro cell base stations 102 may be an anchor carrier (or “PCell”) and other frequencies utilized by the macro cell base stations 102 and/or the mmW base station 180 may be secondary carriers (“SCells”). The simultaneous transmission and/or reception of multiple carriers enables the UE 104/182 to significantly increase its data transmission and/or reception rates. For example, two 20 MHz aggregated carriers in a multi-carrier system would theoretically lead to a two-fold increase in data rate (i.e., 40 MHz), compared to that attained by a single 20 MHz carrier.[0044] The wireless communications system 100 may further include a UE 164 that may communicate with a macro cell base station 102 over a communication link 120 and/or the mmW base station 180 over a mmW communication link 184. For example, the macro cell base station 102 may support a PCell and one or more SCells for the UE 164 and the mmW base station 180 may support one or more SCells for the UE 164.[0045] In the example of FIG. 1, any of the illustrated UEs (shown in FIG. 1 as a single UE 104 for simplicity) may receive signals 124 from one or more Earth orbiting space vehicles (SVs) 112 (e.g., satellites). In an aspect, the SVs 112 may be part of a satellite positioning system that a UE 104 can use as an independent source of location information. A satellite positioning system typically includes a system of transmitters (e.g., SVs 112) positioned to enable receivers (e.g., UEs 104) to determine their location on or above the Earth based, at least in part, on positioning signals (e.g., signals 124) received from the transmitters. Such a transmitter typically transmits a signal marked with a repeating pseudo-random noise (PN) code of a set number of chips. While typically located in SVs 112, transmitters may sometimes be located on ground-based control stations, base stations 102, and/or other UEs 104. A UE 104 may include one or more dedicated receivers specifically designed to receive signals 124 for deriving geo location information from the SVs 112.[0046] In a satellite positioning system, the use of signals 124 can be augmented by various satellite-based augmentation systems (SBAS) that may be associated with or otherwise enabled for use with one or more global and/or regional navigation satellite systems. For example, an SBAS may include an augmentation system(s) that provides integrity information, differential corrections, etc., such as the Wide Area Augmentation System
(WAAS), the European Geostationary Navigation Overlay Service (EGNOS), the Multi functional Satellite Augmentation System (MSAS), the Global Positioning System (GPS) Aided Geo Augmented Navigation or GPS and Geo Augmented Navigation system (GAGAN), and/or the like. Thus, as used herein, a satellite positioning system may include any combination of one or more global and/or regional navigation satellites associated with such one or more satellite positioning systems.[0047] In an aspect, SVs 112 may additionally or alternatively be part of one or more non terrestrial networks (NTNs). In an NTN, an SV 112 is connected to an earth station (also referred to as a ground station, NTN gateway, or gateway), which in turn is connected to an element in a 5G network, such as a modified base station 102 (without a terrestrial antenna) or a network node in a 5GC. This element would in turn provide access to other elements in the 5G network and ultimately to entities external to the 5G network, such as Internet web servers and other user devices. In that way, a UE 104 may receive communication signals (e.g., signals 124) from an SV 112 instead of, or in addition to, communication signals from a terrestrial base station 102.[0048] The wireless communications system 100 may further include one or more UEs, such as UE 190, that connects indirectly to one or more communication networks via one or more device-to-device (D2D) peer-to-peer (P2P) links (referred to as “sidelinks”). In the example of FIG. 1, UE 190 has a D2D P2P link 192 with one of the UEs 104 connected to one of the base stations 102 (e.g., through which UE 190 may indirectly obtain cellular connectivity) and a D2D P2P link 194 with WLAN STA 152 connected to the WLAN AP 150 (through which UE 190 may indirectly obtain WLAN-based Internet connectivity). In an example, the D2D P2P links 192 and 194 may be supported with any well-known D2D RAT, such as LTE Direct (LTE-D), WiFi Direct (WiFi-D), Bluetooth®, and so on.[0049] FIG. 2A illustrates an example wireless network structure 200. For example, a 5GC 210 (also referred to as a Next Generation Core (NGC)) can be viewed functionally as control plane (C-plane) functions 214 (e.g., UE registration, authentication, network access, gateway selection, etc.) and user plane (U-plane) functions 212, (e.g., UE gateway function, access to data networks, IP routing, etc.) which operate cooperatively to form the core network. User plane interface (NG-U) 213 and control plane interface (NG-C) 215 connect the gNB 222 to the 5GC 210 and specifically to the user plane functions 212
and control plane functions 214, respectively. In an additional configuration, an ng-eNB 224 may also be connected to the 5GC 210 via NG-C 215 to the control plane functions 214 and NG-U 213 to user plane functions 212. Further, ng-eNB 224 may directly communicate with gNB 222 via a backhaul connection 223. In some configurations, a Next Generation RAN (NG-RAN) 220 may have one or more gNBs 222, while other configurations include one or more of both ng-eNBs 224 and gNBs 222. Either (or both) gNB 222 or ng-eNB 224 may communicate with one or more UEs 204 (e.g., any of the UEs described herein).

[0050] Another optional aspect may include a location server 230, which may be in communication with the 5GC 210 to provide location assistance for UE(s) 204. The location server 230 can be implemented as a plurality of separate servers (e.g., physically separate servers, different software modules on a single server, different software modules spread across multiple physical servers, etc.), or alternately may each correspond to a single server. The location server 230 can be configured to support one or more location services for UEs 204 that can connect to the location server 230 via the core network, 5GC 210, and/or via the Internet (not illustrated). Further, the location server 230 may be integrated into a component of the core network, or alternatively may be external to the core network (e.g., a third-party server, such as an original equipment manufacturer (OEM) server or service server).

[0051] FIG. 2B illustrates another example wireless network structure 250. A 5GC 260 (which may correspond to 5GC 210 in FIG. 2A) can be viewed functionally as control plane functions, provided by an access and mobility management function (AMF) 264, and user plane functions, provided by a user plane function (UPF) 262, which operate cooperatively to form the core network (i.e., 5GC 260). The functions of the AMF 264 include registration management, connection management, reachability management, mobility management, lawful interception, transport for session management (SM) messages between one or more UEs 204 (e.g., any of the UEs described herein) and a session management function (SMF) 266, transparent proxy services for routing SM messages, access authentication and access authorization, transport for short message service (SMS) messages between the UE 204 and the short message service function (SMSF) (not shown), and security anchor functionality (SEAF). The AMF 264 also interacts with an authentication server function (AUSF) (not shown) and the UE 204, and receives the intermediate key that was established as a result of the UE 204 authentication process. In the case of authentication based on a UMTS (universal mobile telecommunications system) subscriber identity module (USIM), the AMF 264 retrieves the security material from the AUSF. The functions of the AMF 264 also include security context management (SCM). The SCM receives a key from the SEAF that it uses to derive access-network specific keys. The functionality of the AMF 264 also includes location services management for regulatory services, transport for location services messages between the UE 204 and a location management function (LMF) 270 (which acts as a location server 230), transport for location services messages between the NG-RAN 220 and the LMF 270, evolved packet system (EPS) bearer identifier allocation for interworking with the EPS, and UE 204 mobility event notification. In addition, the AMF 264 also supports functionalities for non-3GPP (Third Generation Partnership Project) access networks.

[0052] Functions of the UPF 262 include acting as an anchor point for intra-/inter-RAT mobility (when applicable), acting as an external protocol data unit (PDU) session point of interconnect to a data network (not shown), providing packet routing and forwarding, packet inspection, user plane policy rule enforcement (e.g., gating, redirection, traffic steering), lawful interception (user plane collection), traffic usage reporting, quality of service (QoS) handling for the user plane (e.g., uplink/downlink rate enforcement, reflective QoS marking in the downlink), uplink traffic verification (service data flow (SDF) to QoS flow mapping), transport level packet marking in the uplink and downlink, downlink packet buffering and downlink data notification triggering, and sending and forwarding of one or more "end markers" to the source RAN node. The UPF 262 may also support transfer of location services messages over a user plane between the UE 204 and a location server, such as an SLP 272.

[0053] The functions of the SMF 266 include session management, UE Internet protocol (IP) address allocation and management, selection and control of user plane functions, configuration of traffic steering at the UPF 262 to route traffic to the proper destination, control of part of policy enforcement and QoS, and downlink data notification. The interface over which the SMF 266 communicates with the AMF 264 is referred to as the N11 interface.
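To make the gesture-plus-utterance flow described in paragraphs [0019] and [0020] above more concrete, the following minimal Python sketch shows one way a Wi-Fi sensing device might combine a detected gesture with a trigger-word utterance into an enhanced directive and serialize it for a voice assistant API. Every identifier in the sketch (Gesture, EnhancedDirective, resolve_object, relative_amount, transmit, the example object names, and the payload format) is a hypothetical illustration introduced here for explanation only; none of it is defined by this disclosure or by any particular voice assistant vendor.

```python
# Hypothetical sketch only: combining an RF-sensing gesture with an utterance
# into an "enhanced directive" for a voice assistant API. All names, fields,
# and thresholds below are illustrative assumptions, not a vendor API.

from dataclasses import dataclass, asdict
from typing import Optional
import json

# Deictic words indicating that the utterance relies on a gesture (see [0019]).
TRIGGER_WORDS = {"this", "that", "here", "there", "these", "those"}


@dataclass
class Gesture:
    azimuth_deg: float        # pointing direction estimated via RF sensing
    pinch_distance_cm: float  # e.g., thumb-to-forefinger separation


@dataclass
class EnhancedDirective:
    target_object: str             # object resolved from the gesture direction
    action: str                    # action parsed from the utterance
    amount_percent: Optional[int]  # optional amount derived from the gesture
    utterance: str                 # raw utterance, forwarded for the assistant's NLU


def contains_trigger_word(utterance: str) -> bool:
    """Return True if the utterance contains a deictic trigger word."""
    return any(w.strip(".,!?").lower() in TRIGGER_WORDS for w in utterance.split())


def resolve_object(azimuth_deg: float) -> str:
    """Map a pointing direction to a registered object (hypothetical room map)."""
    known_objects = {(40.0, 60.0): "living_room_lamp", (100.0, 130.0): "thermostat"}
    for (lo, hi), name in known_objects.items():
        if lo <= azimuth_deg <= hi:
            return name
    return "unknown"


def relative_amount(pinch_distance_cm: float, max_cm: float = 15.0) -> int:
    """Convert a pinch distance into a 0-100 percent amount the object understands."""
    return max(0, min(100, round(100 * pinch_distance_cm / max_cm)))


def build_directive(utterance: str, gesture: Gesture) -> Optional[EnhancedDirective]:
    """Build an enhanced directive when the utterance depends on the gesture."""
    if not contains_trigger_word(utterance):
        return None  # ordinary voice command; the assistant can handle it alone
    action = "dim" if "dim" in utterance.lower() else "toggle"
    return EnhancedDirective(
        target_object=resolve_object(gesture.azimuth_deg),
        action=action,
        amount_percent=relative_amount(gesture.pinch_distance_cm),
        utterance=utterance,
    )


def transmit(directive: EnhancedDirective) -> str:
    """Serialize the directive as it might be POSTed to an assistant API."""
    return json.dumps(asdict(directive))


if __name__ == "__main__":
    gesture = Gesture(azimuth_deg=52.0, pinch_distance_cm=6.0)
    directive = build_directive("Dim this light", gesture)
    if directive is not None:
        print(transmit(directive))  # {"target_object": "living_room_lamp", ...}
```

In an actual deployment, transmit would be replaced by the vendor's real API call and the object map would come from a calibration or linking step (e.g., the user pointing at each device during setup); those details are not specified in this excerpt.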
[0054] Another optional aspect may include an LMF 270, which may be in communication with the 5GC 260 to provide location assistance for UEs 204. The LMF 270 can be implemented as a plurality of separate servers (e.g., physically separate servers, different software modules on a single server, different software modules spread across multiple physical servers, etc.), or alternately may each correspond to a single server. The LMF 270 can be configured to support one or more location services for UEs 204 that can connect to the LMF 270 via the core network, 5GC 260, and/or via the Internet (not illustrated). The SLP 272 may support similar functions to the LMF 270, but whereas the LMF 270 may communicate with the AMF 264, NG-RAN 220, and UEs 204 over a control plane (e.g., using interfaces and protocols intended to convey signaling messages and not voice or data), the SLP 272 may communicate with UEs 204 and external clients (not shown in FIG. 2B) over a user plane (e.g., using protocols intended to carry voice and/or data like the transmission control protocol (TCP) and/or IP).

[0055] User plane interface 263 and control plane interface 265 connect the 5GC 260, and specifically the UPF 262 and AMF 264, respectively, to one or more gNBs 222 and/or ng-eNBs 224 in the NG-RAN 220. The interface between gNB(s) 222 and/or ng-eNB(s) 224 and the AMF 264 is referred to as the "N2" interface, and the interface between gNB(s) 222 and/or ng-eNB(s) 224 and the UPF 262 is referred to as the "N3" interface. The gNB(s) 222 and/or ng-eNB(s) 224 of the NG-RAN 220 may communicate directly with each other via backhaul connections 223, referred to as the "Xn-C" interface. One or more of gNBs 222 and/or ng-eNBs 224 may communicate with one or more UEs 204 over a wireless interface, referred to as the "Uu" interface.

[0056] The functionality of a gNB 222 is divided between a gNB central unit (gNB-CU) 226 and one or more gNB distributed units (gNB-DUs) 228. The interface 232 between the gNB-CU 226 and the one or more gNB-DUs 228 is referred to as the "F1" interface. A gNB-CU 226 is a logical node that includes the base station functions of transferring user data, mobility control, radio access network sharing, positioning, session management, and the like, except for those functions allocated exclusively to the gNB-DU(s) 228. More specifically, the gNB-CU 226 hosts the radio resource control (RRC), service data adaptation protocol (SDAP), and packet data convergence protocol (PDCP) protocols of the gNB 222. A gNB-DU 228 is a logical node that hosts the radio link control (RLC), medium access control (MAC), and physical (PHY) layers of the gNB 222. Its operation
is controlled by the gNB-CU 226. One gNB-DU 228 can support one or more cells, and one cell is supported by only one gNB-DU 228. Thus, a UE 204 communicates with the gNB-CU 226 via the RRC, SDAP, and PDCP layers and with a gNB-DU 228 via the RLC, MAC, and PHY layers.[0057] FIGS. 3A, 3B, and 3C illustrate several example components (represented by corresponding blocks) that may be incorporated into a UE 302 (which may correspond to any of the UEs described herein), a base station 304 (which may correspond to any of the base stations described herein), and a network entity 306 (which may correspond to or embody any of the network functions described herein, including the location server 230 and the LMF 270, or alternatively may be independent from the NG-RAN 220 and/or 5GC 210/260 infrastructure depicted in FIGS. 2A and 2B, such as a private network) to support the file transmission operations as taught herein. It will be appreciated that these components may be implemented in different types of apparatuses in different implementations (e.g., in an ASIC, in a system-on-chip (SoC), etc.). The illustrated components may also be incorporated into other apparatuses in a communication system. For example, other apparatuses in a system may include components similar to those described to provide similar functionality. Also, a given apparatus may contain one or more of the components. For example, an apparatus may include multiple transceiver components that enable the apparatus to operate on multiple carriers and/or communicate via different technologies.[0058] The UE 302 and the base station 304 each include one or more wireless wide area network (WWAN) transceivers 310 and 350, respectively, providing means for communicating (e.g., means for transmitting, means for receiving, means for measuring, means for tuning, means for refraining from transmitting, etc.) via one or more wireless communication networks (not shown), such as an NR network, an LTE network, a GSM network, and/or the like. The WWAN transceivers 310 and 350 may each be connected to one or more antennas 316 and 356, respectively, for communicating with other network nodes, such as other UEs, access points, base stations (e.g., eNBs, gNBs), etc., via at least one designated RAT (e.g., NR, LTE, GSM, etc.) over a wireless communication medium of interest (e.g., some set of time/frequency resources in a particular frequency spectrum). The WWAN transceivers 310 and 350 may be variously configured for transmitting and encoding signals 318 and 358 (e.g., messages, indications, information, and so on),
respectively, and, conversely, for receiving and decoding signals 318 and 358 (e.g., messages, indications, information, pilots, and so on), respectively, in accordance with the designated RAT. Specifically, the WWAN transceivers 310 and 350 include one or more transmitters 314 and 354, respectively, for transmitting and encoding signals 318 and 358, respectively, and one or more receivers 312 and 352, respectively, for receiving and decoding signals 318 and 358, respectively.[0059] The UE 302 and the base station 304 each also include, at least in some cases, one or more short-range wireless transceivers 320 and 360, respectively. The short-range wireless transceivers 320 and 360 may be connected to one or more antennas 326 and 366, respectively, and provide means for communicating (e.g., means for transmitting, means for receiving, means for measuring, means for tuning, means for refraining from transmitting, etc.) with other network nodes, such as other UEs, access points, base stations, etc., via at least one designated RAT (e.g., WiFi, LTE-D, Bluetooth®, Zigbee®, Z-Wave®, PC5, dedicated short-range communications (DSRC), wireless access for vehicular environments (WAVE), near-field communication (NFC), etc.) over a wireless communication medium of interest. The short-range wireless transceivers 320 and 360 may be variously configured for transmitting and encoding signals 328 and 368 (e.g., messages, indications, information, and so on), respectively, and, conversely, for receiving and decoding signals 328 and 368 (e.g., messages, indications, information, pilots, and so on), respectively, in accordance with the designated RAT. Specifically, the short-range wireless transceivers 320 and 360 include one or more transmitters 324 and 364, respectively, for transmitting and encoding signals 328 and 368, respectively, and one or more receivers 322 and 362, respectively, for receiving and decoding signals 328 and 368, respectively. As specific examples, the short-range wireless transceivers 320 and 360 may be WiFi transceivers, Bluetooth® transceivers, Zigbee® and/or Z-Wave® transceivers, NFC transceivers, or vehicle-to-vehicle (V2V) and/or vehicle-to-everything (V2X) transceivers.[0060] The UE 302 and the base station 304 also include, at least in some cases, satellite signal receivers 330 and 370. The satellite signal receivers 330 and 370 may be connected to one or more antennas 336 and 376, respectively, and may provide means for receiving and/or measuring satellite positioning/communication signals 338 and 378, respectively. Where the satellite signal receivers 330 and 370 are satellite positioning system receivers,
the satellite positioning/communication signals 338 and 378 may be global positioning system (GPS) signals, global navigation satellite system (GLONASS) signals, Galileo signals, Beidou signals, Indian Regional Navigation Satellite System (NAVIC), Quasi- Zenith Satellite System (QZSS), etc. Where the satellite signal receivers 330 and 370 are non-terrestrial network (NTN) receivers, the satellite positioning/communication signals 338 and 378 may be communication signals (e.g., carrying control and/or user data) originating from a 5G network. The satellite signal receivers 330 and 370 may comprise any suitable hardware and/or software for receiving and processing satellite positioning/communication signals 338 and 378, respectively. The satellite signal receivers 330 and 370 may request information and operations as appropriate from the other systems, and, at least in some cases, perform calculations to determine locations of the UE 302 and the base station 304, respectively, using measurements obtained by any suitable satellite positioning system algorithm.[0061] The base station 304 and the network entity 306 each include one or more network transceivers 380 and 390, respectively, providing means for communicating (e.g., means for transmitting, means for receiving, etc.) with other network entities (e.g., other base stations 304, other network entities 306). For example, the base station 304 may employ the one or more network transceivers 380 to communicate with other base stations 304 or network entities 306 over one or more wired or wireless backhaul links. As another example, the network entity 306 may employ the one or more network transceivers 390 to communicate with one or more base station 304 over one or more wired or wireless backhaul links, or with other network entities 306 over one or more wired or wireless core network interfaces.[0062] A transceiver may be configured to communicate over a wired or wireless link. A transceiver (whether a wired transceiver or a wireless transceiver) includes transmitter circuitry (e.g., transmitters 314, 324, 354, 364) and receiver circuitry (e.g., receivers 312, 322, 352, 362). A transceiver may be an integrated device (e.g., embodying transmitter circuitry and receiver circuitry in a single device) in some implementations, may comprise separate transmitter circuitry and separate receiver circuitry in some implementations, or may be embodied in other ways in other implementations. The transmitter circuitry and receiver circuitry of a wired transceiver (e.g., network transceivers 380 and 390 in some implementations) may be coupled to one or more wired network interface ports. Wireless
transmitter circuitry (e.g., transmitters 314, 324, 354, 364) may include or be coupled to a plurality of antennas (e.g., antennas 316, 326, 356, 366), such as an antenna array, that permits the respective apparatus (e.g., UE 302, base station 304) to perform transmit “beamforming,” as described herein. Similarly, wireless receiver circuitry (e.g., receivers 312, 322, 352, 362) may include or be coupled to a plurality of antennas (e.g., antennas 316, 326, 356, 366), such as an antenna array, that permits the respective apparatus (e.g., UE 302, base station 304) to perform receive beamforming, as described herein. In an aspect, the transmitter circuitry and receiver circuitry may share the same plurality of antennas (e.g., antennas 316, 326, 356, 366), such that the respective apparatus can only receive or transmit at a given time, not both at the same time. A wireless transceiver (e.g., WWAN transceivers 310 and 350, short-range wireless transceivers 320 and 360) may also include a network listen module (NLM) or the like for performing various measurements.[0063] As used herein, the various wireless transceivers (e.g., transceivers 310, 320, 350, and 360, and network transceivers 380 and 390 in some implementations) and wired transceivers (e.g., network transceivers 380 and 390 in some implementations) may generally be characterized as “a transceiver,” “at least one transceiver,” or “one or more transceivers.” As such, whether a particular transceiver is a wired or wireless transceiver may be inferred from the type of communication performed. For example, backhaul communication between network devices or servers will generally relate to signaling via a wired transceiver, whereas wireless communication between a UE (e.g., UE 302) and a base station (e.g., base station 304) will generally relate to signaling via a wireless transceiver.[0064] The UE 302, the base station 304, and the network entity 306 also include other components that may be used in conjunction with the operations as disclosed herein. The UE 302, the base station 304, and the network entity 306 include one or more processors 332, 384, and 394, respectively, for providing functionality relating to, for example, wireless communication, and for providing other processing functionality. The processors 332, 384, and 394 may therefore provide means for processing, such as means for determining, means for calculating, means for receiving, means for transmitting, means for indicating, etc. In an aspect, the processors 332, 384, and 394 may include, for example, one or more general purpose processors, multi-core processors, central
processing units (CPUs), ASICs, digital signal processors (DSPs), field programmable gate arrays (FPGAs), other programmable logic devices or processing circuitry, or various combinations thereof.[0065] The UE 302, the base station 304, and the network entity 306 include memory circuitry implementing memories 340, 386, and 396 (e.g., each including a memory device), respectively, for maintaining information (e.g., information indicative of reserved resources, thresholds, parameters, and so on). The memories 340, 386, and 396 may therefore provide means for storing, means for retrieving, means for maintaining, etc. In some cases, the UE 302, the base station 304, and the network entity 306 may include RF Sensing Module 342, 388, and 398, respectively. The RF Sensing Module 342, 388, and 398 may be hardware circuits that are part of or coupled to the processors 332, 384, and 394, respectively, that, when executed, cause the UE 302, the base station 304, and the network entity 306 to perform the functionality described herein. In other aspects, the RF Sensing Module 342, 388, and 398 may be external to the processors 332, 384, and 394 (e.g., part of a modem processing system, integrated with another processing system, etc.). Alternatively, the RF Sensing Module 342, 388, and 398 may be memory modules stored in the memories 340, 386, and 396, respectively, that, when executed by the processors 332, 384, and 394 (or a modem processing system, another processing system, etc.), cause the UE 302, the base station 304, and the network entity 306 to perform the functionality described herein. FIG. 3A illustrates possible locations of the RF Sensing Module 342, which may be, for example, part of the one or more WWAN transceivers 310, the memory 340, the one or more processors 332, or any combination thereof, or may be a standalone component. FIG. 3B illustrates possible locations of the RF Sensing Module 388, which may be, for example, part of the one or more WWAN transceivers 350, the memory 386, the one or more processors 384, or any combination thereof, or may be a standalone component. FIG. 3C illustrates possible locations of the RF Sensing Module 398, which may be, for example, part of the one or more network transceivers 390, the memory 396, the one or more processors 394, or any combination thereof, or may be a standalone component.[0066] The UE 302 may include one or more sensors 344 coupled to the one or more processors 332 to provide means for sensing or detecting movement and/or orientation information that is independent of motion data derived from signals received by the one or more
WWAN transceivers 310, the one or more short-range wireless transceivers 320, and/or the satellite receiver 330. By way of example, the sensor(s) 344 may include an accelerometer (e.g., a micro-electrical mechanical system (MEMS) device), a gyroscope, a geomagnetic sensor (e.g., a compass), an altimeter (e.g., a barometric pressure altimeter), and/or any other type of movement detection sensor. Moreover, the sensor(s) 344 may include a plurality of different types of devices and combine their outputs in order to provide motion information. For example, the sensor(s) 344 may use a combination of a multi-axis accelerometer and orientation sensors to provide the ability to compute positions in two-dimensional (2D) and/or three-dimensional (3D) coordinate systems.[0067] In addition, the UE 302 includes a user interface 346 providing means for providing indications (e.g., audible and/or visual indications) to a user and/or for receiving user input (e.g., upon user actuation of a sensing device such as a keypad, a touch screen, a microphone, and so on). Although not shown, the base station 304 and the network entity 306 may also include user interfaces.[0068] Referring to the one or more processors 384 in more detail, in the downlink, IP packets from the network entity 306 may be provided to the processor 384. The one or more processors 384 may implement functionality for an RRC layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer. The one or more processors 384 may provide RRC layer functionality associated with broadcasting of system information (e.g., master information block (MIB), system information blocks (SIBs)), RRC connection control (e.g., RRC connection paging, RRC connection establishment, RRC connection modification, and RRC connection release), inter-RAT mobility, and measurement configuration for UE measurement reporting; PDCP layer functionality associated with header compression/decompression, security (ciphering, deciphering, integrity protection, integrity verification), and handover support functions; RLC layer functionality associated with the transfer of upper layer PDUs, error correction through automatic repeat request (ARQ), concatenation, segmentation, and reassembly of RLC service data units (SDUs), re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and
transport channels, scheduling information reporting, error correction, priority handling, and logical channel prioritization.[0069] The transmitter 354 and the receiver 352 may implement Layer-1 (L1) functionality associated with various signal processing functions. Layer-1, which includes a physical (PHY) layer, may include error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels, interleaving, rate matching, mapping onto physical channels, modulation/demodulation of physical channels, and MIMO antenna processing. The transmitter 354 handles mapping to signal constellations based on various modulation schemes (e.g., binary phase-shift keying (BPSK), quadrature phase-shift keying (QPSK), M-phase-shift keying (M-PSK), M-quadrature amplitude modulation (M-QAM)). The coded and modulated symbols may then be split into parallel streams. Each stream may then be mapped to an orthogonal frequency division multiplexing (OFDM) subcarrier, multiplexed with a reference signal (e.g., pilot) in the time and/or frequency domain, and then combined together using an inverse fast Fourier transform (IFFT) to produce a physical channel carrying a time domain OFDM symbol stream. The OFDM symbol stream is spatially pre-coded to produce multiple spatial streams. Channel estimates from a channel estimator may be used to determine the coding and modulation scheme, as well as for spatial processing. The channel estimate may be derived from a reference signal and/or channel condition feedback transmitted by the UE 302. Each spatial stream may then be provided to one or more different antennas 356. The transmitter 354 may modulate an RF carrier with a respective spatial stream for transmission.[0070] At the UE 302, the receiver 312 receives a signal through its respective antenna(s) 316. The receiver 312 recovers information modulated onto an RF carrier and provides the information to the one or more processors 332. The transmitter 314 and the receiver 312 implement Layer-1 functionality associated with various signal processing functions. The receiver 312 may perform spatial processing on the information to recover any spatial streams destined for the UE 302. If multiple spatial streams are destined for the UE 302, they may be combined by the receiver 312 into a single OFDM symbol stream. The receiver 312 then converts the OFDM symbol stream from the time domain to the frequency domain using a fast Fourier transform (FFT). The frequency domain signal comprises a separate OFDM symbol stream for each subcarrier of the OFDM signal. The
symbols on each subcarrier, and the reference signal, are recovered and demodulated by determining the most likely signal constellation points transmitted by the base station 304. These soft decisions may be based on channel estimates computed by a channel estimator. The soft decisions are then decoded and de-interleaved to recover the data and control signals that were originally transmitted by the base station 304 on the physical channel. The data and control signals are then provided to the one or more processors 332, which implements Layer-3 (L3) and Layer-2 (L2) functionality.[0071] In the uplink, the one or more processors 332 provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from the core network. The one or more processors 332 are also responsible for error detection.[0072] Similar to the functionality described in connection with the downlink transmission by the base station 304, the one or more processors 332 provides RRC layer functionality associated with system information (e.g., MIB, SIBs) acquisition, RRC connections, and measurement reporting; PDCP layer functionality associated with header compression/decompression, and security (ciphering, deciphering, integrity protection, integrity verification); RLC layer functionality associated with the transfer of upper layer PDUs, error correction through ARQ, concatenation, segmentation, and reassembly of RLC SDUs, re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto transport blocks (TBs), demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through hybrid automatic repeat request (HARQ), priority handling, and logical channel prioritization.[0073] Channel estimates derived by the channel estimator from a reference signal or feedback transmitted by the base station 304 may be used by the transmitter 314 to select the appropriate coding and modulation schemes, and to facilitate spatial processing. The spatial streams generated by the transmitter 314 may be provided to different antenna(s) 316. The transmitter 314 may modulate an RF carrier with a respective spatial stream for transmission.[0074] The uplink transmission is processed at the base station 304 in a manner similar to that described in connection with the receiver function at the UE 302. The receiver 352 receives a signal through its respective antenna(s) 356. The receiver 352 recovers
information modulated onto an RF carrier and provides the information to the one or more processors 384.[0075] In the uplink, the one or more processors 384 provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets from the UE 302. IP packets from the one or more processors 384 may be provided to the core network. The one or more processors 384 are also responsible for error detection.[0076] For convenience, the UE 302, the base station 304, and/or the network entity 306 are shown in FIGS. 3A, 3B, and 3C as including various components that may be configured according to the various examples described herein. It will be appreciated, however, that the illustrated components may have different functionality in different designs. In particular, various components in FIGS. 3A to 3C are optional in alternative configurations and the various aspects include configurations that may vary due to design choice, costs, use of the device, or other considerations. For example, in case of FIG. 3A, a particular implementation of UE 302 may omit the WWAN transceiver(s) 310 (e.g., a wearable device or tablet computer or PC or laptop may have Wi-Fi and/or Bluetooth capability without cellular capability), or may omit the short-range wireless transceiver(s) 320 (e.g., cellular-only, etc.), or may omit the satellite receiver 330, or may omit the sensor(s) 344, and so on. In another example, in case of FIG. 3B, a particular implementation of the base station 304 may omit the WWAN transceiver(s) 350 (e.g., a Wi-Fi “hotspot” access point without cellular capability), or may omit the short-range wireless transceiver(s) 360 (e.g., cellular-only, etc.), or may omit the satellite receiver 370, and so on. For brevity, illustration of the various alternative configurations is not provided herein, but would be readily understandable to one skilled in the art.[0077] The various components of the UE 302, the base station 304, and the network entity 306 may be communicatively coupled to each other over data buses 334, 382, and 392, respectively. In an aspect, the data buses 334, 382, and 392 may form, or be part of, a communication interface of the UE 302, the base station 304, and the network entity 306, respectively. For example, where different logical entities are embodied in the same device (e.g., gNB and location server functionality incorporated into the same base station 304), the data buses 334, 382, and 392 may provide communication between them.
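The Layer-1 transmit and receive processing summarized in paragraphs [0069] and [0070] above (constellation mapping, placement of symbols on subcarriers, an IFFT at the transmitter, and an FFT at the receiver) can be illustrated with a minimal, non-normative Python sketch. The QPSK mapping, 64-subcarrier size, and cyclic-prefix length below are illustrative assumptions rather than parameters taken from this disclosure, and the sketch omits channel coding, precoding, MIMO processing, and reference signals.

import numpy as np

def qpsk_map(bits):
    # Map pairs of bits to Gray-coded QPSK constellation points (illustrative).
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def ofdm_modulate(symbols, cp_len=16):
    # Map one symbol per subcarrier and apply an IFFT, as in paragraph [0069].
    time_domain = np.fft.ifft(symbols) * np.sqrt(len(symbols))
    return np.concatenate([time_domain[-cp_len:], time_domain])  # prepend a cyclic prefix

def ofdm_demodulate(samples, n_subcarriers=64, cp_len=16):
    # Strip the cyclic prefix and apply an FFT, as in paragraph [0070].
    time_domain = samples[cp_len:cp_len + n_subcarriers]
    return np.fft.fft(time_domain) / np.sqrt(n_subcarriers)

# Round trip over an ideal channel: the receiver FFT recovers the transmitted symbols.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * 64)        # 2 bits per QPSK symbol, 64 subcarriers
tx_symbols = qpsk_map(bits)
rx_symbols = ofdm_demodulate(ofdm_modulate(tx_symbols))
assert np.allclose(tx_symbols, rx_symbols)

The round-trip assertion simply confirms that, over an ideal channel, the FFT at the receiver recovers the symbols the transmitter placed on the subcarriers with its IFFT; a real transceiver would additionally apply the coding, equalization, and spatial processing described above.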
[0078] The components of FIGS. 3 A, 3B, and 3C may be implemented in various ways. In some implementations, the components of FIGS. 3 A, 3B, and 3C may be implemented in one or more circuits such as, for example, one or more processors and/or one or more ASICs (which may include one or more processors). Here, each circuit may use and/or incorporate at least one memory component for storing information or executable code used by the circuit to provide this functionality. For example, some or all of the functionality represented by blocks 310 to 346 may be implemented by processor and memory component(s) of the UE 302 (e.g., by execution of appropriate code and/or by appropriate configuration of processor components). Similarly, some or all of the functionality represented by blocks 350 to 388 may be implemented by processor and memory component(s) of the base station 304 (e.g., by execution of appropriate code and/or by appropriate configuration of processor components). Also, some or all of the functionality represented by blocks 390 to 398 may be implemented by processor and memory component(s) of the network entity 306 (e.g., by execution of appropriate code and/or by appropriate configuration of processor components). For simplicity, various operations, acts, and/or functions are described herein as being performed “by a UE,” “by a base station,” “by a network entity,” etc. However, as will be appreciated, such operations, acts, and/or functions may actually be performed by specific components or combinations of components of the UE 302, base station 304, network entity 306, etc., such as the processors 332, 384, 394, the transceivers 310, 320, 350, and 360, the memories 340, 386, and 396, the RF Sensing Module 342, 388, and 398, etc.[0079] In some designs, the network entity 306 may be implemented as a core network component. In other designs, the network entity 306 may be distinct from a network operator or operation of the cellular network infrastructure (e.g., NG RAN 220 and/or 5GC 210/260). For example, the network entity 306 may be a component of a private network that may be configured to communicate with the UE 302 via the base station 304 or independently from the base station 304 (e.g., over a non-cellular communication link, such as Wi-Fi).[0080] FIG. 4 is a block diagram illustrating a system 400 to detect a user gesture, according to aspects of the disclosure. The system 400 includes a Wi-Fi device 402 (e.g., Wi-Fi enabled device), a type of user equipment (UE). The Wi-Fi device 402 may include a microphone 404 (e.g., a type of transducer), the radio frequency (RF) sensing module
342, and a transmit receive array 408. The RF sensing module 342 may use Wi-Fi sensing, millimeter (mm) wave sensing, 5G NR sensing, another type of RF-based sensing, or any combination thereof. The RF sensing module 342 may be capable of determining movement within a region 410 (e.g., a room or a portion of a room).[0081] A user 412 may (i) make an utterance 414 that includes one or more words and, at approximately the same time, (ii) perform a gesture 416. In this context, “approximately the same time” means that the user may perform the gesture 416 about 500 milliseconds (ms) or less prior to or 500 ms after making the utterance 414. In some aspects, a length of the utterance 414 may be longer than the time taken by the user to perform the gesture 416. The utterance 414 may include a trigger word 415, such as “this”, “that”, “here”, “there”, or other trigger words. In some cases, the Wi-Fi device 402 may enable the user 412 to define one or more trigger words. If the Wi-Fi device 402 determines that the utterance 414 includes the trigger word 415, then the Wi-Fi device 402 may create a link 424 (e.g., using Wi-Fi, Bluetooth, Zigbee, or another near-field wireless communication protocol) with a voice assistant device 426 (e.g., a type of UE).[0082] The gesture 416 of the user 412 may have an associated motion 418 and an associated direction 420. The direction 420 may be associated with an object 422. The object 422 may be any type of controllable object, including (i) a physical object, such as a light source, a media playback device, blinds/shutters, a heating ventilation air conditioning (HVAC) controller such as a thermostat, or (ii) a more abstract type of object, such as a process, a software application, or the like. The object 422 may include a controller 434 that is Wi-Fi enabled to receive a command 433 via Wi-Fi from the voice assistant device 426. The controller 434 is capable of controlling various functions (e.g., on, off, increase, decrease, and the like) of the object 422 based on the command 433 received from the voice assistant device 426. The functions that the controller 434 is capable of controlling may depend on the object 422. For example, when the object 422 is a light source, the command 433 may include on, off, brighten, dim, or the like. As another example, when the object 422 is a set of blinds or shutters, the command 433 may include open or close. As a further example, when the object 422 is an HVAC controller, the command 433 may include turn heat on, turn heat off, turn air-conditioning on, turn air-conditioning off, a specific temperature setting (e.g., set temperature to 20 degrees Celsius), increase the temperature by X degrees, decrease the temperature by X degrees, and so on. When the
object 422 is a media playback device, the command 433 may include initiate playback, pause playback, stop playback, increase volume, decrease volume, increase brightness, decrease brightness, increase contrast, decrease contrast, set the input source to Y (e.g., an over the air or cable channel, optical disc player, streaming service, internet site, or the like), send the audio output to Z, send a first language output to A and a second language output to B, and so on.[0083] The Wi-Fi device 402 may use the RF sensing module 342 to determine the motion 418 associated with the gesture 416. To create the enhanced directive 428, the Wi-Fi device 402 may use the RF sensing module 342 to determine a relative amount of the motion 418 and convert the relative amount to an amount that is understood by the object 422. Thus, the enhanced directive 428 may include the relative amount associated with the motion 418. For example, the relative amount of the motion 418 may include a distance between a thumb and a forefinger of a hand of the user, a distance between a left palm and a right palm of the user, or a distance between a starting position of the gesture 416 and an ending position of the gesture 416.[0084] The Wi-Fi device 402 may detect the gesture 416 using the RF sensing module 342 and use the microphone 404 to determine whether the utterance 414 occurred at approximately the same time as the gesture 416 was performed. If the Wi-Fi device 402 lacks the microphone 404, then the Wi-Fi device 402 may, after detecting the gesture 416, create the link 424 and send a request 429 to the voice assistant device 426 to determine whether the utterance 414 occurred at approximately the same time as the gesture 416 was performed. For example, the Wi-Fi device 402 may include a time at which the gesture 416 was detected in the request 429 to the voice assistant device 426. The voice assistant device 426 may store audio, such as the utterance 414, from the microphone 431 in a storage device 438 (e.g., a type of first-in-first-out (FIFO) buffer). The audio may be stored with an associated timestamp, enabling the voice assistant device 426 to determine whether the utterance 414 was made at approximately the same time as the gesture 416. The voice assistant device 426 may determine whether the utterance 414 was made at approximately the same time as the gesture 416 and indicate (e.g., via the link 424) to the Wi-Fi device 402 whether the utterance 414 was made at approximately the same time as the gesture 416.
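As a concrete illustration of the timing correlation described in paragraphs [0081] and [0084], the following Python sketch shows one way a first-in-first-out buffer of time-stamped utterances (such as storage device 438) could be queried against the time at which a gesture was detected. The 500 ms window follows paragraph [0081]; the class name, method names, and timestamps are hypothetical and only illustrate the comparison, not an actual implementation of the Wi-Fi device 402 or the voice assistant device 426.

from collections import deque

GESTURE_UTTERANCE_WINDOW_MS = 500  # "approximately the same time" per paragraph [0081]

class UtteranceBuffer:
    # FIFO of (timestamp_ms, utterance_text) pairs, in the spirit of storage device 438.
    def __init__(self, maxlen=64):
        self._buf = deque(maxlen=maxlen)

    def add(self, timestamp_ms, text):
        self._buf.append((timestamp_ms, text))

    def find_near(self, gesture_time_ms, window_ms=GESTURE_UTTERANCE_WINDOW_MS):
        # Return the most recent utterance made within +/- window_ms of the gesture, if any.
        for ts, text in reversed(self._buf):
            if abs(ts - gesture_time_ms) <= window_ms:
                return text
        return None

# Hypothetical usage: the gesture was detected at t = 10_250 ms.
buffer = UtteranceBuffer()
buffer.add(9_900, "turn that on")
buffer.add(12_000, "what's the weather")
print(buffer.find_near(10_250))   # -> "turn that on" (within 500 ms of the gesture)
print(buffer.find_near(20_000))   # -> None (no utterance close enough in time)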
[0085] If the Wi-Fi device 402 determines that the gesture 416 was performed without the utterance 414 being performed at approximately the same time, then the Wi-Fi device 402 may ignore the gesture 416. If the Wi-Fi device 402 determines that the gesture 416 was performed at approximately the same time as the utterance 414, then the Wi-Fi device 402 may determine the trigger word 415 in the utterance 414. The Wi-Fi device 402 may determine the direction 420 associated with the gesture 416 and determine the object 422 associated with the direction 420. The Wi-Fi device 402 may determine the motion 418 associated with the gesture 416. In some cases, the Wi-Fi device 402 may, based on the utterance 414 (including the trigger word 415), the object 422, the gesture 416, the motion 418 or any combination thereof, create an enhanced directive 428 and send the enhanced directive 428 to a skills application programming interface (API) 430 of the voice assistant device 426. In other cases, the Wi-Fi device 402 may send the utterance 414 (including the trigger word 415), the object 422, the gesture 416, the motion 418 or any combination thereof, to a cloud-based service 436 and the cloud-based service 436 may create the enhanced directive 428 for the Wi-Fi device 402 to send to the skills API 430 of the voice assistant device 426.[0086] The voice assistant device 426 may receive the enhanced directive 428 via the skills API 430. In response, the voice assistant device 426 may perform the action 432. For example, the action 432 may include sending a command 433 to the object 422. The object 422 may, after receiving the command 433, perform the command 433 (e.g., turn on, turn off, increase X, decrease X, or the like).[0087] Thus, a Wi-Fi device may use RF sensing to determine when a user has performed a gesture within a region (e.g., a room or a portion of a room). When the Wi-Fi device detects that the user has performed a gesture, the Wi-Fi device may determine whether the user made an utterance at approximately the same time as the user performed the gesture. If the Wi-Fi device has a microphone, the Wi-Fi device itself may determine whether the user made an utterance at approximately the same time as the user performed the gesture. If the Wi-Fi device lacks the microphone, the Wi-Fi device may establish a link to a voice assistant device and send a request with the time that the user performed the gesture and ask the voice assistant device to determine whether the user made an utterance at approximately the same time as the user performed the gesture. If the Wi-Fi device determines that the user made an utterance at approximately the same time as the
user performed the gesture, the Wi-Fi device may determine whether the utterance includes a trigger word. If the utterance includes a trigger word, the Wi-Fi device may determine a motion associated with the gesture and a direction associated with the gesture. The Wi-Fi device may determine an object that the user desires to control based on the gesture and the direction and, in some cases, the utterance. In some cases, the Wi-Fi device may create an enhanced directive based on the utterance, the gesture, the direction of the gesture, the motion associated with the gesture, and the type of object. In other cases, the Wi-Fi device may send the utterance, the gesture, the direction of the gesture, the motion associated with the gesture, and the type of object to a cloud-based service to create the enhanced directive. The Wi-Fi device may send the enhanced directive to a skills API of the voice assistant device and the voice assistant device may perform an action, such as sending a command to a controller of the object. The command may cause the controller to cause the object to perform the command (e.g., turn on, turn off, decrease X, increase X, or the like). In this way, a user can intuitively control voice-controllable objects using both a gesture and an utterance, with the gesture indicating the object and the utterance indicating an action that the user desires the object to perform. A technical advantage of the system described herein includes the ability of a user to point at an object rather than verbally specifying the object (e.g., “the lamp in the northeast corner of the living room”). Such a system may offer an advantage to users with a speech impediment (or a speech impairment) or those with a limited vocabulary because they can control an object with a gesture and a brief utterance rather than a long utterance.[0088] In the flow diagrams of FIG. 5 and FIG. 6, each block represents one or more operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, cause the processors to perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, modules, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the blocks are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. For discussion purposes, the processes 500 and 600 are described with reference to FIGS. 1,
2, 3, and 4, as described above, although other models, frameworks, systems and environments may be used to implement this process.[0089] FIG. 5 illustrates a process 500 that includes transmitting an enhanced directive to an application programming interface (API) of a voice assistant device, according to aspects of the disclosure. The process 500 may be performed by the Wi-Fi device 402 (e.g., a type of UE) of FIG. 4.[0090] At 502, the Wi-Fi device may receive, by a microphone of a device, an utterance from a user. For example, in FIG. 4, the Wi-Fi device 402 may receive the utterance 414 from the microphone 404 or from the microphone 431 of the voice assistant device 426 (e.g., via the link 424). In an aspect, 502 may be performed by transceivers 310, 320, processor 332, memory 340, and sensors 344, any or all of which may be considered means for performing this operation.[0091] At 504, the Wi-Fi device may determine, using radio frequency sensing, that the user performed a gesture while making the utterance. For example, in FIG. 4, the Wi-Fi device 402 may determine that the user 412 performed the gesture 416 using the RF sensing module 342 and determine whether the user 412 performed the gesture 416 at approximately the same time as (e.g., within 500 ms before or after) the user made the utterance 414. In an aspect, 504 may be performed by transceivers 310, 320, processor 332, memory 340, and RF sensing module 342, any or all of which may be considered means for performing this operation.[0092] At 506, the Wi-Fi device may determine an object associated with the gesture. For example, in FIG. 4, the Wi-Fi device 402 may determine the object 422 associated with the gesture 416 (e.g., based on the motion 418, the direction 420, the utterance 414, or any combination thereof). In an aspect, 506 may be performed by transceivers 310, 320, processor 332, memory 340, and RF sensing module 342, any or all of which may be considered means for performing this operation.[0093] At 508, the Wi-Fi device may transmit an enhanced directive to an application programming interface (API) of a voice assistant device. The enhanced directive is based on the object, the gesture, and the utterance and causes the smart assistant device to perform an action. For example, in FIG. 4, the Wi-Fi device 402 may transmit the enhanced directive 428 to the skills API 430 of the voice assistant device 426. The enhanced directive 428 may be based on the object 422, the gesture 416, the utterance
414, or any combination thereof. The enhanced directive 428 may cause the voice assistant device 426 to perform the action 432, such as sending the command 433 to the object 422. In an aspect, 508 may be performed by transceivers 310, 320, processor 332, memory 340, and RF sensing module 342, any or all of which may be considered means for performing this operation.[0094] Thus, a Wi-Fi device may receive (via a microphone) an utterance from a user, determine (using RF sensing) that the user performed a gesture while making the utterance, determine an object associated with the gesture, and transmit an enhanced directive to an API of a voice assistant device. The enhanced directive is determined based on the object, the gesture, and the utterance and causes the smart assistant device to perform an action, such as turning on an object, turning off an object, increasing or decreasing a parameter (e.g., temperature, volume, and the like) associated with the object, or another type of action that the object is capable of performing.[0095] As will be appreciated, a technical advantage of the process 500 includes enabling a user to control an object using a gesture and a brief utterance. For example, the user can gesture (e.g., point) at an object rather than verbally specifying a location of the object, thereby enabling a user with a speech impediment, a speech impairment, or with a limited vocabulary to control an object with a gesture and a brief utterance (rather than a long utterance). The user uses the gesture to identify the object and the utterance to specify an action that is to be performed to (or by) the object.[0096] FIG. 6 illustrates a process 600 that includes interaction between a Wi-Fi device (a type of UE) and a voice assistant device (a type of UE), according to aspects of the disclosure. In some cases, a portion of the process 600 may be performed by the Wi-Fi device 402 and a portion of the process 600 may be performed by the voice assistant device 426.[0097] At 602, the Wi-Fi device 402 determines, using radio frequency sensing, that a user performed a gesture. For example, in FIG. 4, the Wi-Fi device 402 uses the RF sensing module 342 to monitor the region 410 and determine when the user 412 has performed the gesture 416. In an aspect, 602 may be performed by transceivers 310, 320, processor 332, memory 340, and RF sensing module 342, any or all of which may be considered means for performing this operation.[0098] At 604, the Wi-Fi device 402 enters a gesture mode and creates a link to a voice assistant device. For example, in FIG. 4, the Wi-Fi device 402 may enter a gesture mode and
establish the link 424 between the Wi-Fi device 402 and the voice assistant device 426. In an aspect, 604 may be performed by transceivers 310, 320, processor 332, and memory 340, any or all of which may be considered means for performing this operation.[0099] At 606, the voice assistant device 426 may capture an utterance of the user using a microphone. For example, in FIG. 4, the voice assistant device 426 may capture the utterance 414 using the microphone 431. In an aspect, 606 may be performed by transceivers 310, 320, processor 332, sensors 344, and memory 340, any or all of which may be considered means for performing this operation.[0100] At 608, the Wi-Fi device 402 may capture (using a microphone) an utterance of a user or may receive (e.g., via the link) the utterance from the voice assistant device. For example, in FIG. 4, the Wi-Fi device 402 may capture the utterance 414 via the microphone 404 or the Wi-Fi device 402 may receive (e.g., via the link 424) the utterance 414 from the voice assistant device 426. In an aspect, 608 may be performed by transceivers 310, 320, processor 332, sensors 344, and memory 340, any or all of which may be considered means for performing this operation.[0101] At 610, the voice assistant device 426 determines a speech command based on the utterance. For example, in FIG. 4, the voice assistant device 426 may determine a speech command (e.g., the action 432) based on the utterance 414 or use the cloud-based service 436 to determine the speech command. In an aspect, 610 may be performed by transceivers 310, 320, processor 332, and memory 340, any or all of which may be considered means for performing this operation.[0102] At 612, the Wi-Fi device 402 determines an object associated with the gesture, interprets the gesture as a skill (e.g., associated with the object), and creates an enhanced directive. For example, in FIG. 4, the Wi-Fi device 402 determines the object 422 associated with the gesture 416, interprets the gesture 416 as a skill associated with the object 422, and creates (or uses the cloud-based service 436 to determine) the enhanced directive 428. In an aspect, 612 may be performed by transceivers 310, 320, processor 332, memory 340, and RF sensing module 342, any or all of which may be considered means for performing this operation.[0103] At 614, the Wi-Fi device 402 transmits the enhanced directive to an application programming interface (API) of the voice assistant device. For example, in FIG. 4, the Wi-Fi device 402 sends (e.g., via the link 424) the enhanced directive 428 to the skills
API 430 of the voice assistant device 426. In an aspect, 614 may be performed by transceivers 310, 320, processor 332, and memory 340, any or all of which may be considered means for performing this operation.[0104] At 616, the voice assistant device 426 receives the enhanced directive via the API and performs an action. For example, in FIG. 4, the voice assistant device 426 receives the enhanced directive 428 via the skills API 430. The enhanced directive 428 causes the voice assistant device 426 to perform the action 432. The action 432 may, for example, include sending the command 433 to the object 422. In an aspect, 616 may be performed by transceivers 310, 320, processor 332, and memory 340, any or all of which may be considered means for performing this operation.[0105] Thus, a Wi-Fi device may determine, using radio frequency sensing, that a user performed a gesture, enter a gesture mode, and create a link to a voice assistant device. The voice assistant device may receive (via a microphone) an utterance from a user. The Wi-Fi device may receive the utterance from the voice assistant device. The Wi-Fi device may determine an object associated with the gesture, interpret the gesture as a skill (associated with the object), and create an enhanced directive. The Wi-Fi device sends the enhanced directive to a skills API of the voice assistant device. The enhanced directive causes the smart assistant device to perform an action, such as turning on an object, turning off an object, increasing or decreasing a parameter (e.g., temperature, volume, and the like) associated with the object, or causing the object to perform another type of action that the object is capable of performing.[0106] As will be appreciated, a technical advantage of the process 600 includes enabling a user to control an object using a gesture that identifies an object and an utterance that specifies an action that is to be performed to (or by) the object. For example, the user can gesture (e.g., point) at an object rather than verbally specifying a location of the object, thereby enabling a user with a speech impediment, a speech impairment, or with a limited vocabulary to control an object with a gesture and a brief utterance (rather than a long utterance).[0107] In the detailed description above it can be seen that different features are grouped together in examples. This manner of disclosure should not be understood as an intention that the example clauses have more features than are explicitly mentioned in each clause. Rather, the various aspects of the disclosure may include fewer than all features of an individual
example clause disclosed. Therefore, the following clauses should hereby be deemed to be incorporated in the description, wherein each clause by itself can stand as a separate example. Although each dependent clause can refer in the clauses to a specific combination with one of the other clauses, the aspect(s) of that dependent clause are not limited to the specific combination. It will be appreciated that other example clauses can also include a combination of the dependent clause aspect(s) with the subject matter of any other dependent clause or independent clause or a combination of any feature with other dependent and independent clauses. The various aspects disclosed herein expressly include these combinations, unless it is explicitly expressed or can be readily inferred that a specific combination is not intended (e.g., contradictory aspects, such as defining an element as both an insulator and a conductor). Furthermore, it is also intended that aspects of a clause can be included in any other independent clause, even if the clause is not directly dependent on the independent clause. Implementation examples are described in the following numbered clauses:[0108] Clause 1. A method for instructing a smart assistant device to perform an action, the method comprising: receiving, by a microphone, an utterance from a user; determining, using radio frequency sensing, that the user performed a gesture while making the utterance; determining an object associated with the gesture; and transmitting an enhanced directive to an application programming interface (API) of a smart assistant device, the enhanced directive based on the object, the gesture, and the utterance, wherein the enhanced directive causes the smart assistant device to perform an action.[0109] Clause 2. The method of clause 1, further comprising: determining that the utterance includes a trigger word.[0110] Clause 3. The method of any of clauses 1 to 2, further comprising: determining a motion associated with the gesture; determining a direction of the motion; and identifying the object associated with the gesture based on the direction of the motion.[0111] Clause 4. The method of any of clauses 1 to 3, further comprising: determining a motion associated with the gesture; determining a relative amount associated with the motion; converting the relative amount to an amount that is understood by the object; and including the amount in the enhanced directive.[0112] Clause 5. The method of clause 4, wherein determining the relative amount associated with the motion comprises one of: determining a first distance between a thumb and a forefinger
of a hand of the user; determining a second distance between a left palm and a right palm of the user; or determining a third distance between a starting position of the gesture and an ending position of the gesture.[0113] Clause 6. The method of any of clauses 1 to 5, further comprising: creating a link between a device and the smart assistant device.[0114] Clause 7. The method of any of clauses 1 to 6, wherein: the gesture comprises pointing or gesturing towards the object; and the utterance comprises the action associated with the object.[0115] Clause 8. The method of any of clauses 6 to 7, wherein the action comprises on, off, dim, brighten, increase, decrease, play, stop, pause, positioning of an audio object, or any combination thereof.[0116] Clause 9. The method of any of clauses 1 to 8, wherein the object comprises: a light source, a media playback device, a set of blinds or shutters, a controllable object, a heating ventilation air conditioning (HVAC) controller, or any combination thereof.[0117] Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0118] Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
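As a non-normative sketch of how the steps of process 500 (and clauses 1 through 5 above) might be composed in software, the following Python fragment strings together trigger-word checking, object resolution from the gesture direction, and transmission of an enhanced directive to a skills API. Every name, data structure, and callable here is hypothetical; in particular, resolve_object and skills_api stand in for the RF-sensing and voice-assistant interfaces, which the disclosure does not define at this level of detail.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EnhancedDirective:
    # Bundles the object, gesture, and utterance (clause 1), plus an optional
    # relative amount of motion converted for the object (clauses 4 and 5).
    object_id: str
    gesture: str
    utterance: str
    amount: Optional[float] = None

def handle_gesture_event(utterance, gesture, direction, motion_amount,
                         resolve_object, skills_api,
                         trigger_words=("this", "that", "here", "there")):
    # Steps 502/504 are assumed to have already detected the utterance and the
    # gesture at approximately the same time.
    words = utterance.lower().split()
    if not any(t in words for t in trigger_words):
        return None                        # no trigger word: ignore the gesture (clause 2)
    obj = resolve_object(direction)        # step 506: map the gesture direction to an object
    if obj is None:
        return None
    directive = EnhancedDirective(object_id=obj, gesture=gesture,
                                  utterance=utterance, amount=motion_amount)
    skills_api(directive)                  # step 508: transmit to the voice assistant's skills API
    return directive

# Hypothetical usage with stub callables standing in for RF sensing and the skills API.
sent = handle_gesture_event(
    utterance="dim that",
    gesture="point",
    direction=(0.7, 0.1),
    motion_amount=0.4,
    resolve_object=lambda direction: "living_room_lamp",
    skills_api=lambda d: print("sending:", d),
)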
[0119] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example, a combination of a digital signal processor (DSP) and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.[0120] The methods, sequences and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, hard disk, a removable disk, a compact disc (CD) ROM, optical disc, or any other form of storage medium known in the art. An example storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal (e.g., UE). In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.[0121] In one or more example aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other
medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.[0122] While the foregoing disclosure shows illustrative aspects of the disclosure, it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the aspects of the disclosure described herein need not be performed in any particular order. Furthermore, although elements of the disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. |
Methods and devices provide an efficient user interface for activating a function by detecting a tickle gesture on a touch surface of a computing device. The tickle gesture may include short strokes in approximately opposite directions traced on a touch surface, such as a touchscreen or touchpad. The activated function may open an application or activate a search function. An index menu item may change based on the location and/or movement of the touch on the touch surface. Such functionality may show search results based on the menu item displayed before the user's finger was lifted from the touch surface.
CLAIMS What is claimed is: 1. A method for providing a user interface gesture function on a computing device, comprising: detecting a touch path event on a user interface device; determining whether the touch path event is a tickle gesture; and activating a function associated with the tickle gesture when it is determined that the touch path event is the tickle gesture. 2. The method of claim 1, wherein determining whether the touch path event is a tickle gesture comprises: determining that the touch path event traces an approximately linear path; detecting a reversal in direction of the touch path event; determining a length of the touch path event in each direction; and determining a number of times the direction of the touch path event reverses. 3. The method of claim 2, wherein detecting a reversal in the direction of the touch path event comprises: detecting whether a current direction of the touch path event is between approximately 160° and approximately 200° of a previous path direction within the touch path event. 4. The method of claim 2, further comprising: comparing the length of the touch path event in each direction to a predefined length. 5. The method of claim 2, further comprising: comparing the number of times the direction of the touch path event reverses to a predefined number. 6. The method of claim 2, wherein determining the length of the touch path event in each direction comprises: detecting an end of the touch path event. 7. The method of claim 1, wherein activating a function associated with the tickle gesture comprises: activating a menu function including a menu selection item; and displaying the menu selection item. 8. The method of claim 7, further comprising: determining a location of the touch path event in the user interface display; displaying the menu selection item based on the determined touch path event location; determining when the touch path event is ended; and activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. 9. The method of claim 7, further comprising: determining a location of the touch path event in the user interface display; detecting a motion associated with the touch path event; displaying the menu selection items based on the determined touch path event motion and location; determining when the touch path event is ended; and activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. 10. A computing device, comprising: a processor;a user interface pointing device coupled to the processor; a memory coupled to the processor; and a display coupled to the processor, wherein the processor is configured to perform processes comprising: detecting a touch path event on a user interface device; determining whether the touch path event is a tickle gesture; and activating a function associated with the tickle gesture when it is determined that the touch path event is the tickle gesture. 11. The computing device of claim 10, wherein the processor is configured to perform processes such that determining whether the touch path event is a tickle gesture comprises: determining that the touch path event traces an approximately linear path; detecting a reversal in direction of the touch path event; determining a length of the touch path event in each direction; and determining a number of times the direction of the touch path event reverses. 12. 
The computing device of claim 11, wherein the processor is configured to perform processes such that detecting a reversal in the direction of the touch path event comprises: detecting whether a current direction of the touch path event is between approximately 160° and approximately 200° of a previous path direction within the touch path event. 13. The computing device of claim 11, wherein the processor is configured to perform further processes comprising: comparing the length of the touch path event in each direction to a predefined length. 14. The computing device of claim 11, wherein the processor is configured to perform further processes comprising: comparing the number of times the direction of the touch path event reverses to a predefined number. 15. The computing device of claim 11, wherein the processor is configured to perform processes such that determining the length of the touch path event in each direction comprises: detecting an end of the touch path event. 16. The computing device of claim 10, wherein the processor is configured to perform processes such that activating a function associated with the tickle gesture comprises: activating a menu function including a menu selection item; and displaying the menu selection item. 17. The computing device of claim 16, wherein the processor is configured to perform further processes comprising: determining a location of the touch path event in the user interface display; displaying the menu selection item based on the determined touch path event location; determining when the touch path event is ended; and activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. 18. The computing device of claim 16, wherein the processor is configured to perform further processes comprising: determining a location of the touch path event in the user interface display; detecting a motion associated with the touch path event; displaying the menu selection items based on the determined touch path event motion and location; determining when the touch path event is ended; and activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. 19. A computing device, comprising: means for detecting a touch path event on a user interface device; means for determining whether the touch path event is a tickle gesture; and means for activating a function associated with the tickle gesture when it is determined that the touch path event is the tickle gesture. 20. The computing device of claim 19, further comprising: means for determining that the touch path event traces an approximately linear path; means for detecting a reversal in direction of the touch path event; means for determining a length of the touch path event in each direction; and means for determining a number of times the direction of the touch path event reverses. 21. The computing device of claim 20, wherein means for detecting a reversal in direction of the touch path event comprises means for detecting whether a current direction of the touch path event is between approximately 160° and approximately 200° of a previous path direction within the touch path event. 22. The computing device of claim 20, further comprising: means for comparing the length of the touch path event in each direction to a predefined length. 23. 
The computing device of claim 20, further comprising: means for comparing the number of times the direction of the touch path event reverses to a predefined number. 24. The computing device of claim 20, wherein means for determining the length of the touch path event in each direction comprises: means for detecting an end of the touch path event. 25. The computing device of claim 19, wherein means for activating a function associated with the tickle gesture comprises: means for activating a menu function including a menu selection item; and means for displaying the menu selection item. 26. The computing device of claim 25, further comprising: means for determining a location of the touch path event in the user interface display; means for displaying the menu selection item based on the determined touch path event location; means for determining when the touch path event is ended; and means for activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. 27. The computing device of claim 25, further comprising: means for determining a location of the touch path event in the user interface display; means for detecting a motion associated with the touch path event; means for displaying the menu selection items based on the determined touch path event motion and location; means for determining when the touch path event is ended; and means for activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. 28. A computer program product, comprising: a computer-readable medium, comprising: at least one instruction for detecting a touch path event on a user interface device; at least one instruction for determining whether the touch path event is a tickle gesture; and at least one instruction for activating a function associated with the tickle gesture when it is determined that the touch path event is a tickle gesture. 29. The computer program product of claim 28, wherein the computer-readable medium further comprises: at least one instruction for determining that the touch path event traces an approximately linear path; at least one instruction for detecting a reversal in direction of the touch path event; at least one instruction for determining the length of the touch path event in each direction; and at least one instruction for determining the number of times the direction of the touch path event reverses. 30. The computer program product of claim 29, wherein the at least one instruction for detecting a reversal in the direction of the touch path event comprises: at least one instruction for detecting whether a current direction of the touch path event is between approximately 160° and approximately 200° of a previous path direction within the touch path event. 31. The computer program product of claim 29, wherein the computer-readable medium further comprises: at least one instruction for comparing the length of the touch path event in each direction to a predefined length. 32. The computer program product of claim 29, wherein the computer-readable medium further comprises: at least one instruction for comparing the number of times the direction of the touch path event reverses to a predefined number. 33. 
The computer program product of claim 29, wherein at least one instruction for determining the length of the touch path event in each direction comprises: at least one instruction for detecting an end of the touch path event. 34. The computer program product of claim 28, wherein at least one instruction for activating a function associated with the tickle gesture comprises: at least one instruction for activating a menu function including a menu selection item; and at least one instruction for displaying the menu selection item. 35. The computer program product of claim 34, wherein the computer-readable medium further comprises: at least one instruction for determining a location of the touch path event in the user interface display; at least one instruction for displaying the menu selection item based on the determined touch path event location; at least one instruction for determining when the touch path event is ended; and at least one instruction for activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. 36. The computer program product of claim 34, wherein the computer-readable medium further comprises: at least one instruction for determining a location of the touch path event in the user interface display; at least one instruction for detecting a motion associated with the touch path event; at least one instruction for displaying the menu selection items based on the determined touch path event motion and location; at least one instruction for determining when the touch path event is ended; and at least one instruction for activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended.
USER INTERFACE METHODS PROVIDING SEARCHING FUNCTIONALITY FIELD OF THE INVENTION [0001] The present invention relates generally to computer user interface systems and more particularly to user systems providing a search function. BACKGROUND [0002] Personal electronic devices (e.g. cell phones, PDAs, laptops, gaming devices) provide users with increasing functionality and data storage. Personal electronic devices serve as personal organizers, storing documents, photographs, videos, and music, as well as serving as portals to the Internet and electronic mail. In order to fit within the small displays of such devices, documents (e.g., music files and contact lists) are typically displayed in a viewer that can be controlled by a scrolling function. In order to view all or parts of a document or parse through a list of digital files, typical user interfaces permit users to scroll up or down by using a scroll bar, using a pointing device function such as a mouse pad or track ball. Another known user interface mechanism for activating the scroll function is a unidirectional vertical swipe movement of one finger on a touchscreen display as implemented on the Blackberry Storm® mobile device. However, such scroll methods for viewing documents and images can be difficult and time consuming, particularly to accomplish quick and accurate access to different parts of a large document or extensive lists. This is particularly the case in small portable computing devices whose usefulness depends upon the scrolling function given their small screen size. SUMMARY [0003] The various aspects include methods for providing a user interface gesture function on a computing device including detecting a touch path event on a user interface device, determining whether the touch path event is a tickle gesture, andactivating a function associated with the tickle gesture when it is determined that the touch path event is a tickle gesture. Determining whether the touch path event is a tickle gesture may include determining that the touch path event traces an approximately linear path, detecting a reversal in direction of the touch path event, determining a length of the touch path event in each direction, and determining a number of times the direction of the touch path event reverses. Detecting a reversal in the direction of the touch path event may include detecting whether the reversal in the direction of the touch path event is to an approximately opposite direction. The various aspects may also provide a method for providing a user interface gesture function on a computing device, including comparing the length of the touch path event in each direction to a predefined length. The various aspects may also include a method for providing a user interface gesture function on a computing device including comparing the number of times the direction of the touch path event reverses to a predefined number. Determining the length of the touch path event in each direction may include detecting the end of a touch path event. Activating a function associated with the tickle gesture may include activating a menu function including a menu selection item, and displaying the menu selection item. 
Activating a function associated with the tickle gesture may also include determining a location of the touch path event in the user interface display, displaying the menu selection item based on the determined touch path event location, determining when the touch path event is ended, and activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. Activating a function associated with the tickle gesture may also include determining a location of the touch path event in the user interface display, detecting a motion associated with the touch path event, displaying the menu selection items based on the determined touch path event motion and location, determining when the touch path event is ended, and activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended.[0004] In an aspect a computing device may include a processor, a user interface pointing device coupled to the processor, a memory coupled to the processor, and a display coupled to the processor, in which the processor is configured to detect a touch path event on a user interface device, determine whether the touch path event is a tickle gesture, and activate a function associated with the tickle gesture when it is determined that the touch path event is a tickle gesture. The processor may determine whether the touch path event is a tickle gesture by determining that the touch path event traces an approximately linear path, detecting a reversal in direction of the touch path event, determining a length of the touch path event in each direction, and determining a number of times the direction of the touch path event reverses. The processor may detect a reversal in the direction of the touch path event by detecting whether the direction of the touch path event is approximately opposite that of a prior direction. The processor may also be configured to compare the length of the touch path event in each direction to a predefined length. The processor may also be configured to compare the number of times the direction of the touch path event reverses to a predefined number. The processor may determine the length of the touch path event in each direction by detecting the end of a touch path event. Activating a function associated with the tickle gesture may include activating a menu function including a menu selection item, and displaying the menu selection item. The processor may also be configured to determine a location of the touch path event in the user interface display, display the menu selection item based on the determined touch path event location, determine when the touch path event is ended, and activate the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. 
The processor may also be configured to detect a motion associated with the touch path event, display the menu selection items based on the determined touch path event motion and location, determine when the touch path event is ended, and activate the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. [0005] In an aspect, a computing device includes a means for detecting a touch path event on a user interface device, a means for determining whether the touch path event is a tickle gesture, and a means for activating a function associated with the tickle gesture when it is determined that the touch path event is a tickle gesture. The computing device may further include a means for determining that the touch path event traces an approximately linear path, a means for detecting a reversal in direction of the touch path event, a means for determining a length of the touch path event in each direction, and a means for determining a number of times the direction of the touch path event reverses. The reversal in the direction of the touch path event may be in an approximately opposite direction. The computing device may also include a means for comparing the length of the touch path event in each direction to a predefined length. The computing device may also include a means for comparing the number of times the direction of the touch path event reverses to a predefined number. The means for determining the length of the touch path event in each direction may include a means for detecting the end of a touch path event. The means for activating a function associated with the tickle gesture may include a means for activating a menu function including a menu selection item, and a means for displaying the menu selection item. The computing device may also include a means for determining a location of the touch path event in the user interface display, a means for displaying the menu selection item based on the determined touch path event location, a means for determining when the touch path event is ended, and a means for activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. The computing device may also include a means for determining a location of the touch path event in the user interface display, a means for detecting a motion associated with the touch path event, a means for displaying the menu selection items based on the determined touch path event motion and location, a means for determining when the touch path event is ended, and a means for activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. [0006] In an aspect, a computer program product may include a computer-readable medium including at least one instruction for detecting a touch path event on a user interface device, at least one instruction for determining whether the touch path event is a tickle gesture, and at least one instruction for activating a function associated with the tickle gesture when it is determined that the touch path event is a tickle gesture. 
The computer-readable medium may also include at least one instruction for determining that the touch path event traces an approximately linear path, at least one instruction for detecting a reversal in direction of the touch path event, at least one instruction for determining the length of the touch path event in each direction, and at least one instruction for determining the number of times the direction of the touch path event reverses. The at least one instruction for detecting a reversal in the direction of the touch path event may include at least one instruction for detecting whether the reversal in the direction of the touch path event is to an approximately opposite direction. The computer-readable medium may also include at least one instruction for comparing the length of the touch path event in each direction to a predefined length. The computer-readable medium may also include at least one instruction for comparing the number of times the direction of the touch path event reverses to a predefined number. The at least one instruction for determining the length of the touch path event in each direction may include at least one instruction for detecting the end of a touch path event. The at least one instruction for activating a function associated with the tickle gesture may include at least one instruction for activating a menu function including a menu selection item, and at least one instruction for displaying the menu selection item. The computer-readable medium may also include at least one instruction for determining a location of the touch path event in the user interface display, at least one instruction for displaying the menu selection item based on the determined touch path event location, at least one instruction for determining when the touch path event is ended, and at least one instruction for activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. The computer-readable medium may also include at least one instruction for detecting a motion associated with the touch path event, at least one instruction for displaying the menu selection items based on the determined touch path event motion and location, at least one instruction for determining when the touch path event is ended, and at least one instruction for activating the menu selection item associated with the determined touch path event location when it is determined that the touch path event is ended. BRIEF DESCRIPTION OF THE DRAWINGS [0007] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary aspects of the invention. Together with the general description given above and the detailed description given below, the drawings serve to explain features of the invention. [0008] FIG. 1 is a frontal view of a portable computing device illustrating tickle gesture functionality activated by a finger moving in an up and down direction on a touchscreen display according to an aspect. [0009] FIG. 2 is a frontal view of a portable computing device illustrating tickle gesture functionality activated to display an index menu according to an aspect. [0010] FIG. 3 is a frontal view of a portable computing device illustrating navigating an index menu by moving a finger downwards on a touchscreen according to an aspect. [0011] FIG. 4 is a frontal view of a portable computing device illustrating a display of a selected menu item. [0012] FIG. 
5 is a frontal view of a portable computing device illustrating navigating an index menu by moving a finger downwards on a touchscreen according to an aspect. [0013] FIG. 6 is a frontal view of a portable computing device illustrating activating tickle gesture functionality by a finger moving in an up and down direction on a touchscreen display according to an aspect. [0014] FIG. 7 is a frontal view of a portable computing device illustrating a display of an index menu following a tickle gesture according to an aspect. [0015] FIG. 8 is a frontal view of a portable computing device illustrating tickle gesture functionality activated to display an index menu according to an aspect. [0016] FIGs. 9 and 10 are frontal views of a portable computing device illustrating tickle gesture functionality activated to display an index menu according to an aspect. [0017] FIG. 11 is a frontal view of a portable computing device illustrating display of a selected menu item according to an aspect. [0018] FIG. 12 is a frontal view of a portable computing device illustrating display of a tickle gesture visual guide according to an aspect. [0019] FIG. 13 is a system block diagram of a computing device suitable for use with the various aspects. [0020] FIG. 14 is a process flow diagram of an aspect method for activating a tickle gesture function. [0021] FIG. 15 is a process flow diagram of an aspect method for implementing a tickle gesture function user interface using a continuous tickle gesture. [0022] FIG. 16 is a process flow diagram of an aspect method for implementing a tickle gesture function user interface using a discontinuous tickle gesture. [0023] FIG. 17 is a process flow diagram of a method for selecting an index menu item according to the various aspects. [0024] FIG. 18 is a component block diagram of an example portable computing device suitable for use with the various aspects. [0025] FIG. 19 is a circuit block diagram of an example computer suitable for use with the various aspects. DETAILED DESCRIPTION [0026] The various aspects will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes and are not intended to limit the scope of the invention or the claims. [0027] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other implementations. [0028] The term "tickle gesture" is used herein to mean alternating repetitious strokes (e.g., back and forth, up and down, or down-lift-down strokes) performed on a touchscreen user interface. [0029] As used herein, a "touchscreen" is a touch sensing input device or a touch sensitive input device with an associated image display. As used herein, a "touchpad" is a touch sensing input device without an associated image display. A touchpad, for example, can be implemented on any surface of an electronic device outside the image display area. Touchscreens and touchpads are generically referred to herein as a "touch surface." Touch surfaces may be integral parts of an electronic device, such as a touchscreen display, or a separate module, such as a touchpad, which can be coupled to the electronic device by a wired or wireless data link. 
The terms touchscreen, touchpad and touch surface may be used interchangeably hereinafter. [0030] As used herein, the terms "personal electronic device," "computing device" and "portable computing device" refer to any one or all of cellular telephones, personal data assistants (PDAs), palm-top computers, notebook computers, personal computers, wireless electronic mail receivers and cellular telephone receivers (e.g., the Blackberry® and Treo® devices), multimedia Internet enabled cellular telephones (e.g., the Blackberry Storm®), and similar electronic devices that include a programmable processor, memory, and a connected or integral touch surface or other pointing device (e.g., a computer mouse). In an example aspect used to illustrate various aspects of the present invention, the electronic device is a cellular telephone including an integral touchscreen display. However, this aspect is presented merely as one example implementation of the various aspects, and as such is not intended to exclude other possible implementations of the subject matter recited in the claims. [0031] As used herein, a "touch event" refers to a detected user input on a touch surface that may include information regarding the location or relative location of the touch. For example, on a touchscreen or touchpad user interface device, a touch event refers to the detection of a user touching the device and may include information regarding the location on the device being touched. [0032] As used herein, the term "path" refers to a sequence of touch event locations that trace a path within a graphical user interface (GUI) display during a touch event. Also as used herein, the term "path event" refers to a detected user input on a touch surface which traces a path during a touch event. A path event may include information regarding the locations or relative locations (e.g., within a GUI display) of the touch events which constitute the traced path. [0033] The various aspect methods and devices provide an intuitive, easy-to-use touchscreen user interface gesture for performing a function, such as opening an application or activating a search function. Users may perform a tickle gesture on their computing device by touching the touchscreen with a finger and tracing short back-and-forth strokes on the touchscreen. The tickle gesture is performed when a user traces a finger in short strokes in approximately opposite directions (e.g., back and forth or up and down) on the touchscreen display of a computing device. [0034] The processor of a computing device may be programmed to recognize touch path events traced in short, opposite-direction strokes as a tickle gesture and, in response, perform a function linked to or associated with the tickle gesture (i.e., a tickle gesture function). The path traced by a tickle gesture may then be differentiated from other path shapes, such as movement of a finger in one direction on a touchscreen for panning, zooming or selecting. [0035] Functions that may be linked to and initiated by a tickle gesture may include opening an application such as an address book application, a map program, a game, etc. The tickle gesture may also be associated with activating a function within an application. For example, the tickle gesture may activate a search function allowing the user to search a database associated with an open application, such as searching for names in an address book. [0036] Tickle gestures may be traced in different manners. For example, tickle gestures may be continuous or discontinuous. 
In tracing a continuous tickle gesture, a user may maintain contact of his/her finger on the touchscreen display during the entire tickle gesture. Alternatively, the user may discontinuously trace the tickle gesture by touching the touchscreen display in the direction of a tickle gesture stroke. For example, in a discontinuous tickle gesture the user may touch the touchscreen display, trace a downward stroke, and lift his/her finger off the touchscreen display before tracing a second downward stroke (referred to herein as a "down-lift-down" path trace). The computing device processor may be configured to recognize such discontinuous gestures as a tickle gesture. [0037] Parameters such as the length, repetition, and duration of the path traced in a tickle gesture touch event may be measured and used by the processor of a computing device to control the performance of the function linked to, or associated with, the tickle gesture. The processor may be configured to determine whether the path traced does not exceed a predetermined stroke length, and whether the path includes a minimum number of repetitions of tickle gesture strokes within a specified time period. Such parameters may allow the processor to differentiate the tickle gesture from other user interface gestures that may be partly similar to it. For example, a gesture that may activate a panning function may be differentiated from a tickle gesture based on the length of a stroke, since the panning function may require one long stroke of a finger in one direction on a touchscreen display. The length of the strokes of a tickle gesture may be set at an arbitrary length, such as 1 centimeter, so that it does not interfere with other gestures for activating or initiating other functions. [0038] A minimum number of stroke repetitions may be associated with the tickle gesture. The number of stroke repetitions may be set arbitrarily or as a user-settable parameter, and may be selected to avoid confusion with other gestures for activating other functions. For example, the user may be required to make at least five strokes, each less than 1 centimeter long, before the computing device recognizes the touch event as a tickle gesture. [0039] The tickle gesture may also be determined based upon a time limit within which the user must execute the required strokes. The time limit may also be arbitrary or a user-settable parameter. Such time limits may allow the computing device to differentiate the tickle gesture from other gestures which activate different functions. For example, one stroke followed by another stroke more than 0.5 seconds later may be treated as a conventional user gesture, such as panning, whereas one stroke followed by another in less than 0.5 seconds may be recognized as a tickle gesture, causing the processor to activate the linked functionality. The time limit may be imposed as a time-out on the evaluation of a single touch path event such that if the tickle gesture parameters have not been satisfied by the end of the time limit, the touch path is immediately processed as a different gesture, even if the gesture later satisfies the tickle gesture parameters. [0040] In the various aspects, the tickle gesture functionality may be enabled automatically as part of the GUI software. Automatic activation of the tickle gesture functionality may be provided as part of an application. 
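For illustration only, the stroke parameters discussed in paragraphs [0037]-[0039] can be collected into a short Python sketch; the class and function names, the return values, and the defaults (1-centimeter strokes, five repetitions, 0.5-second spacing, and an assumed 2-second overall window) are editorial assumptions chosen to mirror the examples above, not a definitive implementation:

from dataclasses import dataclass

@dataclass
class TickleParams:
    # All values are assumed, tunable thresholds mirroring the examples above.
    max_stroke_len_cm: float = 1.0    # a longer single stroke suggests panning
    min_strokes: int = 5              # minimum number of short strokes required
    max_inter_stroke_s: float = 0.5   # a slower follow-up stroke reads as panning
    time_limit_s: float = 2.0         # assumed overall time-out for the whole gesture

def classify_strokes(strokes, p=TickleParams()):
    """strokes: ordered list of (length_cm, start_time_s) tuples."""
    if len(strokes) < p.min_strokes:
        return "undetermined"         # keep watching the touch path
    lengths = [length for length, _ in strokes]
    times = [start for _, start in strokes]
    if any(length > p.max_stroke_len_cm for length in lengths):
        return "other"                # a long stroke is handled as a normal gesture
    if any(t2 - t1 > p.max_inter_stroke_s for t1, t2 in zip(times, times[1:])):
        return "other"                # strokes spaced too far apart in time
    if times[-1] - times[0] > p.time_limit_s:
        return "other"                # the required strokes were not completed in time
    return "tickle"

A real implementation would first convert touch coordinates from pixels to centimeters using the panel's reported resolution before applying the length threshold.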
[0041] In some aspects, the tickle gesture functionality may be automatically disabled by an application that employs user interface gestures that might be confused with the tickle gesture. For example, a drawing application may deactivate the tickle gesture so that drawing strokes are not misinterpreted as a tickle gesture. [0042] In some aspects, the tickle gesture may be manually enabled. To manually enable or activate the tickle gesture in an application, a user may select and activate the tickle gesture by pressing a button or by activating an icon on a GUI display. For example, the index operation may be assigned to a soft key, which the user may activate (e.g., by pressing or clicking) to launch the tickle gesture functionality. As another example, the tickle gesture functionality may be activated by a user command. For example, the user may use a voice command such as "activate index" to enable the tickle gesture functionality. Once activated, the tickle gesture functionality may be used in the manner described herein. [0043] The tickle gesture functionality may be implemented on any touch surface. In a particularly useful implementation, the touch surface is a touchscreen display since touchscreens are generally superimposed on a display image, enabling users to interact with the display image with the touch of a finger. In such applications, the user interacts with an image by touching the touchscreen display with a finger and tracing back and forth or up and down paths. Processes for the detection and acquisition of touchscreen display touch events (i.e., detection of a finger touch on a touchscreen) are well known, an example of which is disclosed in U.S. Patent No. 6,323,846, the entire contents of which are hereby incorporated by reference.[0044] When the required tickle gesture parameters are detected, the linked gesture function may be activated. The function linked to, or associated with, the tickle gesture may include opening an application or activating a search function. If the linked function is opening an application, the computing device processor may open the application and display it to the user on the display, in response to the user tracing a tickle gesture that satisfies the required parameters. [0045] If the linked function is activating a search functionality, when the required tickle gesture parameters are detected, the processor may generate a graphical user interface display that enables the user to conduct a search in the current application. Such a graphical user interface may include an index, which may be used to search a list of names, places, or topics arranged in an orderly manner. For example, when searching an address book, the search engine may display to the user an alphabetically arranged index of letters. A user may move between different alphabet letters by tracing his/her finger in one direction or the other on the touchscreen display. Similarly, when searching a document or a book, an index may include a list of numerically arranged chapter numbers for the document or book. In that case a user may navigate the chapters by tracing a path on a touchscreen or touch surface while the search function is activated. [0046] FIG. 1 shows an example computing device 100 that includes a touchscreen display 102 and function keys 106 for interfacing with a graphical user interface. In the illustrated example, the computing device 100 is running an address book application which displays the names of several contacts on the touchscreen display 102. 
The names in the address book may be arranged alphabetically. To access a name, the address book application may allow the user to scroll down an alphabetically arranged list of names. Alternatively, the address book application may enable the user to enter a name in the search box 118 that the application uses to search the address book database. These methods may be time consuming for the user. Scrolling down a long list of names may take a long time in large databases. Similarly, searching for a name using the search function also takes time, as the user must enter the search term and perform additional steps. For example, to search a name database using the search box 118, the user must type in the name, activate the search function, access another page with the search results, and select the name. Further, in many applications or user interface displays, typing an entry also involves activating a virtual keyboard or pulling out a hard keyboard and changing the orientation of the display. [0047] In an aspect, a user may activate a search function for searching the address book application by touching the touchscreen with a finger 108, for example, and moving the finger 108 to trace a tickle gesture. An example direction and the general shape of the path that a user may trace to make a tickle gesture are shown by the dotted line 110. The dotted line 110 is shown to indicate the shape and direction of the finger 108 movement and is not included as part of the touchscreen display 102 in the aspect illustrated in FIG. 1. [0048] As illustrated in FIG. 2, once the search functionality is activated by a tickle gesture, an index menu 112 may be displayed. The index menu 112 may allow the user to search through the names in the address book by displaying an alphabetical tab 112a. As the user's finger 108 moves up or down, alphabet letters may be shown in sequence in relation to the vertical location of the finger touch. FIG. 2 shows the finger 108 moving downwards, as indicated by the dotted line 110. [0049] As illustrated in FIG. 3, when the user's finger 108 stops, the index menu 112 may display an alphabet tab 112a in relation to the vertical location of the finger touch on the display. To jump to a listing of names beginning with a particular letter, the user moves his/her finger 108 up or down until the desired alphabet tab 112a is displayed, at which time the user may pause (i.e., stop moving the finger on the touchscreen display). In the example shown in FIG. 3, the letter "O" tab is presented, indicating that the user may jump to contact records for individuals whose name begins with the letter "O". [0050] To jump to a listing of names beginning with the letter on a displayed tab, the user lifts his/her finger 108 off of the touch surface. The result is illustrated in FIG. 4, which shows the results of lifting the finger 108 from the touchscreen display 102 while the letter "O" is displayed in the alphabetical tab 112a. In this example, the computing device 100 displays the names in the address book that begin with the letter "O". [0051] The speed at which the user traces a path while using the index menu may determine the level of information detail that may be presented to the user. Referring back to FIG. 3, the alphabetical tab 112a may only display the letter "O" when the user traces his/her finger 108 up or down the touchscreen display 102 in a fast motion. In an aspect illustrated in FIG. 
5, the user may trace his/her finger 108 up or down the touchscreen display 102 at a medium speed to generate a display with more information in the alphabetical tab 112a, such as "Ob," which includes the first and second letters of a name in the address book database. When the user lifts his/her finger 108 from the touchscreen display 102 (as shown in FIG. 4), the computing device 100 may display all the names that begin with the displayed two letters. [0052] In a further aspect illustrated in FIG. 6, the user may trace his/her finger 108 down the touchscreen display 102 at a slow speed to generate a display with even more information on the alphabetical tab 112a, such as the entire name of a particular contact record. When the user lifts his/her finger 108 from the touchscreen display 102, the computing device 100 may display a list of contacts with the selected name (as shown in FIG. 4), or open the data record of the selected name if there is only a single contact with that name. [0053] FIGs. 7 and 8 illustrate the use of the tickle gesture to activate search functionality within a multimedia application. In the example implementation, when a user's finger 108 traces a tickle gesture on the touchscreen display 102 while watching a movie, as shown in FIG. 7, a video search functionality may be activated. As illustrated in FIG. 8, activation of the search functionality while watching a movie may activate an index menu 112, including movie frames and a scroll bar 119 to allow the user to select a point in the movie to watch. In this index menu, the user may navigate back and forth through the movie frames to identify the frame from which the user desires to resume watching the movie. Other panning gestures may also be used to navigate through the movie frames. Once a desired movie frame is selected, for example by bringing the desired frame to the foreground, the user may exit the index menu 112 screen by, for example, selecting an exit icon 200, or repeating the tickle gesture. Closing the search functionality by exiting the index menu 112 may initiate the video from the point selected by the user from the index menu 112, which is illustrated in FIG. 11. [0054] In another example illustrated in FIG. 9, the tickle gesture in a movie application may activate a search function that generates an index menu 112 including movie chapters in a chapter tab 112a. For example, once the search function is activated by a tickle gesture, the current movie chapter may appear (as in the example shown in FIG. 8). As the user moves his/her finger 108 up or down, the chapter number related to the vertical location of the finger 108 touch may appear in the chapter tab 112a. FIG. 10 illustrates this functionality as the user's finger 108 has reached the top of the display 104, so the chapter tab 112a has changed from chapter 8 to chapter 1. By lifting the finger 108 from the touchscreen display 102, the user instructs the computing device 100, in this search function, to rewind the movie back to the chapter corresponding to the chapter tab 112a. In this example, the movie will start playing from chapter 1, which is illustrated in FIG. 11. [0055] In an alternative aspect, the tickle gesture functionality within the GUI may be configured to display a visual aid within the GUI display to assist the user in tracing a tickle gesture path. For example, as illustrated in FIG. 
12, when the user begins to trace a tickle gesture, a visual guide 120 may be presented on the touchscreen display 102 to illustrate the path and path length that the user should trace to activate the tickle gesture function. [0056] The GUI may be configured so the visual guide 120 is displayed in response to a number of different triggers. In one implementation, the visual guide 120 may appear on the touchscreen display 102 in response to the touch of the user's finger. In this case, the visual guide 120 may appear each time the tickle gesture functionality is enabled and the user touches the touchscreen display 102. In a second implementation, the visual guide 120 may appear in response to the user touching and applying pressure to the touchscreen display 102 or a touchpad. In this case, just touching the touchscreen display 102 (or a touchpad) and tracing a tickle gesture will not cause a visual guide 120 to appear, but the visual guide 120 will appear if the user touches and presses the touchscreen display 102 or touchpad. In a third implementation, a soft key may be designated which, when pressed by the user, initiates display of the visual guide 120. In this case, the user may view the visual guide 120 on the touchscreen display 102 by pressing the soft key, and then touch the touchscreen to begin tracing the shape of the visual guide 120 in order to activate the function linked to, or associated with, the tickle gesture. In a fourth implementation, the visual guide 120 may be activated by voice command, in the manner of other voice-activated functions that may be implemented on the portable computing device 100. In this case, when the user's voice command is received and recognized by the portable computing device 100, the visual guide 120 is presented on the touchscreen display 102 to serve as a visual aid or guide for the user. [0057] The visual guide 120 implementation description provided above is only one example of visual aids that may be implemented as part of the tickle gesture functionality. As such, these examples are not intended to limit the scope of the present invention. Further, the tickle gesture functionality may be configured to enable users to change the display and other features of the function, based on their individual preferences, by using known methods. For example, users may turn off the visual guide 120 feature, or configure the tickle gesture functionality to show a visual guide 120 only when the user touches and holds a finger in one place on the touchscreen for a period of time, such as more than 5 seconds. [0058] FIG. 13 illustrates a system block diagram of software and/or hardware components of a computing device 100 suitable for use in implementing the various aspects. The computing device 100 may include a touch surface 101, such as a touchscreen or touchpad, a display 104, a processor 103, and a memory device 105. In some computing devices 100, the touch surface 101 and the display 104 may be the same device, such as a touchscreen display 102. Once a touch event is detected by the touch surface 101, information regarding the position of the touch is provided to the processor 103 on a near-continuous basis. The processor 103 may be programmed to receive and process the touch information and recognize a tickle gesture from, for example, an uninterrupted stream of touch location data received from the touch surface 101. 
The processor 103 may also be configured to recognize the path traced during a tickle gesture touch event by, for example, noting the location of the touch at each instant and the movement of the touch location over time. Using such information, the processor 103 can determine the traced path length and direction, and from this information recognize a tickle gesture based upon the path length, direction, and repetition. The processor 103 may also be coupled to memory 105 that may be used to store information related to touch events, traced paths, and image processing data. [0059] FIG. 14 illustrates a process 300 for activating the tickle gesture function on a computing device 100 equipped with a touchscreen display 102. In process 300 at block 302, the processor 103 of a computing device 100 may be programmed to receive touch events from the touchscreen display 102, such as in the form of an interrupt or message indicating that the touchscreen display 102 is being touched. At decision block 304, the processor 103 may then determine whether the touch path event is a tickle gesture based on the touch path event data. If the touch path event is determined not to be a tickle gesture (i.e., decision block 304 = "No"), the processor 103 may continue with normal GUI functions at block 306. If the touch path event is determined to be a tickle gesture (i.e., decision block 304 = "Yes"), the processor 103 may activate a function linked to or associated with the tickle gesture at block 308. [0060] FIG. 15 illustrates an aspect process 400 for detecting continuous tickle gesture touch events. In process 400 at block 302, the processor 103 may be programmed to receive touch path events, and determine whether the touch path event is a new touch at decision block 402. If the touch path event is determined to be from a new touch (i.e., decision block 402 = "Yes"), the processor 103 may determine the touch path event location on the touchscreen display 102 at block 404, and store the touch path event location data at block 406. If the touch path event is determined not to be from a new touch (i.e., decision block 402 = "No"), the processor continues to store the location of the current touch path event at block 406. [0061] In determining whether the touch path event is a continuous tickle gesture, and to differentiate a tickle gesture from other GUI functions, the processor 103 may be programmed to identify different touch path event parameters based on predetermined measurements and criteria, such as the shape of the path event, the length of the path event in each direction, the number of times a path event reverses directions, and the duration of time in which the path events occur. For example, in process 400 at block 407, the processor 103 may determine the direction traced in the touch path event, and at decision block 408, determine whether the touch path event is approximately linear. While users may attempt to trace a linear path with their fingers, such traced paths will inherently depart from a purely linear path due to variability in human movements and to variability in touch event locations, such as that caused by varying touch areas and shapes due to varying touch pressure. Accordingly, as part of decision block 408, the processor may analyze the stored touch events to determine whether they are approximately linear within a predetermined tolerance. 
For example, the processor may compute a center point of each touch event, trace the path through the center points of a series of touch events representing a tickle stroke, apply a tolerance to each point, and determine whether the points form an approximately linear path within the tolerance. As another example, the processor may compute a center point of each touch event, trace the path through the center points of a series of touch events representing a tickle stroke, define a straight line that best fits the center points (e.g., by using a least squares fit), and then determine whether the deviation from the best fit straight line fits all of the points within a predefined tolerance (e.g., by calculating a variance for the center points), or determine whether points near the end of the path depart further from the best fit line than do points near the beginning (which would indicate the path is curving). The tolerances used to determine whether a traced path is approximately linear may be predefined, such as plus or minus ten percent (10%). Since any disruption caused by an inadvertent activation of a search menu (or other function linked to the tickle gesture) may be minor, the tolerance used for determining whether a traced path is approximately linear may be relatively large, such as thirty percent (30%), without degrading the user experience. [0062] In analyzing the touch path event to determine whether the path is approximately linear (decision block 408) and reverses direction a predetermined number of times (decision blocks 416 and 418), the processor will analyze a series of touch events (e.g., one every few milliseconds, consistent with the touch surface refresh rate). Thus, the processor will continue to receive and process touch events in blocks 302, 406, 407 until the tickle gesture can be distinguished from other gestures and touch surface interactions. One way the processor can distinguish other gestures is if they depart from being approximately linear. Thus, if the touch path event is determined not to be approximately linear (i.e., decision block 408 = "No"), the processor 103 may perform normal GUI functions at block 410, such as zooming or panning. However, if the touch path event is determined to be approximately linear (i.e., decision block 408 = "Yes"), the processor 103 may continue to evaluate the touch path traced by received touch events to consider other bases for differentiating the tickle gesture from other gestures. [0063] A second basis for differentiating the tickle gesture from other touch path events is the length of a single stroke, since the tickle gesture is defined as a series of short strokes. Thus, at decision block 414, as the processor 103 receives each touch event, the processor may determine whether the path length in one direction is less than a predetermined value "x". Such a predetermined path length may be used to allow the processor 103 to differentiate between a tickle gesture and other linear gestures that may include tracing a path event on a touchscreen display 102. If the path length in one direction is greater than the predetermined value "x" (i.e., decision block 414 = "No"), this indicates that the touch path event is not associated with the tickle gesture, so the processor 103 may perform normal GUI functions at block 410. For example, the predetermined value may be 1 centimeter. 
In such a scenario, if the path event length extends beyond 1 cm in one direction, the processor 103 may determine that the path event is not a tickle gesture and perform functions associated with other gestures. [0064] A third basis for differentiating the tickle gesture from other touch path events is whether the path reverses direction. Thus, if the path length in each direction is less than or equal to the predetermined value (i.e., decision block 414 = "Yes"), the processor 103 may continue to evaluate the touch path traced by the received touch events to determine whether the path reverses direction at decision block 416. A reversal in the direction of the traced path may be determined by comparing the direction of the traced path determined in block 407 to a determined path direction in the previous portion of the traced path to determine whether the current path direction is approximately 180 degrees from that of the previous direction. Since there is inherent variability in human actions and in the measurement of touch events on a touch surface, the processor 103 may determine that a reversal in path direction has occurred when the direction of the path is between approximately 160° and approximately 200° of the previous direction within the same touch path event. If the processor 103 determines that the touch path does not reverse direction (i.e., decision block 416 = "No"), the processor 103 may continue receiving and evaluating touch events by returning to block 302. The process 400 may continue in this manner until the path departs from being approximately linear (i.e., decision block 408 = "No"), a stroke length exceeds the predetermined path length (i.e., decision block 414 = "No"), or the traced path reverses direction (i.e., decision block 416 = "Yes"). [0065] If the touch path event reverses direction (i.e., decision block 416 = "Yes"), the processor 103 may determine whether the number of times the path event has reversed direction exceeds a predefined value ("n") in decision block 418. The predetermined number of times that a path event must reverse direction before the processor 103 recognizes it as a tickle gesture determines how much "tickling" is required to initiate the linked function. If the number of times the touch path event reverses direction is less than the predetermined number "n" (i.e., decision block 418 = "No"), the processor 103 may continue to monitor the gesture by returning to block 302. The process 400 may continue in this manner until the path departs from being approximately linear (i.e., decision block 408 = "No"), a stroke length exceeds the predetermined path length (i.e., decision block 414 = "No"), or the number of times the touch path event reverses direction is equal to the predetermined number "n" (i.e., decision block 418 = "Yes"). When the number of strokes is determined to equal the predetermined number "n", the processor 103 may activate the function linked to the tickle gesture, such as activating a search function at block 420 or opening an application at block 421. For example, when "n" is five direction reversals, the processor 103 may recognize the touch path event as a tickle gesture when it determines that the touch path event traces approximately linear strokes, the length of all strokes is less than 1 cm in each direction, and the path reverses direction at least five times. Instead of counting direction reversals, the processor 103 may count the number of strokes. 
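As a rough illustration of the checks performed in decision blocks 408, 414, 416, and 418, the Python sketch below fits a least-squares line to the center points of each stroke, rejects long or non-linear strokes, and counts direction changes of roughly 160 to 200 degrees as reversals; the function names, the 10% tolerance, the per-stroke input format, and the omission of the optional time limit of decision block 419 are all editorial assumptions rather than the described implementation:

import math

def is_approximately_linear(points, rel_tolerance=0.10):
    # points: list of (x_cm, y_cm) touch-event center points for one stroke.
    n = len(points)
    if n < 3:
        return True
    xs, ys = zip(*points)
    if (max(ys) - min(ys)) > (max(xs) - min(xs)):
        xs, ys = ys, xs                       # swap axes so the fit stays well conditioned
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    var_x = sum((x - mean_x) ** 2 for x in xs) or 1e-9
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / var_x
    intercept = mean_y - slope * mean_x
    worst = max(abs(y - (slope * x + intercept)) for x, y in zip(xs, ys))
    span = max(max(xs) - min(xs), 1e-9)
    return worst / span <= rel_tolerance      # worst deviation relative to stroke extent

def direction_deg(p0, p1):
    return math.degrees(math.atan2(p1[1] - p0[1], p1[0] - p0[0])) % 360.0

def is_reversal(prev_deg, cur_deg):
    # A 160-200 degree change folds to 160-180 degrees once wrapped into [0, 180].
    delta = abs(cur_deg - prev_deg) % 360.0
    delta = min(delta, 360.0 - delta)
    return delta >= 160.0

def detect_continuous_tickle(strokes, max_stroke_len_cm=1.0, min_reversals=5):
    # strokes: list of point lists, one per monotonic segment of the touch path.
    prev_dir, reversals = None, 0
    for pts in strokes:
        if len(pts) < 2 or not is_approximately_linear(pts):
            return False                      # decision block 408 = "No"
        if math.dist(pts[0], pts[-1]) > max_stroke_len_cm:
            return False                      # decision block 414 = "No"
        cur_dir = direction_deg(pts[0], pts[-1])
        if prev_dir is not None and is_reversal(prev_dir, cur_dir):
            reversals += 1                    # decision block 416 = "Yes"
        prev_dir = cur_dir
    return reversals >= min_reversals         # decision block 418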
[0066] Optionally, before determining whether a touch path event is a tickle gesture, the processor 103 may be configured to determine whether the number of direction reversals "n" (or strokes or other parameters) is performed within a predetermined time span "t" in optional decision block 419. If the "n" direction reversals are not performed within the predetermined time limit "t" (i.e., optional decision block 419 = "No"), the processor 103 may perform the normal GUI functions at block 410. If the "n" direction reversals are performed within the time limit "t" (i.e., optional decision block 419 = "Yes"), the processor 103 may activate the function linked with the tickle gesture, such as activating a search function at block 420 or opening an application at block 421. Alternatively, the optional decision block 419 may be implemented as a time-out test that terminates evaluation of the touch path as a tickle gesture (i.e., determines that the traced path is not a tickle gesture) as soon as the time since the new touch event (i.e., when decision block 402 = "Yes") equals the predetermined time limit "t," regardless of whether the number of strokes or direction reversals equals the predetermined minimum associated with the tickle gesture. [0067] FIG. 16 illustrates a process 450 for detecting discontinuous tickle gesture touch events, e.g., a series of down-lift-down strokes. In process 450 at block 302, the processor 103 may be programmed to receive touch path events, and determine whether each touch path event is a new touch at decision block 402. If the touch path event is from a new touch (i.e., decision block 402 = "Yes"), the processor 103 may determine the touch path event start location on the touchscreen display 102 at block 403, and the touch path event end location at block 405, and store the touch path event start and end location data at block 406. If the touch path event is not from a new touch (i.e., decision block 402 = "No"), the processor continues to store the location of the current touch path event at block 406. [0068] In process 450 at decision block 408, the processor 103 may determine whether the touch path event that is being traced by the user on the touchscreen display 102 follows an approximately linear path. If the touch path event being traced by the user is determined not to follow an approximately linear path (i.e., decision block 408 = "No"), the processor 103 may resume normal GUI functions associated with the path being traced at block 410. If the touch path event being traced by the user is determined to follow an approximately linear path (i.e., decision block 408 = "Yes"), the processor 103 may determine the length of the path being traced by the user at decision block 409. The predetermined length "y" may be designated as the threshold length beyond which the processor 103 can exclude the traced path as a tickle gesture. Thus, if the length of the traced path is longer than the predetermined length "y" (i.e., decision block 409 = "No"), the processor 103 may continue normal GUI functions at block 410. If the length of the traced path is determined to be shorter than the predetermined length "y" (i.e., decision block 409 = "Yes"), the processor 103 may determine whether the touch ends at decision block 411. [0069] If the touch event does not end (i.e., decision block 411 = "No"), the processor 103 may perform normal GUI functions at block 410. 
If the touch ends (i.e., decision block 411 = "Yes"), the processor 103 may determine whether the number of paths traced one after another in a series of paths is greater than a predetermined number "p" at decision block 413. The predetermined number of paths traced in a series "p" is the number beyond which the processor 103 can identify the traced path as a tickle gesture. Thus, if the number of traced paths in a series is less than "p" (i.e., decision block 413 = "No"), the processor 103 may continue to monitor touch events by returning to block 302 to receive a next touch event. If the number of traced paths in a series is equal to "p" (i.e., decision block 413 = "Yes"), the processor 103 may determine that the path traces a tickle gesture, and activate the function linked to or associated with the tickle gesture, such as a search function at block 420, or open an application at block 421. [0070] Optionally, if the number of traced paths is greater than "p" (i.e., decision block 413 = "Yes"), the processor 103 may determine whether the time period during which the touch paths have been traced is less than a predetermined time limit "t" at decision block 417. A series of touch path events that take longer than the time limit "t" to satisfy the other parameters of the tickle gesture specification may not be a tickle gesture (e.g., a series of down-panning gestures). Thus, if the processor 103 determines that the touch path events were traced during a time period greater than "t" (i.e., decision block 417 = "No"), the processor 103 may perform the normal GUI functions associated with the traced path at block 410. If the processor 103 determines that the touch path events were performed within the time limit "t" (i.e., decision block 417 = "Yes"), the processor 103 may recognize the touch path events as a tickle gesture and activate the function linked to the gesture, such as activating a search functionality at block 420, or open an application at block 421. [0071] FIG. 17 shows a process 500 for generating a menu for searching a database once a tickle gesture is recognized in block 420 (FIGs. 15 and 16). In process 500 at block 501, once the menu function is activated, the processor may generate an index menu 112 for presentation on the display 104. As part of generating the index menu 112, the processor 103 may determine the location of the touch of the user's finger 108 on the touchscreen at block 502. The processor 103 may also determine the speed at which the touch path event is being traced by the user's finger 108 at block 504. At block 506, the processor may generate a display including an index menu 112 item in a menu tab 112a, for example, based on the location of the touch path event. Optionally, at block 507 the processor may take into account the speed of the touch path event in displaying index menu 112 items. For example, the index menu 112 items may be abbreviated when the touch path event is traced at a high speed, and may include more details when the touch path event is traced at a slower speed. At decision block 508, the processor 103 may determine whether the user's touch ends (i.e., the user's finger is no longer in contact with the touch surface). If the processor determines that the user touch has ended (i.e., decision block 508 = "Yes"), the processor 103 may display information related to the current index menu 112 item at block 510, and close the index menu 112 graphical user interface at block 512. 
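The menu behavior of process 500 can likewise be sketched in a few lines of Python; the IndexMenu class, the speed thresholds used to pick the level of detail at optional block 507, and the bisect-based lookup are illustrative assumptions rather than the implementation described above:

import bisect

class IndexMenu:
    def __init__(self, names, screen_height_px):
        self.names = sorted(names)
        self.height = screen_height_px

    def _name_at(self, y_px):
        # Blocks 502/506: map the vertical touch location onto the sorted list.
        idx = min(int(y_px / self.height * len(self.names)), len(self.names) - 1)
        return self.names[idx]

    def tab_label(self, y_px, speed_px_per_s):
        # Optional block 507: faster tracing shows coarser detail in the tab 112a.
        name = self._name_at(y_px)
        if speed_px_per_s > 800:          # fast trace: first letter only
            return name[:1].upper()
        if speed_px_per_s > 300:          # medium trace: first two letters
            return name[:2].title()
        return name                       # slow trace: the full name

    def on_finger_lift(self, y_px):
        # Blocks 508/510: when the touch ends, jump to entries under the displayed letter.
        prefix = self._name_at(y_px)[:1].upper()
        firsts = [n[:1].upper() for n in self.names]
        start = bisect.bisect_left(firsts, prefix)
        end = bisect.bisect_right(firsts, prefix)
        return self.names[start:end]

menu = IndexMenu(["Adams", "Baker", "O'Brien", "Olsen", "Owens", "Zhang"], 1000)
print(menu.tab_label(500, speed_px_per_s=900))   # fast trace near mid-screen -> "O"
print(menu.on_finger_lift(500))                  # the names beginning with "O"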
[0072] The aspects described above may be implemented on any of a variety of portable computing devices 100. Typically, such portable computing devices 100 will have in common the components illustrated in FIG. 18. For example, the portable computing devices 100 may include a processor 103 coupled to internal memory 105 and a touch surface input device 101 or display 104. The touch surface input device 101 can be any type of touchscreen display 102, such as a resistive-sensing touchscreen, capacitive-sensing touchscreen, infrared-sensing touchscreen, acoustic/piezoelectric-sensing touchscreen, or the like. The various aspects are not limited to any particular type of touchscreen display 102 or touchpad technology. Additionally, the portable computing device 100 may have an antenna 134 for sending and receiving electromagnetic radiation that is connected to a wireless data link and/or cellular telephone transceiver 135 coupled to the processor 103. Portable computing devices 100 which do not include a touchscreen input device 102 (but typically include a display 104) typically include a keypad 136 or miniature keyboard, and menu selection keys or rocker switches 137 which serve as pointing devices. The processor 103 may further be connected to a wired network interface 138, such as a universal serial bus (USB) or FireWire® connector socket, for connecting the processor 103 to an external touchpad or touch surface, or to an external local area network. [0073] In some implementations, a touch surface can be provided in areas of the electronic device 100 outside of the touchscreen display 102 or display 104. For example, the keypad 136 can include a touch surface with buried capacitive touch sensors. In other implementations, the keypad 136 may be eliminated so the touchscreen display 102 provides the complete GUI. In yet further implementations, a touch surface may be an external touchpad that can be connected to the electronic device 100 by means of a cable to a cable connector 138, or a wireless transceiver (e.g., transceiver 135) coupled to the processor 103. [0074] A number of the aspects described above may also be implemented with any of a variety of computing devices, such as a notebook computer 2000 illustrated in FIG. 19. Such a notebook computer 2000 typically includes a housing 2466 that contains a processor 2461 coupled to volatile memory 2462 and to a large-capacity nonvolatile memory, such as a disk drive 2463. The computer 2000 may also include a floppy disc drive 2464 and a compact disc (CD) drive 2465 coupled to the processor 2461. The computer housing 2466 typically also includes a touchpad 2467, a keyboard 2468, and a display 2469. [0075] The computing device processor 103, 2461 may be any programmable microprocessor, microcomputer or multiple processor chip or chips that can be configured by software instructions (applications) to perform a variety of functions, including the functions of the various aspects described above. In some portable computing devices 100, 2000, multiple processors 103, 2461 may be provided, such as one processor dedicated to wireless communication functions and one processor dedicated to running other applications. The processor may also be included as part of a communication chipset. [0076] The various aspects may be implemented by a computer processor 103, 2461 executing software instructions configured to implement one or more of the described methods or processes. 
Such software instructions may be stored in memory 105, 2462, in hard disc memory 2463, on a tangible storage medium, or on servers accessible via a network (not shown) as separate applications, or as compiled software implementing an aspect method or process. Further, the software instructions may be stored on any form of tangible processor-readable memory, including: a random access memory 105, 2462, hard disc memory 2463, a floppy disk (readable in a floppy disc drive 2464), a compact disc (readable in a CD drive 2465), electrically erasable/programmable read only memory (EEPROM), read only memory (such as FLASH memory), and/or a memory module (not shown) plugged into the computing device 100, 2000, such as an external memory chip or USB-connectable external memory (e.g., a "flash drive") plugged into a USB network port. For the purposes of this description, the term memory refers to all memory accessible by the processor 103, 2461, including memory within the processor 103, 2461 itself. [0077] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the processes of the various aspects must be performed in the order presented. As will be appreciated by one of skill in the art, the order of blocks and processes in the foregoing aspects may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the processes; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the," is not to be construed as limiting the element to the singular. 
Alternatively, some processes or methods may be performed by circuitry that is specific to a given function. [0080] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The processes of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions stored on a machine-readable medium and/or computer-readable medium, which may be incorporated into a computer program product. [0081] The foregoing description of the various aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein, and instead the claims should be accorded the widest scope consistent with the principles and novel features disclosed herein. |
A system and method provide for receiving touch panel raw data and identifying data of interest in the raw data. One or more events may be detected based on the data of interest and a touch processing policy. In one example, the receiving and the identifying are conducted by a touch panel controller, and the detecting is conducted by a host processor. Other techniques, such as subsurface scanning and hybrid scanning may also be used. |
1. A computing system, comprising: a case having a notebook form factor; a touch panel having a touch sensor, a touch panel controller, and firmware including a set of stored controller instructions which, when executed by the touch panel controller, cause the touch panel controller to receive touch panel raw data from the touch sensor, and identify data of interest in the raw data; a host processor; and a machine-readable medium including a set of stored processor instructions which, when executed by the host processor, cause the host processor to detect an event based on the data of interest and a touch processing policy, wherein the touch processing policy includes at least one of a debounce component, a finger detection component, a hand rest detection component, a speckle detection component, a specific hand shape detection component, and a secondary hand shape detection component, and forward the event to at least one of an operating system navigation process and an on-screen keyboard application process. 2. The computing system of claim 1, wherein the set of stored processor instructions causes the host processor to generate a scan area based on at least one of a current processor usage, a current effective touch report, a historical effective touch report, a previously generated scan area, application-centric logic, and a surface capacitance scan result, and transmit the scan area to the touch panel controller, and wherein the set of stored controller instructions causes the touch panel controller to scan only a portion of the touch panel within the scan area to obtain the raw data. 3. The computing system of claim 1, wherein the set of stored controller instructions causes the touch panel controller to perform a surface capacitance scan of the touch panel to obtain a region of interest of the touch panel, transmit a surface capacitance scan result to the host processor, expand the region of interest to include a region adjacent to the region of interest, and perform a projected capacitance scan of the expanded region of interest to identify the data of interest. 4. The computing system of claim 1, wherein the firmware includes at least one of a read-only memory (ROM) and a flash memory, and the machine-readable medium includes a random access memory (RAM). 5. An apparatus, comprising: a touch panel controller; firmware including a set of stored controller instructions which, when executed by the touch panel controller, cause the touch panel controller to receive raw data from a touch panel, and identify data of interest in the raw data; a host processor; and a machine-readable medium including a set of stored processor instructions which, when executed by the host processor, cause the host processor to detect an event based on the data of interest and a touch processing policy. 6. The apparatus of claim 5, wherein the set of stored processor instructions causes the host processor to forward the event to at least one of an operating system process and an application process. 7. The apparatus of claim 5, wherein the touch processing policy includes at least one of a debounce component, a finger detection component, a hand rest detection component, a speckle detection component, a specific hand shape detection component, and a secondary hand shape detection component. 8. The apparatus of claim 5, wherein the set of stored processor instructions causes the host processor to generate a scan area, and transmit the scan area to the touch panel controller, and wherein the set of stored controller instructions causes the touch panel controller to scan only a portion of the touch panel within the scan area to obtain the raw data. 9. The apparatus of claim 8, wherein the set of stored processor instructions causes the host processor to generate the scan area based on at least one of a current processor usage, a current effective touch report, a historical effective touch report, application-centric logic, and a surface capacitance scan result. 10. The apparatus of claim 5, wherein the set of stored controller instructions causes the touch panel controller to perform a surface capacitance scan of the touch panel to obtain a region of interest of the touch panel, expand the region of interest to include a region adjacent to the region of interest, and perform a projected capacitance scan of the expanded region of interest to identify the data of interest. 11. The apparatus of claim 10, wherein the set of stored controller instructions causes the touch panel controller to transmit a surface capacitance scan result to the host processor. 12. The apparatus of claim 5, wherein the firmware includes at least one of a read-only memory (ROM) and a flash memory, and the machine-readable medium includes a random access memory (RAM). 13. A method, comprising: receiving touch panel raw data; identifying data of interest in the raw data; and detecting an event based on the data of interest and a touch processing policy, wherein the receiving and the identifying are performed by a touch panel controller, and the detecting is performed by a host processor. 14. The method of claim 13, further comprising forwarding the event to at least one of an operating system process and an application process. 15. The method of claim 13, wherein the touch processing policy includes at least one of a debounce component, a finger detection component, a hand rest detection component, a speckle detection component, a specific hand shape detection component, and a secondary hand shape detection component. 16. The method of claim 13, further comprising: generating a scan area; transmitting the scan area to the touch panel controller; and scanning only a portion of the touch panel within the scan area to obtain the raw data, wherein the generating and the transmitting are performed by the host processor, and the scanning is performed by the touch panel controller. 17. The method of claim 16, wherein the scan area is generated based on at least one of a current processor usage, a current effective touch report, a historical effective touch report, application-centric logic, and a surface capacitance scan result. 18. The method of claim 13, further comprising: performing a surface capacitance scan of the touch panel to obtain a region of interest of the touch panel; expanding the region of interest to include a region adjacent to the region of interest; and performing a projected capacitance scan of the expanded region of interest to identify the data of interest, wherein the surface capacitance scan, the expanding of the region of interest, and the projected capacitance scan are performed by the touch panel controller. 19. The method of claim 18, further comprising transmitting a surface capacitance scan result to the host processor. 20. The method of claim 13, further comprising modifying the touch processing policy via a software upgrade. |
Touch panel region of interest reporting scheme TECHNICAL FIELD Various embodiments generally relate to touch panels. More particularly, embodiments relate to improved techniques for processing raw data from a touch panel. BACKGROUND A touch panel may serve as a user interface (UI) in a variety of settings. Typically, touch panels include internal control firmware that processes raw data and detects finger touches. These finger touches can be reported as events by the firmware to other system components for application-specific processing. This approach can be limited by the implementation of finger touch event detection in firmware, may be constrained by the controller's processing power, and the firmware may be difficult to upgrade and/or modify. BRIEF DESCRIPTION OF THE DRAWINGS The various advantages of the embodiments of the present invention will become apparent to one of ordinary skill in the art by reading the following specification and appended claims, and by referencing the following drawings, in which: FIG. 1 is a block diagram of an example of a computing system according to an embodiment; FIG. 2 is a flowchart of an example of a method of identifying a region of interest according to an embodiment; FIG. 3 is a flowchart of an example of a sub-surface scanning method according to an embodiment; FIG. 4 is a flowchart of an example of a hybrid scanning method according to an embodiment; and FIG. 5 is a diagram of an example of a computing system having a touch panel and a notebook form factor housing according to an embodiment. DETAILED DESCRIPTION Various embodiments may provide for a method in which touch panel raw data is received, data of interest is identified in the raw data, and events are detected based on the data of interest and a touch processing policy. The receiving and the identifying may be performed by a touch panel controller, and the detecting may be performed by a host processor. Various embodiments may also provide for an apparatus including a touch panel controller, firmware, a host processor, and a machine-readable medium. The firmware may have a set of stored controller instructions that, when executed by the touch panel controller, cause the touch panel controller to receive touch panel raw data and identify data of interest in the raw data. The machine-readable medium may include a set of stored processor instructions that, when executed by the host processor, cause the host processor to detect events based on the data of interest and a touch processing policy. In addition, various embodiments may include a computing system with a notebook form factor housing and a touch panel. The touch panel may have a touch sensor, a touch panel controller, and firmware with a set of stored controller instructions that, when executed by the touch panel controller, cause the touch panel controller to receive touch panel raw data from the touch sensor. The controller instructions may also cause the touch panel controller to identify data of interest in the raw data. In addition, the computing system includes a host processor and a machine-readable medium having a set of stored processor instructions that, when executed by the host processor, cause the host processor to detect events based on the data of interest and a touch processing policy. The touch processing policy may include a debounce component, a finger detection component, a hand rest detection component, a speckle detection component, a specific hand shape detection component, and/or a secondary hand shape detection component.
The processor instructions may also cause the processor to forward the event to another process, such as an operating system (OS) navigation process and/or an on-screen keyboard application process. Turning now to FIG. 1, a computing system 10 is shown, wherein the system 10 may be part of a mobile platform such as a laptop, personal digital assistant (PDA), wireless smart phone, media player, imaging device, etc., or any combination thereof. The system 10 may also be part of a fixed platform such as a personal computer (PC), server, workstation, etc. The illustrated system 10 includes a host processor 12, which may include an integrated memory controller (not shown) that provides access to system memory 14, which may include double data rate (DDR) synchronous dynamic random access memory (SDRAM, e.g., DDR3 SDRAM, JEDEC Standard JESD79-3C, April 2008) modules. The modules of the system memory 14 may be incorporated into a single inline memory module (SIMM), a dual inline memory module (DIMM), a small outline DIMM (SODIMM), and so on. The processor 12 may also have one or more processor cores (not shown), where each core may be fully functional with an instruction fetch unit, an instruction decoder, a level one (L1) cache, an execution unit, and so on. In one example, the internal cache of the processor 12 may be implemented with static RAM (SRAM). The processor 12 can also execute an operating system (OS) such as Microsoft Windows, Linux, or Mac (Macintosh) OS, and various other software applications. The illustrated processor 12 communicates with a platform controller hub (PCH) 16, which is also referred to in some systems as the Southbridge. The PCH 16 may have internal controllers (not shown) such as USB (Universal Serial Bus, e.g., USB Specification 2.0, USB Implementers Forum), Serial ATA (SATA, e.g., SATA Rev. 3.0 Specification, September 27, 2009, SATA International Organization/SATA-IO), High Definition Audio, and other controllers. The illustrated PCH 16 is also coupled to one or more mass storage devices 18, which may include hard disk drives, read only memory (ROM), optical disks, flash memory, and the like. The PCH 16 may provide support for user interface devices such as a microphone, display, keyboard, mouse, speakers, etc., in order to allow a user to interact with and perceive information from the system 10. Specifically, the PCH 16 can communicate with a touch panel 20 having a touch sensor 22, a touch panel controller 24, and firmware 26. The controller 24 may be an embedded controller implemented in fixed-functionality hardware logic using circuit technology such as application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS), or transistor-transistor logic (TTL) technology, or any combination thereof. In the illustrated example, the firmware 26, which might be implemented as logic in a programmable ROM (PROM) or flash memory, includes a set of controller instructions which, when executed by the controller 24, cause the controller 24 to receive touch panel raw data 28 from the touch sensor 22, and then identify data of interest 30 in the raw data 28.
The data of interest 30 may be passed to the host processor 12 via the PCH 16 for further processing. In particular, a machine-readable medium such as the mass storage device 18, the system memory 14, or an internal cache of the processor 12 may include a set of stored processor instructions which, when executed by the processor 12, cause the processor 12 to detect one or more events 32 based on the data of interest 30 and a touch processing policy. The touch processing policy, which can be readily modified via software upgrade, might include, for example, debounce algorithm components, finger detection components, hand rest detection components, non-finger (e.g., speckle) detection components, specific hand shape detection components, and/or secondary hand shape detection components. The processor instructions may also cause the processor 12 to forward the events 32 to other software components such as an OS navigation process or an on-screen keyboard application process. The illustrated approach therefore enables the system 10 to apply different touch processing policies to the data of interest 30 without changing the firmware 26 or increasing the gate count of the controller 24. Where appropriate, the much greater processing power of the host processor 12 can be applied to extract the most value from the touch panel raw data 28. The touch processing policy can be changed at runtime as needed, or by an upgrade that is easier to accomplish on the machine-readable medium used by the host processor 12 than in the firmware 26. For example, different data tracking and filtering rules may be appropriate when supporting on-screen keyboard usage than those applied when navigating the operating system. Likewise, in an evolving/nascent field of use such as touch-based human-computer interaction, the ability to quickly upgrade functionality may be critical to realizing the market value of the computing system 10. In addition, a key concept may be to allow the host processor 12 to remain idle when no data of interest 30 is present, which is advantageous for battery-powered mobile computers and handheld devices. Moreover, sending only the data of interest rather than all of the touch panel raw data 28 may help minimize the amount of host processing and bandwidth required. FIG. 2 shows a method 34 of identifying regions of interest in touch panel raw data in greater detail. The method 34 may be implemented in executable software as a set of logic instructions stored in a machine- or computer-readable medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), flash memory, etc., or in fixed-functionality hardware using circuit technology such as application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof. For example, computer program code to carry out the operations shown in the software program 36 of the method 34 may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages.
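To make the host-side portion of this division of labor concrete, the following Python sketch shows how a host process of the kind described above might apply a swappable touch processing policy to a region-of-interest report and emit events. It is only an illustrative sketch: the names (RegionReport, TouchPolicy, finger_detector, and so on) and the numeric thresholds are hypothetical and do not come from the disclosure.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class RegionReport:
    """A region of interest as reported by the touch controller firmware:
    a location plus the adjusted raw capacitance samples inside it."""
    x: int
    y: int
    samples: List[List[int]]

@dataclass
class TouchEvent:
    kind: str  # e.g. "finger_down", "hand_rest"
    x: int
    y: int

# A policy is an ordered list of detector components (debounce, finger
# detection, hand rest detection, ...). Swapping the list swaps the policy;
# no firmware change is required.
Detector = Callable[[RegionReport], List[TouchEvent]]

@dataclass
class TouchPolicy:
    detectors: List[Detector] = field(default_factory=list)

    def detect_events(self, report: RegionReport) -> List[TouchEvent]:
        events: List[TouchEvent] = []
        for detector in self.detectors:
            events.extend(detector(report))
        return events

def finger_detector(report: RegionReport) -> List[TouchEvent]:
    # Trivial stand-in: a small, strong region is treated as a fingertip.
    peak = max(max(row) for row in report.samples)
    area = sum(v > 0 for row in report.samples for v in row)
    if peak > 200 and area < 40:
        return [TouchEvent("finger_down", report.x, report.y)]
    return []

def hand_rest_detector(report: RegionReport) -> List[TouchEvent]:
    # Trivial stand-in: a very large region is treated as a resting hand.
    area = sum(v > 0 for row in report.samples for v in row)
    if area >= 40:
        return [TouchEvent("hand_rest", report.x, report.y)]
    return []

# Different usage models install different policies at run time, e.g. an
# on-screen keyboard policy versus an OS navigation policy.
keyboard_policy = TouchPolicy([finger_detector, hand_rest_detector])

if __name__ == "__main__":
    report = RegionReport(x=120, y=80,
                          samples=[[0, 210, 0], [0, 230, 0], [0, 190, 0]])
    for event in keyboard_policy.detect_events(report):
        print(event)  # TouchEvent(kind='finger_down', x=120, y=80)
```

Because the policy is simply data held on the host side, it can be replaced at runtime or by a software upgrade, for example swapping an on-screen keyboard policy for an OS navigation policy, without touching the controller firmware.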
The operations shown in the firmware process 38 of the method 34, on the other hand, may be performed using fixed-functionality hardware or low-level instructions such as assembly language programming or machine code. As already noted, the touch controller firmware process 38 and the host processor software process 36 can cooperate using a predefined software interface and an established division of data processing responsibilities. The illustrated touch controller firmware process 38 is responsible for managing the physical sensing of the touch panel and for applying low-level corrections to the raw data to ensure data linearity, as well as other correction factors to compensate for environmental factors such as temperature, for unit-to-unit differences, and for changes in the calibrated zero operating point/baseline. Thus, processing block 40 may provide for generating touch panel raw data and applying physical-layer signal conditioning to the raw data. The touch controller firmware process 38 may also be responsible for detecting data of interest by applying a relatively low threshold, or other means, to the raw data. Thus, illustrated block 42 provides for evaluating the raw data relative to the threshold. The touch controller developer can select this threshold based on knowledge of the expected noise levels and physical properties of the touch sensor. Only areas where the data exceeds the threshold, along with areas adjacent to them in the two dimensions, may be classified as "of interest". No other policies or actions need to be applied to the data by the firmware process 38. Block 44 provides for generating a compressed report of the adjusted data exceeding the threshold. Block 46 may provide for detecting, at each fixed touch panel scan interval, whether there is data of interest and, if so, forwarding the data of interest to the host processor for further processing (a simplified sketch of this firmware-side flow is given below). The host software process 36 may remain idle until it receives a data report from the firmware process 38. The host software program 36 can then analyze and process the data. For example, at block 48, the data report may be subjected to application- and usage-specific processing. In particular, the current touch processing policy may be applied to the data, where the touch processing policy may include any one or combination of the following, depending on the current system usage model: specific and differing debounce algorithms, finger detection, hand rest detection, non-finger (speckle) detection, and specific hand shape or secondary hand shape detection. Then, at block 50, the host processing algorithm may forward the detected events to the operating system or directly to an application, as appropriate. In short, the illustrated method 34 can help minimize data traffic and host processing while preserving the ability to implement usage-model-specific data processing. In addition, these techniques enable host processing of touch panel data while preserving host battery life, and host processing of touch panel data can enable advanced and application-specific touch functionality on a standard set of touch sensing hardware/firmware. Turning now to FIG. 3, a "sub-surface scanning" method 52 is shown. In general, sub-surface scanning can be used in combination with the techniques described above or on its own. In many projected capacitive touch systems, each x/y data point scanned can cost the system a unit of time due to the consumption of analog-to-digital converter time as well as data communication and processing time.
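Before continuing with sub-surface scanning, the firmware-side flow of blocks 40-46 can be summarized with a short sketch. This too is hypothetical (a real touch controller would typically implement it in C or assembly, and the threshold value shown is arbitrary); it only shows the low-threshold classification of cells, the inclusion of adjacent cells, and the compressed report that is sent to the host only when there is something to report.

```python
from typing import Dict, List, Optional, Set, Tuple

THRESHOLD = 30  # hypothetical value chosen from the sensor's expected noise level

def adjust(raw: List[List[int]], baseline: List[List[int]]) -> List[List[int]]:
    # Physical-layer signal conditioning: subtract the calibrated baseline.
    return [[r - b for r, b in zip(rrow, brow)] for rrow, brow in zip(raw, baseline)]

def cells_of_interest(data: List[List[int]]) -> Set[Tuple[int, int]]:
    rows, cols = len(data), len(data[0])
    marked: Set[Tuple[int, int]] = set()
    for y in range(rows):
        for x in range(cols):
            if data[y][x] > THRESHOLD:
                # Keep the cell exceeding the threshold plus its neighbors
                # in both dimensions.
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols:
                            marked.add((ny, nx))
    return marked

def build_report(data: List[List[int]],
                 marked: Set[Tuple[int, int]]) -> Optional[Dict]:
    if not marked:
        return None  # nothing of interest: the host processor stays idle
    # Compressed report: only the marked cells and their adjusted values.
    return {"cells": sorted(marked),
            "values": {cell: data[cell[0]][cell[1]] for cell in sorted(marked)}}
```

Returning None when no cell exceeds the threshold is what allows the host software process to remain idle between reports. With that in mind, the discussion now returns to sub-surface scanning.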
In many systems, there may be a trade-off between the time in which the panel is scanned and the number of data points scanned. Depending on the specific design of the panel and the specific data points in question, however, this trade-off may be "non-linear" due to hardware parallelization or other design choices. Rather than attempting to scan the entire surface of the touch sensor in the minimum unit of time, the illustrated sub-surface scanning approach reduces the scanning of areas known a priori not to be regions of interest. For example, a static area of interest, such as the area around the active region of the keyboard, or a dynamic, possibly discontinuous area around the touches from the last scan, may be defined. The time saved by not scanning areas outside the regions of interest can be put to several uses: (1) the touch controller can idle, thereby reducing power consumption, and (2) the touch controller firmware can use the time to repeat data measurements in the regions of interest, for example, repeating the measurements at the x/y positions currently being touched. By generating additional data points in the areas currently being touched, the effective scan rate of those areas can be increased, which is useful for noise reduction, for averaging in position determination, and for tracking algorithms. Accordingly, the illustrated method 52 includes a touch controller firmware process 54 and a host software process 56, wherein processing block 58 in the host software process 56 provides for processing touch data reports containing data of interest and for sending one or more events to other applications and/or the operating system. Block 60 provides for determining whether the scan area used by the firmware process 54 should be updated based on one or more factors such as current processor usage, current effective touch reports, historical effective touch reports, application-centric logic, and surface capacitance scan results. If it is determined at block 62 that a new scan area is warranted based on these factors, illustrated block 64 provides for sending the new scan area to the touch controller firmware process 54, where the scan area may be updated at block 66. Block 68 provides for scanning only the portion of the touch panel within the scan area to obtain raw data and generate a touch data report, as already described with respect to blocks 42 and 44 (FIG. 2); a simplified sketch of this scan-area handshake follows below. If it is determined at block 46 that there is any data to report to the host software process 56, the report containing that data may be transmitted by the controller firmware process 54. The method 52 therefore enables a capacitive touch solution in which the area being scanned is controlled in software. In particular, the capacitive touch solution can overscan one area of the touch panel while ignoring other areas of the panel. Because the system is generally more interested in the touch screen areas actually being touched, this approach can improve the performance of the touch sensing solution and/or reduce the cost of touch sensing, since the average human-machine touch scenario requires less hardware. FIG. 4 shows a "hybrid scan" method 70 in conjunction with a visual example 72. In general, the illustrated method 70 can be implemented in touch controller firmware to employ both surface capacitance and projected capacitance measurements. In particular, the touch controller may use the surface capacitance measurement as a predictor to determine the areas of the touch panel that are being touched.
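Returning briefly to the sub-surface scanning of FIG. 3, a minimal sketch of the scan-area handshake of blocks 60-68 might look as follows. The function names and the margin parameter are hypothetical; the point is simply that the host derives a (possibly discontinuous) scan area from a static region of interest plus the most recent touches, and that the firmware then measures only the cells inside that area.

```python
from typing import Callable, Dict, List, Optional, Set, Tuple

Cell = Tuple[int, int]

def generate_scan_area(last_touches: List[Cell],
                       static_area: Set[Cell],
                       cpu_busy: bool,
                       margin: int = 2) -> Optional[Set[Cell]]:
    """Host-side decision (blocks 60-64): return a new scan area, or None
    to leave the controller's current scan area unchanged."""
    if cpu_busy and not last_touches:
        return None  # skip the update while the host is heavily loaded
    area = set(static_area)  # e.g. the active region of an on-screen keyboard
    for (y, x) in last_touches:
        # Dynamic, possibly discontinuous region around the last touches.
        for dy in range(-margin, margin + 1):
            for dx in range(-margin, margin + 1):
                area.add((y + dy, x + dx))
    return area

def scan_panel(read_cell: Callable[[Cell], int],
               scan_area: Set[Cell]) -> Dict[Cell, int]:
    """Firmware-side scan (block 68): measure only the cells inside the
    scan area; cells outside it are skipped, and the time saved can be
    spent re-measuring the touched locations or idling to save power."""
    return {cell: read_cell(cell) for cell in scan_area}
```

With that, the discussion returns to the hybrid scan of FIG. 4.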
Therefore, block 74 can be used to perform a surface capacitance scan of the touch panel, while the shown block 76 applies a relatively low threshold to the resulting raw data to obtain the region of interest in the touch panel. If it is determined at block 78 that any scanned row / column exceeds the threshold, the touch controller may then use the projected capacitance to scan only the row / column segment in the column and row that is determined to contain the touch during the previous surface capacitance measurement . Therefore, block 80 may be used to generate a joint area of interest map for the touch panel using rows / columns above a threshold. In addition, the region of interest may be expanded in block 82 to include one or more regions adjacent to the original region of interest. Block 84 is used to perform a projected capacitance scan on the expanded region of interest to identify the data of interest. The resulting data report may be sent to the host software process at block 86.The result may be that the projected capacitance is used to scan all areas of the panel where touch occurs, and skip areas that do not contain touch. Therefore, there may be an average increase in panel scan rate, especially in systems where the average occurrence of touch is a small portion of the entire touch panel (eg, single or multiple finger touches). A faster average scan rate allows cheaper hardware to support a given touch sensor glass size. This technique is especially useful for larger touch panels (eg, notebook form factors) that are still designed for a single user.In addition, in surface capacitance measurements, entire rows or columns can be measured together to produce a single scalar measurement. Typically, each row and column is scanned to provide two one-dimensional arrays of data. For non-parallel hardware systems, the measured scan time is equal to the surface capacitance scan time × (number of rows + number of columns). On the other hand, in projected capacitance measurement, the intersection of various rows / columns can be scanned individually (or depending on the touch controller / touch sensor design in some parallel manner), thereby generating a two-dimensional bitmap of scalar data. Therefore, for non-parallel touch sensing systems, the measured scan time is equal to the projected capacitance scan time × (number of rows × number of columns). Therefore, it would be particularly advantageous to use the shown reduction in the number of rows and columns scanned by the projected capacitance.In short, hybrid scanning can optimize the scan rate by enabling the touch solution to quickly identify areas where there is a touch, and perform a complete projected capacitance scan only in those areas. Increasing the scan rate can help use a larger touch panel with the same hardware or increase the average scan rate for a given touch panel.FIG. 5 shows a computing system 86 including a case with a notebook form factor and a touch panel 88. As already noted, touch panel 88 may include a touch sensor, a touch panel controller, and firmware configured to identify data of interest in the raw data obtained from the touch sensor. Furthermore, as already discussed, the system 86 may include a host processor configured to support region of interest processing, sub-surface scanning, and hybrid scanning. In the example shown, the host processor of system 86 is configured to run soft keyboard application processes. 
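The row/column selection of the hybrid scan and the scan-time arithmetic above can be illustrated with one more sketch. As before, the function names and the margin are hypothetical; the sketch assumes a non-parallel controller in which each surface-capacitance row or column measurement and each projected-capacitance intersection measurement costs one unit of time.

```python
from typing import Callable, Dict, List

def hybrid_scan(row_scan: List[int], col_scan: List[int],
                read_intersection: Callable[[int, int], int],
                threshold: int, margin: int = 1) -> Dict:
    rows, cols = len(row_scan), len(col_scan)
    # Blocks 74-80: rows/columns whose surface-capacitance measurement
    # exceeds the threshold define the joint region-of-interest map.
    hot_rows = {i for i, v in enumerate(row_scan) if v > threshold}
    hot_cols = {j for j, v in enumerate(col_scan) if v > threshold}
    # Block 82: expand the region of interest to adjacent rows/columns.
    hot_rows = {r + d for r in hot_rows
                for d in range(-margin, margin + 1) if 0 <= r + d < rows}
    hot_cols = {c + d for c in hot_cols
                for d in range(-margin, margin + 1) if 0 <= c + d < cols}
    # Block 84: projected-capacitance scan of only the hot intersections.
    data = {(r, c): read_intersection(r, c) for r in hot_rows for c in hot_cols}
    # Measurement count for a non-parallel controller: rows + cols surface
    # measurements plus len(hot_rows) * len(hot_cols) projected measurements,
    # instead of rows * cols for a full projected scan.
    return {"data": data,
            "measurements": rows + cols + len(hot_rows) * len(hot_cols),
            "full_projected_scan": rows * cols}
```

On a hypothetical 40 x 60 panel where a single finger raises three rows and three columns above the threshold (expanded to five of each), this works out to roughly 40 + 60 + 25 = 125 measurements per frame instead of the 2400 required by a full projected-capacitance scan. Returning to the computing system 86 of FIG. 5: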
Other processes may include operating system navigation processes (eg, touch computing to refresh the computing system), etc.The embodiments of the present invention are applicable to all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLA), memory chips, network chips, and so on. In addition, in some drawings, signal wires are indicated by lines. Some lines may be thicker to indicate more constituent signal paths, and have a digital mark to indicate the number of constituent signal paths, and / or have arrows at one or more ends to indicate the main information flow direction. However, this should not be interpreted in a restrictive manner. Rather, such added details may be used in conjunction with one or more exemplary embodiments to facilitate easier understanding of the circuit. Any signal lines presented (with or without additional information) may actually contain signal schemes that propagate in multiple directions and can be of any suitable type (e.g., digital or analog lines implemented with differential pairs, fiber optic lines, and / or single End line) to realize one or more signals.Example dimensions / models / values / ranges may have been given, but embodiments of the invention are not limited to this. As manufacturing technology (eg, lithography technology) matures over time, it is expected that the size of manufacturable equipment will become smaller and smaller. In addition, for simplicity of explanation and discussion, and in order not to obscure certain aspects of the embodiments of the present invention, well-known power / ground connections and other components connected to the IC chip may or may not be shown in the drawings. Furthermore, in order to avoid confusing the embodiments of the present invention, and in view of the fact that the details related to the arrangement of the block diagrams are highly dependent on the implementation platform of the embodiments, that these details should be within the scope of those skilled in the art, so The arrangement is shown in block diagram form. Among them, specific details (eg, circuits) are set forth to describe example embodiments of the present invention, and it should be apparent to those of ordinary skill in the art that the embodiments of the present invention can be implemented without these specific details or changes thereof. Therefore, the description is to be regarded as illustrative rather than restrictive.Some embodiments may be implemented, for example, using a machine or a tangible computer-readable medium or article, which may store instructions or instruction sets, which when executed by the machine may cause the machine to perform the method according to the embodiment And / or operation. Such machines may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, etc., and may be implemented using any suitable combination of hardware and / or software. 
The machine-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium, and / or storage unit, such as memory, removable or non-removable medium , Erasable or non-erasable media, rewritable or re-writeable media, digital or analog media, hard disks, floppy disks, compact disk read-only memory (CD-ROM), recordable compact disks (CD-R), Re-writable optical disks (CD-RW), optical disks, magnetic media, magneto-optical media, removable memory cards or disks, various types of digital versatile disks (DVD), magnetic tapes, cassette tapes, etc. Instructions may include any such as source code, assembly code, interpreted code, executable code, static code, dynamic code, encrypted code, etc. implemented in any suitable high-level, low-level, object-oriented, visual, assembly and / or interpreted programming languages Suitable type of code.Unless specifically stated otherwise, it should be understood that terms such as "processing", "operation", "calculation", "determination", etc. refer to the actions and / or actions of a computer or computer system or similar electronic computing device Processing, these systems or devices manipulate and / or convert data represented in registers and / or memory as physical quantities (e.g. electronic) into physical quantities also represented in memory, registers or other such information storage, transmission or display devices Other data. These embodiments are not limited to this context.The term "coupled" is used herein to refer to any type of direct or indirect relationship between the components in question, and is applicable to electronic, mechanical, fluid, optical, electromagnetic, electromechanical, or other connection. In addition, the terms "first", "second", etc. are used herein only to facilitate discussion, and have no special time or chronological significance unless otherwise specified.Based on the above description, those of ordinary skill in the art should understand that the general technology of the embodiments of the present invention may be implemented in various forms. Therefore, although the embodiments of the present invention have been described in conjunction with specific examples, the scope of the embodiments of the present invention should not be limited to this, because other modifications based on the study of the drawings, the description, and the following claims modify the ordinary technology in the art. The personnel is obvious. |
Techniques are provided herein to form semiconductor devices having a different number of semiconductor nanoribbons compared to other semiconductor devices on the same substrate. In one example, two different semiconductor devices of a given memory cell, such as a random access memory (RAM) cell, include a p-channel device and an n-channel device. More specifically, the p-channel device (102) may be a GAA transistor with a first number of semiconductor nanoribbons (112a) while the n-channel device (104) may be a GAA transistor with a second number of semiconductor nanoribbons (112b) that is greater than the first number of semiconductor nanoribbons. In some cases, the n-channel device(s) have one additional semiconductor nanoribbon compared to the p-channel device(s). Depending on when the nanoribbons are removed during the fabrication process, different structural outcomes will occur that can be detected in the final device. |
An integrated circuit comprising:a first semiconductor device having a first set of two or more semiconductor nanoribbons extending between a first source region and a first drain region; anda second semiconductor device having a second set of one or more semiconductor nanoribbons extending between a second source region and a second drain region, the second set of semiconductor nanoribbons having a fewer number of nanoribbons than the first set of semiconductor nanoribbons.The integrated circuit of claim 1, wherein a first height between a bottommost nanoribbon and a topmost nanoribbon of the first set of semiconductor nanoribbons is greater than a second height between a bottommost nanoribbon and a topmost nanoribbon of the second set of semiconductor nanoribbons.The integrated circuit of claim 1 or 2, wherein a spacing between adjacent nanoribbons of the first set of semiconductor nanoribbons is substantially the same as a spacing between adjacent nanoribbons of the second set of semiconductor nanoribbons.The integrated circuit of any one of claims 1 through 3, wherein the first semiconductor device is an n-channel device and the second semiconductor device is a p-channel device.The integrated circuit of any one of claims 1 through 4, wherein the first source region and the first drain region extend above a topmost nanoribbon of the first set of semiconductor nanoribbons by a first height, and the second source region and the second drain region extend above a topmost nanoribbon of the second set of semiconductor nanoribbons by a second height that is greater than the first height.The integrated circuit of claim 5, wherein the first drain region and the second drain region are the same region.The integrated circuit of any one of claims 1 through 6, wherein the second semiconductor device comprises a gate electrode around the second set of semiconductor nanoribbons and a spacer along a side of the gate electrode, wherein the spacer includes a dummy channel structure that extends between the second drain region and the gate electrode or between the second source region and the gate electrode.The integrated circuit of any one of claims 1 through 7, wherein the second semiconductor device comprises a dielectric layer around each of the second set of semiconductor nanoribbons and a dummy dielectric layer suspended above the second set of semiconductor nanoribbons, where the dummy dielectric layer is not on any semiconductor nanoribbon.The integrated circuit of any one of claims 1 through 8, wherein the first set of semiconductor nanoribbons and the second set of semiconductor nanoribbons comprise germanium, silicon, or a combination thereof.A printed circuit board comprising the integrated circuit of any one of claims 1 through 9.A method of forming an integrated circuit, comprising:forming a first multilayer fin and a second multilayer fin, each of the first and second multilayer fins comprising first and second material layers, wherein the second material layers comprise a semiconductor material suitable for use as a nanoribbon;forming a dielectric layer between the first multilayer fin and the second multilayer fin;masking the second multilayer fin while leaving the first multilayer fin exposed; andremoving at least a topmost second material layer from the first multilayer fin.The method of claim 11, further comprising:removing a topmost first material layer from the first multilayer fin; andremoving another second material layer from the first multilayer fin.The method of claim 11 or 12, 
further comprising:forming a first drain region and a first source region on opposite sides of the first multilayer fin; andforming a second drain region and a second source region on opposite sides of the second multilayer fin, wherein a first height of the first drain region and the first source region is less than a second height of the second drain region and the second source region.The method of any one of claims 11 through 13, further comprising doping the second material layers of the first multilayer fin with p-type dopants and doping the second material layers of the second multilayer fin with n-type dopants.The method of any one of claims 11 through 14, further comprising removing the first material layers from the first multilayer fin and the first material layers from the second multilayer fin. |
FIELD OF THE DISCLOSUREThe present disclosure relates to integrated circuits, and more particularly, to gate-all-around (GAA) semiconductor devices.BACKGROUNDAs integrated circuits continue to scale downward in size, a number of challenges arise. For instance, reducing the size of memory and logic cells is becoming increasingly more difficult. Energy consumption of so many semiconductor devices on a given substrate becomes an increasing concern. Some processor cores employ voltage scaling techniques to decrease the energy consumption, however this makes the various semiconductor devices more susceptible to process and/or dopant variations that can cause the devices to not function properly. Accordingly, there remain a number of non-trivial challenges with respect to designing semiconductor devices that can function at lower voltage levels.BRIEF DESCRIPTION OF THE DRAWINGSFigures 1A and 1B are cross-sectional views that illustrate an example integrated circuit having semiconductor devices with a different number of semiconductor nanoribbons, in accordance with an embodiment of the present disclosure.Figures 2A - 2F are cross-sectional views that collectively illustrate an example process for forming semiconductor devices with a different number of semiconductor nanoribbons, in accordance with an embodiment of the present disclosure.Figures 3A and 3B are additional cross-sectional views that are orthogonal to the cross-sectional views of Figures 2A - 2F , and that illustrate the semiconductor devices with a different number of semiconductor nanoribbons, in accordance with an embodiment of the present disclosure.Figure 4A - 4D are cross-sectional views that collectively illustrate another example process for forming semiconductor devices with a different number of semiconductor nanoribbons, in accordance with an embodiment of the present disclosure.Figures 5A - 5D are cross-sectional views that collectively illustrate another example process for forming semiconductor devices with a different number of semiconductor nanoribbons, in accordance with an embodiment of the present disclosure.Figure 6 illustrates a cross-section view of a chip package containing one or more semiconductor dies, in accordance with some embodiments of the present disclosure.Figure 7 is a flowchart of a first fabrication process for semiconductor devices with a different number of semiconductor nanoribbons, in accordance with an embodiment of the present disclosure.Figure 8 is a flowchart of a second fabrication process for semiconductor devices with a different number of semiconductor nanoribbons, in accordance with an embodiment of the present disclosure.Figure 9 is a flowchart of a third fabrication process for semiconductor devices with a different number of semiconductor nanoribbons, in accordance with an embodiment of the present disclosure.Figure 10 illustrates a computing system including one or more integrated circuits, as variously described herein, in accordance with an embodiment of the present disclosure.Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent in light of this disclosure. As will be further appreciated, the figures are not necessarily drawn to scale or intended to limit the present disclosure to the specific configurations shown. 
For instance, while some figures generally indicate perfectly straight lines, right angles, and smooth surfaces, an actual implementation of an integrated circuit structure may have less than perfect straight lines, right angles, and some features may have surface topology or otherwise be non-smooth, given real world limitations of the processing equipment and techniques used. DETAILED DESCRIPTION Techniques are provided herein to form semiconductor devices having a different number of semiconductor nanoribbons (or other semiconductor bodies) compared to other semiconductor devices on the same substrate. The techniques can be used in any number of integrated circuit applications and are particularly useful with respect to logic and memory cells, such as those cells that use gate-all-around (GAA) transistors. In one example, two different semiconductor devices of a given memory cell, such as a static random access memory (SRAM) cell, include a p-channel device and an n-channel device. More specifically, the p-channel device may be a GAA transistor with a first number of semiconductor nanoribbons while the n-channel device may be a GAA transistor with a second number of semiconductor nanoribbons that is greater than the first number of semiconductor nanoribbons. In some cases, the n-channel device(s) have one additional semiconductor nanoribbon compared to the p-channel device(s). According to an embodiment, the p-channel devices are made to include fewer semiconductor nanoribbons in order to structurally lower the operating current through the p-channel devices by decreasing the number of active semiconductor channels. Depending on when the nanoribbons are removed during the fabrication process, different structural outcomes will occur that can be detected in the final device. Numerous variations and embodiments will be apparent in light of this disclosure. General Overview As previously noted above, there remain a number of non-trivial challenges with respect to designing semiconductor devices that consume less energy. As operating voltages decrease, the successful operation of the semiconductor devices of an integrated circuit becomes more susceptible to systemic process variations and/or random dopant fluctuations. In the example of a memory cell, such random dopant and/or process variations could result, for instance, in a p-channel device with a higher drive current than the corresponding n-channel device (strong p-type device and weak n-type device), which can lead to memory write errors. In particular, such a memory cell cannot be written to below some minimum voltage (write failure below Vmin). Some techniques have been implemented to mitigate write failures, but they incur additional power consumption, take up valuable chip footprint, and are relatively difficult to design for (layout). Thus, and in accordance with an embodiment of the present disclosure, techniques are provided herein to form p-channel devices that are structurally weaker (e.g., lower drive current) compared to n-channel devices on the same substrate. In some embodiments, the number of semiconductor nanoribbons is selectively reduced for the p-channel devices compared to the n-channel devices to weaken the p-channel devices. This reduction can be thought of as a depopulation of active channel pathways. Thus, the nanoribbon depopulation techniques provide a structural solution to reducing potential write errors in memory cells.
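As a deliberately crude first-order illustration of why depopulation weakens a device (this model is not part of the present disclosure and ignores, for example, mobility, strain, and geometry differences between real p-channel and n-channel devices), one can treat GAA drive current as roughly proportional to the number of active nanoribbons:

```python
# Hypothetical first-order model: drive current scales with the number of
# active nanoribbons, all other device parameters held equal.
def relative_drive(n_ribbons: int, n_reference: int) -> float:
    return n_ribbons / n_reference

# A p-channel device depopulated from 4 ribbons to 3 delivers roughly 75%
# of the current of its 4-ribbon counterpart, shifting the p/n strength
# ratio in the direction that favors a successful memory write.
print(relative_drive(3, 4))  # 0.75
```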
Although many transistor designs may benefit from these techniques, they are especially useful for GAA structures which have a given number of individual semiconductor channel pathways in the form of nanoribbons to be individually removed. In some embodiments, one or more first (p-type) semiconductor devices each have exactly one fewer semiconductor nanoribbon than the number of nanoribbons found in each of one or more second (n-type) semiconductor devices on the same substrate. There may be any number of nanoribbons missing from a given semiconductor device compared to another semiconductor device on the same substrate. In some embodiments, nanoribbons may be removed from one or more first semiconductor devices such that a first height between a bottommost nanoribbon and a topmost nanoribbon of the one or more first semiconductor devices is less than a second height between a bottommost nanoribbon and a topmost nanoribbon of one or more second semiconductor devices. Note the techniques can be applied to other channel configurations as well, such as nanowires or other GAA configurations that allow for selective depopulation of channel pathways. Depending on when the nanoribbons are removed during the fabrication process, different structural outcomes will occur. In one example, semiconductor material in a multilayer fin is depopulated or otherwise removed before the source or drain regions have been formed. This yields an integrated circuit with different semiconductor devices having source or drain regions with different heights. In another example, semiconductor material in a multilayer fin is depopulated or otherwise removed after the removal of a sacrificial gate over the multilayer fin, but before any sacrificial material layers have been removed from the multilayer fin. This yields dummy nanoribbon portions within spacer structures adjacent to some of the semiconductor devices and source or drain regions that extend higher over some semiconductor devices compared to others. In another example, one or more semiconductor nanoribbons are depopulated or otherwise removed after the formation of a gate dielectric around the nanoribbons, but before the formation of a gate electrode. This yields dummy nanoribbon portions within spacer structures adjacent to some of the semiconductor devices, source or drain regions that extend higher over some semiconductor devices compared to others, and some semiconductor devices with a dummy gate dielectric layer suspended above the semiconductor nanoribbons. In any such cases, note that depopulated layers of the multilayer fin are not to be confused with sacrificial layers of the multilayer fin. In particular, a depopulated layer would be a nanowire but for the depopulation, whereas a sacrificial layer is removed to release a nanowire. According to an embodiment, an integrated circuit includes a first semiconductor device having a first set of two or more semiconductor bodies extending between a first source region and a first drain region, and a second semiconductor device having a second set of one or more semiconductor bodies extending between a second source region and a second drain region. The second set of semiconductor bodies has a fewer number of bodies than the first set of semiconductor bodies. The first and second semiconductor bodies can be, for example, nanoribbons, nanowires, or other such bodies that can be depopulated using the techniques provided herein.
The first semiconductor device has a first gate structure at least partially wrapped around the first set of two or more semiconductor bodies and the second semiconductor device has a second gate structure at least partially wrapped around the second set of one or more semiconductor bodies. Note the gate structures may be gate-all-around structures or tri-gate structures or double-gate structures, depending on the channel configuration.According to another embodiment, a method of forming an integrated circuit includes forming a first multilayer fin and a second multilayer fin, each of the first and second multilayer fins comprising first and second material layers, wherein the first material layers comprise a sacrificial material to be removed to release at least one of the second material layers, and the second material layers comprise a semiconductor material suitable for use as a channel; forming a dielectric layer between the first multilayer fin and the second multilayer fin; masking the second multilayer fin while leaving the first multilayer fin exposed; and removing at least a topmost second material layer from the first multilayer fin. Subsequent processing may include, for example, the selective etching of sacrificial layers (e.g., silicon germanium layers) included in the fins, so as to release one or more nanoribbons (e.g., silicon) or other gate-all-around channel regions.According to another embodiment, a method of forming an integrated circuit includes forming a first multilayer fin and a second multilayer fin, each of the first and second multilayer fins comprising first and second material layers, wherein the first material layers comprise a sacrificial material to be removed to release at least one of the second material layers, and the second material layers comprise a semiconductor material suitable for use as a channel; forming a first sacrificial gate over the first multilayer fin and a second sacrificial gate over the second multilayer fin; forming a first gate spacer on sidewalls of the first sacrificial gate and a second gate spacer on sidewalls of the second sacrificial gate; removing the second sacrificial gate; and removing at least a topmost second material layer of the first multilayer fin while protecting a topmost second material layer of the second multilayer fin. Subsequent processing may include, for example, the selective etching of sacrificial layers (e.g., silicon germanium layers) included in the fins, so as to release one or more nanoribbons (e.g., silicon) or other gate-all-around channel regions.The techniques are especially suited for use with gate-all-around transistors such as nanowire and nanoribbon transistors, but may also be applicable in some instances to finFET devices (e.g., reducing the height of some finFET devices compared to other finFET devices on the same substrate). The source and drain regions can be, for example, doped portions of a given fin or substrate, or epitaxial regions that are deposited during an etch-and-replace source/drain forming process. The dopant-type in the source and drain regions will depend on the polarity of the corresponding transistor. The gate electrode can be implemented with a gate-first process or a gate-last process (sometimes called a replacement metal gate, or RMG, process). 
Any number of semiconductor materials can be used in forming the transistors, such as group IV materials (e.g., silicon, germanium, silicon germanium) or group III-V materials (e.g., gallium arsenide, indium gallium arsenide).Use of the techniques and structures provided herein may be detectable using tools such as electron microscopy including scanning/transmission electron microscopy (SEM/TEM), scanning transmission electron microscopy (STEM), nano-beam electron diffraction (NBD or NBED), and reflection electron microscopy (REM); composition mapping; x-ray crystallography or diffraction (XRD); energy-dispersive x-ray spectroscopy (EDX); secondary ion mass spectrometry (SIMS); time-of-flight SIMS (ToF-SIMS); atom probe imaging or tomography; local electrode atom probe (LEAP) techniques; 3D tomography; or high resolution physical or chemical analysis, to name a few suitable example analytical tools. For instance, in some example embodiments, such tools may indicate p-type semiconductor devices having a different number of nanoribbons compared to n-type semiconductor devices. In some embodiments, such tools may indicate some semiconductor devices having dummy nanoribbon structures present within the spacer structures. In some embodiments, such tools may indicate adjacent semiconductor devices that share a source or drain region, where the source or drain region has a first height above the semiconductor nanoribbons of one semiconductor device and a different second height above the semiconductor nanoribbons of the other semiconductor device. In some embodiments, such tools may indicate the presence of dummy gate dielectric layers suspended above the semiconductor nanoribbons of some of the semiconductor devices.It should be readily understood that the meaning of "above" and "over" in the present disclosure should be interpreted in the broadest manner such that "above" and "over" not only mean "directly on" something but also include the meaning of over something with an intermediate feature or a layer therebetween. Further, spatially relative terms, such as "beneath," "below," "lower," "above," "upper," and the like, may be used herein for ease of description to describe one element or feature's relationship to another element (s) or feature (s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly.As used herein, the term "layer" refers to a material portion including a region with a thickness. A monolayer is a layer that consists of a single layer of atoms of a given material. A layer can extend over the entirety of an underlying or overlying structure, or may have an extent less than the extent of an underlying or overlying structure. Further, a layer can be a region of a homogeneous or inhomogeneous continuous structure, with the layer having a thickness less than the thickness of the continuous structure. For example, a layer can be located between any pair of horizontal planes between, or at, a top surface and a bottom surface of the continuous structure. A layer can extend horizontally, vertically, and/or along a tapered surface. 
A layer can be conformal to a given surface (whether flat or curvilinear) with a relatively uniform thickness across the entire layer.

Materials that are "compositionally different" or "compositionally distinct" as used herein refer to two materials that have different chemical compositions. This compositional difference may be, for instance, by virtue of an element that is in one material but not the other (e.g., SiGe is compositionally different than silicon), or by way of one material having all the same elements as a second material but at least one of those elements is intentionally provided at a different concentration in one material relative to the other material (e.g., SiGe having 70 atomic percent germanium is compositionally different from SiGe having 25 atomic percent germanium). In addition to such chemical composition diversity, the materials may also have distinct dopants (e.g., gallium and magnesium) or the same dopants but at differing concentrations. In still other embodiments, compositionally distinct materials may further refer to two materials that have different crystallographic orientations. For instance, (110) silicon is compositionally distinct or different from (100) silicon. Creating a stack of different orientations could be accomplished, for instance, with blanket wafer layer transfer.

Architecture

Figure 1A is a cross-sectional view taken across two example semiconductor devices 102 and 104, according to an embodiment of the present disclosure. Each of semiconductor devices 102 and 104 may be non-planar metal oxide semiconductor (MOS) transistors, such as tri-gate or gate-all-around (GAA) transistors, although other transistor topologies and types could also benefit from the techniques provided herein. The illustrated embodiments herein use the GAA structure. Semiconductor devices 102 and 104 represent a portion of an integrated circuit that may contain any number of similar semiconductor devices.

As can be seen, semiconductor devices 102 and 104 are formed on a substrate 106. Any number of semiconductor devices can be formed on substrate 106, but two are used here as an example. Substrate 106 can be, for example, a bulk substrate including group IV semiconductor material (such as silicon, germanium, or silicon germanium), group III-V semiconductor material (such as gallium arsenide, indium gallium arsenide, or indium phosphide), and/or any other suitable material upon which transistors can be formed. Alternatively, the substrate can be a semiconductor-on-insulator substrate having a desired semiconductor layer over a buried insulator layer (e.g., silicon over silicon dioxide). Alternatively, the substrate can be a multilayer substrate or superlattice suitable for forming nanowires or nanoribbons (e.g., alternating layers of silicon and SiGe, or alternating layers of indium gallium arsenide and indium phosphide). Any number of substrates can be used.

The semiconductor material in each of semiconductor devices 102 and 104 may be formed from substrate 106. Semiconductor devices 102 and 104 may each include semiconductor material as nanowires or nanoribbons that can be, for example, native to substrate 106 (formed from the substrate itself). Alternatively, the semiconductor material can be formed of material deposited onto an underlying substrate. In one such example case, a blanket layer of silicon germanium (SiGe) can be deposited onto a silicon substrate, and then patterned and etched to form a plurality of SiGe fins or nanoribbons.
In another such example, non-native fins can be formed in a so-called aspect ratio trapping based process, where native fins are etched away so as to leave fin-shaped trenches which can then be filled with an alternative semiconductor material (e.g., group IV or III-V material). In still other embodiments, the fins include alternating layers of material (e.g., alternating layers of silicon and SiGe) that facilitates forming of nanowires and nanoribbons during a gate forming process where one type of the alternating layers are selectively etched away so as to liberate the other type of alternating layers within the channel region, so that a gate-all-around (GAA) process can then be carried out.As can further be seen, adjacent semiconductor devices are separated by a dielectric fill 108 that may include silicon oxide. Dielectric fill 108 provides shallow trench isolation (STI) between any adjacent semiconductor devices. Dielectric fill 108 can be any suitable dielectric material, such as silicon dioxide, aluminum oxide, or silicon oxycarbonitride.Semiconductor device 102 includes a subfin region 110 and a plurality of nanoribbons 112a above the subfin region 110 (semiconductor device 104 similarly includes nanoribbons 112b above subfin region 110). According to some embodiments, subfin region 110 comprises the same semiconductor material as substrate 106 and is adjacent to dielectric fill 108. According to some embodiments, nanoribbons 112a and 112b extend between a corresponding source and a drain region to provide an active region for a transistor (e.g., the semiconductor region beneath the gate). The source and drain regions are not shown in the cross-section of Figure 1A , but are shown in the orthogonal cross-sections through semiconductor devices 102 and 104 illustrated in Figures 3A and 3B .According to some embodiments, the source and drain regions are epitaxial regions that are provided using an etch-and-replace process. In other embodiments one or both of the source and drain regions could be, for example, implantation-doped native portions of the semiconductor fins or substrate. Any semiconductor materials suitable for source and drain regions can be used (e.g., group IV and group III-V semiconductor materials). The source and drain regions may include multiple layers such as liners and capping layers to improve contact resistance. In any such cases, the composition and doping of the source and drain regions may be the same or different, depending on the polarity of the transistors. In an example, for instance, one transistor is a p-type MOS (PMOS) transistor, and the other transistor is an n-type MOS (NMOS) transistor. Any number of source and drain configurations and materials can be used.Nanoribbons 112a and 112b include a gate dielectric 114 that may include a single material layer or multiple stacked material layers. In some embodiments, gate dielectric 114 includes a first dielectric layer such as silicon oxide and a second dielectric layer that includes a high-K material such as hafnium oxide. The hafnium oxide may be doped with an element to affect the threshold voltage of the given semiconductor device. In some embodiments, the gate dielectric 114 around semiconductor device 102 has a different element doping concentration compared to the gate dielectric 114 around semiconductor device 104. According to some embodiments, the doping element used in gate dielectric 114 is lanthanum. 
Gate dielectric 114 is present around each nanoribbon 112a and 112b and may also be present over subfin portion 110. In some embodiments, gate dielectric 114 is also present over the top surface of dielectric fill 108.

According to some embodiments, a gate electrode 116 extends over the nanoribbons 112a and 112b of semiconductor devices 102 and 104, respectively. Gate electrode 116 may include any sufficiently conductive material such as a metal, metal alloy, or doped polysilicon. According to some embodiments, gate electrode 116 may be interrupted between any adjacent semiconductor devices by a gate cut structure. In some embodiments, one or more work function metals may be included around nanoribbons 112a and 112b. In some embodiments, semiconductor device 102 is a p-channel device that includes a work function metal having titanium, and semiconductor device 104 is an n-channel device that includes a work function metal having tungsten. The combination of gate dielectric 114 and gate electrode 116 forms a gate structure for each of semiconductor device 102 and semiconductor device 104.

As discussed above, semiconductor device 102 may be a p-channel device having semiconductor nanoribbons 112a doped with n-type dopants (e.g., phosphorus or arsenic) and semiconductor device 104 may be an n-channel device having semiconductor nanoribbons 112b doped with p-type dopants (e.g., boron). There are fewer nanoribbons 112a in semiconductor device 102 compared to semiconductor device 104, according to some embodiments. In one example, there is exactly one fewer nanoribbon 112a in semiconductor device 102 compared to semiconductor device 104. Various methods may be used to selectively remove one or more nanoribbons from semiconductor device 102 as compared to semiconductor device 104, as will be discussed in more detail herein. In some embodiments, one or more nanoribbons may be removed from any number of p-channel devices in an integrated circuit. Furthermore, different numbers of nanoribbons can be removed from different devices by repeating many of the processes described herein. For example, only a single nanoribbon may be removed from a first set of p-channel devices while two or more nanoribbons may be removed from a second set of p-channel devices in the same integrated circuit.

Figure 1B illustrates an integrated circuit similar to that depicted in Figure 1A, except that the various features are drawn to reflect real-world process conditions, according to an embodiment. For instance, while Figure 1A generally indicates the various features using straight lines, right angles, and smooth surfaces, an actual integrated circuit structure configured in accordance with an embodiment of the present disclosure may have less than perfect straight lines and right angles, and some features may have a rough surface topography or otherwise be non-smooth, given real-world limitations of fabrication processes such as etching and depositing. As can be seen in Figure 1B, subfins 110 may be tapered rather than rectangular, and nanoribbons 112a/112b are more rounded and blob-like. Note that the nanoribbons may taper as well, such that the uppermost nanoribbon is less wide than the lowermost nanoribbon, and the middle nanoribbon has a width that is between the width of the lowermost nanoribbon and the width of the uppermost nanoribbon.
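As a rough geometric illustration of the preceding point, and of the later observation (made with reference to Figures 3A and 3B) that ribbon-to-ribbon spacing is substantially unchanged when a ribbon is removed, the short Python sketch below computes nominal ribbon positions from assumed layer thicknesses. The specific values are placeholders chosen within the thickness ranges given elsewhere in this disclosure, not measured or required values.

```python
# Illustrative geometry only: nanoribbon center heights for two devices that started
# from the same alternating stack, one of which had its topmost ribbon removed.
t_sacrificial_nm = 10.0   # assumed value within the ~5-25 nm range discussed herein
t_channel_nm = 10.0       # assumed value within the ~5-25 nm range discussed herein

def ribbon_centers(num_ribbons):
    """Bottom-to-top center heights of each ribbon above the subfin, in nm."""
    pitch = t_sacrificial_nm + t_channel_nm
    return [t_sacrificial_nm + t_channel_nm / 2 + i * pitch for i in range(num_ribbons)]

nmos_centers = ribbon_centers(4)   # e.g., n-channel device keeps all ribbons
pmos_centers = ribbon_centers(3)   # e.g., p-channel device with one ribbon removed

# Center-to-center spacing equals the layer pitch (sacrificial + channel thickness),
# and the gap between ribbons equals the sacrificial thickness; both are the same
# for the two devices even though the overall stack heights differ.
nmos_spacing = nmos_centers[1] - nmos_centers[0]
pmos_spacing = pmos_centers[1] - pmos_centers[0]
assert nmos_spacing == pmos_spacing == t_sacrificial_nm + t_channel_nm

print("NMOS top ribbon center: %.1f nm" % nmos_centers[-1])   # 75.0 nm with these values
print("PMOS top ribbon center: %.1f nm" % pmos_centers[-1])   # 55.0 nm with these values
```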
Further note that sidewalls of the subfins 110 may be collinear with the sidewalls of the corresponding nanoribbons 112a and 112b.Fabrication MethodologyFigures 2A - 2F include cross-sectional views that collectively illustrate an example process for forming an integrated circuit configured with semiconductor devices having a different number of nanoribbons compared to other semiconductor devices on the same substrate, in accordance with an embodiment of the present disclosure. Each figure shows an example structure that results from the process flow up to that point in time, so the depicted structure evolves as the process flow continues, culminating in the structure shown in Figure 2F , which is similar to the structure illustrated in Figure 1A . The illustrated integrated circuit structure may be part of a larger integrated circuit that includes other integrated circuitry not depicted. Example materials and process parameters are given, but the present disclosure is not intended to be limited to any specific such materials or parameters, as will be appreciated.Figure 2A illustrates a cross-sectional view across a substrate having a series of material layers deposited over it, according to an embodiment of the present disclosure. The previous relevant discussion with respect to example configurations and materials for substrate 106 is equally applicable here. Alternating material layers may be deposited over substrate 106 that include sacrificial layers 202 alternating with semiconductor layers 204. Any number of alternating sacrificial layers 202 and semiconductor layers 204 may be deposited. Semiconductor layers 204 may include silicon, germanium, or a combination thereof. Sacrificial layers 202 have a different material composition than semiconductor layers 204. In some embodiments, sacrificial layers 202 include some combination of silicon and germanium. In other embodiments, sacrificial layers 202 include a higher germanium content compared to semiconductor layers 204. While dimensions can vary from one example embodiment to the next, the thickness of each semiconductor layer 204 and sacrificial layer 202 may be between about 5 nm and about 25 nm. Each of sacrificial layer 202 and semiconductor layer 204 may be deposited using any known material deposition technique, such as chemical vapor deposition (CVD), plasma-enhanced chemical vapor deposition (PECVD), physical vapor deposition (PVD), or atomic layer deposition (ALD).Figure 2B illustrates a cross-sectional view of the structure shown in Figure 2A following the formation of semiconductor fins, according to an embodiment of the present disclosure. Any number of fins can be patterned across the integrated circuit, but only two are illustrated here for clarity. Each of semiconductor device 102 and 104 includes a semiconductor fin. The fins can include at least a portion that is native to the substrate, as illustrated, or may be non-native to the substrate. Each of the illustrated fins includes a multilayer structure having alternating sacrificial layers 202 and semiconductor layers 204. In some embodiments, the fins are alternating with respect to transistor polarity. For instance, the fin of semiconductor device 102 can include a PMOS material fin (e.g., semiconductor layers 204 are doped with n-type dopants) and the fin of semiconductor device 104 can include an NMOS material fin (e.g., semiconductor layers 204 are doped with p-type dopants) for a first logic or memory cell. 
Numerous other configurations can be used, including fins included in integrated circuit sections other than memory or logic sections, such as analog mixed signal sections, input/output sections, radio frequency or transducer sections.The fins may be formed by using a patterned hard mask layer or photoresist such as a cap layer 206. According to some embodiments, cap layer 206 protects the underlying material during a directional etching process, such as reactive ion etching (RIE). Cap layer may be, for example, a nitride, oxynitride, a carbide, or an oxycarbonitride. While dimensions can vary from one example embodiment to the next, the total height of the fins extending above the surface of substrate 106 may be in the range of about 100 nm to about 250 nm.It should be noted that the fin fabrication process described with reference to Figures 2A and 2B is just one example process for forming multilayer fins. Other processes may be used as well, such as the aforementioned aspect ratio trapping based process.Figure 2C illustrates a cross-sectional view of the structure shown in Figure 2B following the formation of a dielectric fill 208, according to an embodiment of the present disclosure. In some embodiments, dielectric fill 208 includes silicon oxide, although other oxides or dielectrics may be used as well. Dielectric fill 208 may be deposited using any known dielectric material deposition technique, such as CVD, PECVD, flowable CVD, spin-on dielectric, or ALD, to name a few examples. Dielectric fill 208 may first be deposited to at least fill the regions between adjacent fins, and then polished back until it is level with a top surface of the fins, as illustrated. The polishing process may be performed using chemical mechanical polishing (CMP).Figure 2D illustrates a cross-sectional view of the structure shown in Figure 2C following the selective removal of one or more semiconductor layers 204, according to an embodiment of the present disclosure. A masking material 210 is deposited and patterned to cover one or more of the fins, such as the fin of semiconductor device 104. In some embodiments, masking material 210 is patterned to cover one or more n-channel semiconductor devices while exposing one or more of the p-channel semiconductor devices. Masking material 210 may be a photoresist or hard mask material, such as a carbon hard mask.The top one or more material layers of the exposed fins (such as the exposed fin of semiconductor device 102) may be removed using an isotropic or anisotropic etch process (such as a plasma-based etching process). In one example, reactive ion etching (RIE) is used to remove any number of material layers from the fin of semiconductor device 102. The removed material layers may include both semiconductor layers 204 and sacrificial layers 202, or just a top-most semiconductor layer 204, as illustrated. Each semiconductor layer 204 that is removed ultimately removes one nanoribbon from the resulting transistor of semiconductor device 102.Figure 2E illustrates a cross-sectional view of the structure shown in Figure 2D following the formation of dielectric fill 108, according to an embodiment of the present disclosure. Dielectric fill 108 may act as shallow trench isolation (STI) between adjacent semiconductor devices. In some embodiments, dielectric fill 108 is formed by recessing dielectric fill 208 using any known isotropic etching process. 
In some embodiments, dielectric fill 208 is completely removed, followed by the deposition of dielectric fill 108 to at least the same height as the fins, and then recessed back using any known controlled etching process to the final height shown. According to some embodiments, each of the fins includes a subfin portion 110 beneath an exposed fin 212 of semiconductor device 102 and an exposed fin 214 of semiconductor device 104 and between portions of dielectric fill 108. Subfin portion 110 may include the same material as semiconductor substrate 106 and may be an integral part of semiconductor substrate 106 that would extend below dielectric fill 108. Following the formation of dielectric fill 108, the exposed fin 214 (e.g., with no removed layers) extending above a top surface of dielectric layer 108 may have a height between about 50 nm and about 200 nm. The width of the fins can be, for example, in the range of about 5 to about 15 nm, such as 6 nm wide. Exposed fin 212 will have a shorter height than exposed fin 214 since one or more layers of exposed fin 212 have been removed. The height of exposed fin 212 extending above a top surface of dielectric layer 108 will depend on the number of material layers that were removed, but in some examples is between about 30 nm and about 140 nm.

At this stage, subsequent processes are performed to form the remaining GAA transistor structures that ultimately yield the structure illustrated in Figure 2F. Briefly, these remaining processes involve the formation of source and drain regions for each of semiconductor devices 102 and 104, the removal of sacrificial layers 202 to form suspended nanoribbons 112a/112b, the formation of gate dielectric 114 around the nanoribbons 112a/112b, and the formation of gate electrode 116. The results of many of these processes cannot be seen in the illustrated cross-section and so are shown in Figures 3A and 3B, which illustrate orthogonal cross-section views through semiconductor devices 102 and 104, respectively, according to some embodiments. Since the nanoribbons of both semiconductor devices 102 and 104 were formed by removing similar sacrificial layers, the spacing between adjacent nanoribbons 112a is substantially the same as a spacing between adjacent nanoribbons 112b (e.g., within 1 nm).

Figure 3A illustrates spacer structures 302 on either side of gate electrode 116 as would be understood by a person skilled in the relevant art. Spacer structures 302 may include a dielectric material, such as silicon nitride. Each of suspended nanoribbons 112a extends between source or drain regions 304a and 304b. As noted above, source or drain regions 304a and 304b can be epitaxial regions that are provided using an etch-and-replace process, and doped with n-type or p-type dopants depending on the channel type of the transistor. Any semiconductor materials suitable for source and drain regions can be used (e.g., group IV and group III-V semiconductor materials). In some embodiments, conductive contacts 306 are formed over source or drain regions 304a and 304b. Conductive contacts 306 may include any suitably conductive material, such as any of a wide range of metals. In some embodiments, conductive contacts 306 include one or more of the same metal materials as gate electrode 116.

Figure 3B illustrates many of the same structures for semiconductor device 104 as described above for semiconductor device 102.
According to some embodiments, the source or drain regions 308a and 308b of semiconductor device 104 are taller compared to the source or drain regions 304a and 304b of semiconductor device 102. This may occur because the number of semiconductor nanoribbons in semiconductor device 102 was reduced before the formation of source or drain regions 304a and 304b (e.g., the source and drain regions are grown to a sufficient height to contact each of the nanoribbons).

Figures 4A - 4D include cross-sectional views that collectively illustrate another example process for forming an integrated circuit configured with semiconductor devices having a different number of nanoribbons compared to other semiconductor devices on the same substrate, in accordance with an embodiment of the present disclosure. In general, as compared to the process illustrated in Figures 2A - 2F, this procedure takes place after the source or drain regions have been formed. Each figure shows an example structure that results from the process flow up to that point in time, so the depicted structure evolves as the process flow continues. The illustrated integrated circuit structure may be part of a larger integrated circuit that includes other integrated circuitry not depicted. Example materials and process parameters are given, but the present disclosure is not intended to be limited to any specific such materials or parameters, as will be appreciated.

Figure 4A illustrates a cross-sectional view across a substrate 401 having two adjacent semiconductor devices 402 and 404. Substrate 401 may be similar to substrate 106 as described above. Semiconductor devices 402 and 404 may be GAA structures that share a common source or drain region 406. Semiconductor device 402 includes semiconductor nanoribbons 408a while semiconductor device 404 includes semiconductor nanoribbons 408b. Semiconductor nanoribbons 408a and 408b alternate with sacrificial layers 410 between spacer structures 412. Semiconductor nanoribbons 408a and 408b may be similar to semiconductor layers 204 while sacrificial layers 410 may be similar to sacrificial layers 202, as discussed above.

A sacrificial gate layer 414 may be present over both semiconductor devices 402 and 404 and within spacer structures 412. According to some embodiments, sacrificial gate layer 414 may include any material that can be safely removed without etching or otherwise damaging any portions of semiconductor nanoribbons 408a/408b and spacer structures 412. In some embodiments, sacrificial gate layer 414 comprises polysilicon. In some embodiments, a conductive contact 416 is formed over source or drain region 406 and may be similar to conductive contacts 306 described above.

Figure 4B illustrates a cross-sectional view of the structure shown in Figure 4A following the removal of sacrificial gate layer 414, according to an embodiment of the present disclosure. Sacrificial gate layer 414 may be removed using any wet or dry isotropic process, thus exposing the portions of the fins between spacer structures 412. In other words, the alternating layer stack of each of the fins is exposed within the trench between spacer structures 412 that is left behind after the removal of sacrificial gate layer 414.

Figure 4C illustrates a cross-sectional view of the structure shown in Figure 4B following the selective removal of one or more nanoribbons, according to an embodiment of the present disclosure.
A masking material 418 is deposited and patterned to cover one or more of the fins, such as the fin of semiconductor device 404 within the region between spacer structures 412. In some embodiments, masking material 418 is patterned to cover one or more n-channel semiconductor devices while exposing one or more of the p-channel semiconductor devices. Masking material 418 may be a photoresist or hard mask material, such as a carbon hard mask.

The top one or more material layers of the exposed fins (such as the exposed fin of semiconductor device 402) may be removed using an anisotropic etch process (such as a plasma-based etching process). In one example, RIE is used to remove any number of material layers from the fin of semiconductor device 402 starting with the top layer and moving downwards. The removed material layers may include both nanoribbons 408a and sacrificial layers 410. According to some embodiments, any portion of a sacrificial layer 410 may be removed following the removal of the nanoribbon 408a above it. In the illustrated embodiment, only a top nanoribbon 408a has been removed from the fin of semiconductor device 402.

Since the etching process used to remove one or more nanoribbons is performed after the formation of source or drain region 406, source or drain region 406 will extend above a top-most nanoribbon 408a of semiconductor device 402 by a first height h1 and extend above a top-most nanoribbon 408b of semiconductor device 404 by a second height h2 that is less than the first height h1. The height difference will depend on the number of nanoribbons removed from semiconductor device 402. In some examples, height h1 may be between about 10 nm and about 50 nm and height h2 may be between about 1 nm and about 5 nm. Additionally, due to the timing of the etch process used to remove one or more nanoribbons, one or more dummy channel structures 420 are present within spacer structures 412. According to some embodiments, dummy channel structures 420 are aligned with the removed nanoribbons. In some embodiments, dummy channel structures 420 are aligned on the same plane as other nanoribbons 408b from semiconductor devices that did not have nanoribbons removed from that plane (such as semiconductor device 404). Dummy channel structures 420 may be formed as a pair with one dummy channel structure within one spacer structure 412 and the other dummy channel structure within the other spacer structure 412 of a given semiconductor device. It should be understood that only one source or drain region 406 has been illustrated, but that further source or drain regions would be present on the opposite sides of nanoribbons 408a and nanoribbons 408b, and these further source or drain regions would have substantially the same height as source or drain region 406.

Figure 4D illustrates a cross-sectional view of the structure shown in Figure 4C following the formation of the remaining transistor structures, according to an embodiment of the present disclosure. Following the removal of masking material 418 and sacrificial layers 410 from both semiconductor devices 402 and 404, a gate dielectric layer is formed over the suspended nanoribbons, followed by the formation of gate electrode 116 over semiconductor nanoribbons 408a and 408b.
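The h1/h2 relationship described above with reference to Figure 4C follows from the fact that the shared source or drain region 406 was grown before any nanoribbons were removed, so it was sized to reach the original topmost ribbon of both devices. A minimal arithmetic sketch of that relationship is given below; the thickness and margin values are assumptions chosen for illustration only, not values prescribed by this disclosure.

```python
# Illustrative arithmetic for the h1/h2 relationship discussed with reference to Figure 4C.
# The shared source/drain region was grown tall enough to contact all ribbons of the
# original stack, so it extends farther above the shorter (ribbon-removed) stack.
t_sacrificial_nm = 10.0
t_channel_nm = 10.0
margin_above_stack_nm = 3.0        # assumed epi overgrowth above the original topmost ribbon

def top_of_stack(num_ribbons):
    """Height of the top surface of the topmost ribbon, in nm above the subfin."""
    return num_ribbons * (t_sacrificial_nm + t_channel_nm)

original_ribbons = 4
epi_top_nm = top_of_stack(original_ribbons) + margin_above_stack_nm  # fixed when S/D was grown

h2 = epi_top_nm - top_of_stack(original_ribbons)       # device 404: no ribbons removed
h1 = epi_top_nm - top_of_stack(original_ribbons - 1)   # device 402: one ribbon removed

assert h1 > h2       # the source/drain extends higher above the device that lost a ribbon
print(h1, h2)        # 23.0 and 3.0 with these assumed numbers
```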
Note that the dummy channel structures 420 remain in the final structure with one extending between source or drain region 406 and gate electrode 116 and the other extending between another source or drain region (not shown) and gate electrode 116.

Figures 5A - 5E include cross-sectional views that collectively illustrate another example process for forming an integrated circuit configured with semiconductor devices having a different number of nanoribbons compared to other semiconductor devices on the same substrate, in accordance with an embodiment of the present disclosure. In general, as compared to the process illustrated in Figures 2A - 2F, this procedure takes place after the gate dielectric layer has been formed over the suspended nanoribbons (e.g., just before the formation of the gate electrode). Each figure shows an example structure that results from the process flow up to that point in time, so the depicted structure evolves as the process flow continues. The illustrated integrated circuit structure may be part of a larger integrated circuit that includes other integrated circuitry not depicted. Example materials and process parameters are given, but the present disclosure is not intended to be limited to any specific such materials or parameters, as will be appreciated.

Figure 5A illustrates a cross-sectional view across a substrate 401 having two adjacent semiconductor devices 502 and 504 that include many features described for semiconductor devices 402 and 404. Accordingly, semiconductor devices 502 and 504 are GAA structures that share a common source or drain region 406. Semiconductor device 502 includes semiconductor nanoribbons 506a while semiconductor device 504 includes semiconductor nanoribbons 506b.

Semiconductor nanoribbons 506a/506b include a gate dielectric 508 that may be similar to gate dielectric 114 described above. As discussed above, spacer structures 412 define the edges of semiconductor nanoribbons 506a/506b, according to some embodiments. The structure illustrated in Figure 5A may be similar to one or more GAA structures just before the formation of a gate electrode.

Figure 5B illustrates a cross-sectional view of the structure shown in Figure 5A following the selective removal of one or more nanoribbons, according to an embodiment of the present disclosure. Masking material 418, as discussed above, is deposited and patterned to cover one or more of the fins, such as the fin of semiconductor device 504 within the region between spacer structures 412. The top one or more suspended nanoribbons of any of the exposed devices (such as semiconductor device 502) may be removed using one or more anisotropic etches (such as a plasma-based etching process). In one example, a series of RIE processes is used to punch through any layers present on the top-most nanoribbon 506a before etching through the nanoribbon material itself. For example, a first RIE process may be used to punch through the top layer of gate dielectric 508 (different RIE processes may be used to punch through each layer of a multilayer gate dielectric). Once the semiconductor material of nanoribbon 506a is exposed, another RIE process may be used to etch away the exposed nanoribbon. This process may be repeated for however many nanoribbons are to be removed. In the illustrated embodiment, only a top nanoribbon 506a has been removed from semiconductor device 502. Note that gate dielectric 508 still remains around all of the lower nanoribbons that remain beneath the one or more removed nanoribbons.
Additionally, in some embodiments, there is no gate dielectric 508 present on the sidewall of spacer structures 412 in the area where the removed one or more nanoribbons had been. In some other embodiments, another dielectric layer is deposited to effectively fill these discontinuities of gate dielectric layer 508 along the sidewalls of gate spacers 412.

Since the etching process used to remove one or more nanoribbons is performed after the formation of source or drain region 406, source or drain region 406 will extend above a top-most nanoribbon 506a of semiconductor device 502 by a first height h1 and extend above a top-most nanoribbon 506b of semiconductor device 504 by a second height h2 that is less than the first height h1. The height difference will depend on the number of nanoribbons removed from semiconductor device 502. In some examples, height h1 may be between about 10 nm and about 50 nm and height h2 may be between about 1 nm and about 5 nm. Additionally, due to the timing of the etch process used to remove one or more nanoribbons, one or more dummy channel structures 420 are present within spacer structures 412 as already discussed above with reference to Figure 4C.

Figure 5C illustrates a cross-sectional view of the structure shown in Figure 5B following the formation of gate electrode 116, according to an embodiment of the present disclosure. Note that the dummy channel structures 420 remain in the final structure with one extending between source or drain region 406 and gate electrode 116 and the other extending between another source or drain region (not shown) and gate electrode 116.

In some embodiments, the timing of the nanoribbon removal after the formation of gate dielectric 508 may produce a dummy dielectric layer 510 suspended above the other nanoribbons 506a. Dummy dielectric layer 510 is formed from the portion of gate dielectric 508 that had been surrounding the removed nanoribbon. Thus, according to some embodiments, dummy dielectric layer 510 extends between spacer structures 412 but does not surround or otherwise contact any of the semiconductor nanoribbons 506a. Figure 5D illustrates another cross-section view of semiconductor device 502 that is orthogonal to the cross-section view in Figure 5C, passing through nanoribbons 506a. Dummy dielectric layer 510 may have a 'U' shape made up of the sides and bottom portion that had been around a nanoribbon (the top portion was removed during the removal of the nanoribbon). In some embodiments where the bottom portion of dummy dielectric layer 510 is removed (e.g., to remove further nanoribbons), only side portions of dummy dielectric layer 510 may remain extending between spacer structures 412.

Figure 6 illustrates an example embodiment of a chip package 600, in accordance with an embodiment of the present disclosure. As can be seen, chip package 600 includes one or more dies 602. One or more dies 602 may include at least one integrated circuit having semiconductor devices, such as any of the semiconductor devices disclosed herein. One or more dies 602 may include any other circuitry used to interface with other devices formed on the dies, or other devices connected to chip package 600, in some example configurations.

As can be further seen, chip package 600 includes a housing 604 that is bonded to a package substrate 606. The housing 604 may be any standard or proprietary housing, and may provide, for example, electromagnetic shielding and environmental protection for the components of chip package 600.
The one or more dies 602 may be conductively coupled to a package substrate 606 using connections 608, which may be implemented with any number of standard or proprietary connection mechanisms, such as solder bumps, ball grid array (BGA), pins, or wire bonds, to name a few examples. Package substrate 606 may be any standard or proprietary package substrate, but in some cases includes a dielectric material having conductive pathways (e.g., including conductive vias and lines) extending through the dielectric material between the faces of package substrate 606, or between different locations on each face. In some embodiments, package substrate 606 may have a thickness less than 1 millimeter (e.g., between 0.1 millimeters and 0.5 millimeters), although any number of package geometries can be used. Additional conductive contacts 612 may be disposed at an opposite face of package substrate 606 for conductively contacting, for instance, a printed circuit board (PCB). One or more vias 610 extend through a thickness of package substrate 606 to provide conductive pathways between one or more of connections 608 and one or more of contacts 612. Vias 610 are illustrated as single straight columns through package substrate 606 for ease of illustration, although other configurations can be used (e.g., damascene, dual damascene, through-silicon via, or an interconnect structure that meanders through the thickness of substrate 606 to contact one or more intermediate locations therein). In still other embodiments, vias 610 are fabricated as multiple smaller stacked vias, or are staggered at different locations across package substrate 606. In the illustrated embodiment, contacts 612 are solder balls (e.g., for bump-based connections or a ball grid array arrangement), but any suitable package bonding mechanism may be used (e.g., pins in a pin grid array arrangement or lands in a land grid array arrangement). In some embodiments, a solder resist is disposed between contacts 612 to inhibit shorting.

In some embodiments, a mold material 614 may be disposed around the one or more dies 602 included within housing 604 (e.g., between dies 602 and package substrate 606 as an underfill material, as well as between dies 602 and housing 604 as an overfill material). Although the dimensions and qualities of the mold material 614 can vary from one embodiment to the next, in some embodiments, a thickness of mold material 614 is less than 1 millimeter. Example materials that may be used for mold material 614 include epoxy mold materials, as suitable. In some cases, the mold material 614 is thermally conductive, in addition to being electrically insulating.

Methodology

Figure 7 is a flow chart of a method 700 for forming at least a portion of an integrated circuit, according to an embodiment. Various operations of method 700 may be illustrated in Figures 2A-2F. However, the correlation of the various operations of method 700 to the specific components illustrated in the aforementioned figures is not intended to imply any structural and/or use limitations. Rather, the aforementioned figures provide one example embodiment of method 700. Other operations may be performed before, during, or after any of the operations of method 700. For example, method 700 does not explicitly describe many steps that are performed to form common transistor structures.
Some of the operations of method 700 may be performed in a different order than the illustrated order.

Method 700 begins with operation 702 where at least first and second multilayer fins are formed. The multilayer fins may include alternating sacrificial layers and semiconductor layers over a substrate. The thickness of each of the semiconductor layers and sacrificial layers may be between about 5 nm and about 25 nm. Each of the sacrificial layers and semiconductor layers may be deposited using any known material deposition technique, such as CVD, PECVD, PVD, or ALD. Once the material layers have been deposited, the fins may be defined via an anisotropic etching process, such as RIE, using a patterned mask material to protect the fins from the etch. The fin height may include the alternating material layers and a subfin portion formed from the substrate material. While dimensions can vary from one example embodiment to the next, the total height of the fins extending above the surface of the substrate may be in the range of about 100 nm to about 250 nm. The width of the fins can be, for example, in the range of about 5 to about 15 nm, such as 6 nm wide.

Method 700 continues with operation 704 where a dielectric fill is formed between at least the first and second fins. In some embodiments, the dielectric fill includes silicon oxide, although other oxides or dielectrics may be used as well. The dielectric fill may be deposited using any known dielectric material deposition technique, such as CVD, PECVD, flowable CVD, spin-on dielectric, or ALD, to name a few examples. The dielectric fill may first be deposited to at least fill the regions between adjacent fins, and then polished back until it is level with a top surface of the fins using, for example, CMP.

Method 700 continues with operation 706 where the second fin is masked using a masking layer, while exposing the first fin. The masking layer may be patterned to cover one or more n-channel semiconductor devices while exposing one or more of the p-channel semiconductor devices. Accordingly, in this example, the first fin may include semiconductor material with n-type dopants and the second fin may include semiconductor material with p-type dopants. The masking layer may be a photoresist or hard mask material, such as a carbon hard mask.

Method 700 continues with operation 708 where one or more material layers are removed from the exposed first fin. The top one or more material layers of the exposed fin may be removed using an isotropic or anisotropic etch process (such as a plasma-based etching process). In one example, RIE is used to remove any number of material layers from the exposed first fin. The removed material layers may include both semiconductor layers and sacrificial layers, or just a top-most semiconductor layer. Each semiconductor layer that is removed ultimately removes one nanoribbon from the resulting transistor of the first fin.

Method 700 continues with operation 710 where remaining transistor structures are formed to complete the formation of first and second semiconductor devices from the first and second fins, respectively. These remaining processes involve the formation of source and drain regions for each of the semiconductor devices, the removal of sacrificial material layers within each of the fins to form suspended semiconductor nanoribbons, the formation of a gate dielectric around the nanoribbons, and the formation of a gate electrode around the nanoribbons.
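As a simple sanity check on the dimensional ranges given for operation 702, the sketch below computes a total fin height from one assumed combination of layer count, layer thicknesses, and subfin height, and also shows how the stack shortens when operation 708 removes a topmost layer pair. None of the specific values used here are prescribed by this disclosure; they are placeholders chosen from the stated ranges.

```python
# Simple dimensional sanity check for the fin stack formed in operation 702 and
# thinned in operation 708. All specific values below are assumptions.
t_sacrificial_nm = 12.0
t_channel_nm = 12.0
num_channel_layers = 5
subfin_height_nm = 40.0   # assumed subfin contribution above the substrate surface

stack_height_nm = num_channel_layers * (t_sacrificial_nm + t_channel_nm)
total_fin_height_nm = subfin_height_nm + stack_height_nm
assert 100.0 <= total_fin_height_nm <= 250.0     # within the ~100-250 nm range stated above
print("Total fin height: %.0f nm" % total_fin_height_nm)   # 160 nm with these values

# If operation 708 removes a topmost channel layer along with its adjacent
# sacrificial layer, the multilayer stack is shorter by roughly one layer pair.
reduced_stack_height_nm = stack_height_nm - (t_sacrificial_nm + t_channel_nm)
print("Stack height before/after removal: %.0f / %.0f nm"
      % (stack_height_nm, reduced_stack_height_nm))        # 120 / 96 nm with these values
```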
The results of many of these processes are shown in Figures 3A and 3B , which include cross-section views of a first semiconductor device having a fewer number of nanoribbons than a second semiconductor device.Figure 8 is a flow chart of a method 800 for forming at least a portion of an integrated circuit, according to an embodiment. Various operations of method 800 may be illustrated in Figures 4A - 4D . However, the correlation of the various operations of method 800 to the specific components illustrated in the aforementioned figures is not intended to imply any structural and/or use limitations. Rather, the aforementioned figures provide one example embodiment of method 800. Other operations may be performed before, during, or after any of the operations of method 800. For example, method 800 does not explicitly describe many steps that are performed to form common transistor structures. Some of the operations of method 800 may be performed in a different order than the illustrated order.Method 800 begins with operation 802 where at least first and second multilayer fins are formed as described above for operation 702 of method 700.Method 800 continues with operation 804 where first and second sacrificial gates are formed over the first and second fins. The sacrificial gates may run in an orthogonal direction to each of the fins and may include any material that can be safely removed later in the process without etching or otherwise damaging any portions of the fins or of the spacer structures formed in the next operation. In some embodiments, the sacrificial gates comprise polysilicon.Method 800 continues with operation 806 where spacer structures are formed on the sidewalls of the sacrificial gates. The spacer structures may be formed using an etch-back process where spacer material is deposited everywhere and then anisotropically etched to leave the material only on sidewalls of structures. The spacer structures may include a dielectric material, such as silicon nitride, silicon oxy-nitride, or any formulation of those layers incorporating carbon or boron dopants. In some embodiments, source and drain regions would be formed on either ends of the first and second fins using any of the techniques described above, although the source and drain regions may also be formed later in method 800.Method 800 continues with operation 808 where the sacrificial gates are removed. The sacrificial gates may be removed using any wet or dry isotropic process thus exposing the portions of the fins that had been under the sacrificial gates. The alternating layer stack of each of the fins would be exposed within the trench left behind between the spacer structures after the removal of the sacrificial gates.Method 800 continues with operation 810 where the second fin is masked using a masking layer, while exposing the first fin. The masking layer may be patterned to cover one or more n-channel semiconductor devices while exposing one or more of the p-channel semiconductor devices. Accordingly, in this example, the first fin may include semiconductor material with n-type dopants and the second fin may include semiconductor material with p-type dopants. The masking layer may be a photoresist or hard mask material, such as a carbon hard mask.Method 800 continues with operation 812 where one or more material layers are removed from the exposed first fin. 
The top one or more material layers of the exposed fin may be removed using an isotropic or anisotropic etch process (such as a plasma-based etching process). In one example, RIE is used to remove any number of material layers from the exposed first fin. The removed material layers may include both semiconductor layers and sacrificial layers, or just a top-most semiconductor layer. Each semiconductor layer that is removed ultimately removes one nanoribbon from the resulting transistor of the first fin.Method 800 continues with operation 814 where remaining transistor structures are formed to complete the formation of first and second semiconductor devices from the first and second fins, respectively. These remaining processes involve the removal of sacrificial material layers within each of the fins to form suspended semiconductor nanoribbons, the formation of a gate dielectric around the nanoribbons, and the formation of a gate electrode around the nanoribbons. The results of many of these processes are shown in Figure 4D , which include cross-section views of a first semiconductor device having a fewer number of nanoribbons than a second adjacent semiconductor device.Figure 9 is a flow chart of a method 900 for forming at least a portion of an integrated circuit, according to an embodiment. Various operations of method 900 may be illustrated in Figures 5A - 5D . However, the correlation of the various operations of method 900 to the specific components illustrated in the aforementioned figures is not intended to imply any structural and/or use limitations. Rather, the aforementioned figures provide one example embodiment of method 900. Other operations may be performed before, during, or after any of the operations of method 900. For example, method 900 does not explicitly describe many steps that are performed to form common transistor structures. Some of the operations of method 900 may be performed in a different order than the illustrated order.Method 900 begins with operation 902, which may occur after operations 802 - 808 from method 800 have been performed. Thus, the structure includes first and second multilayer fins extending between source and drain regions along with spacer structures over the ends of the fins. At operation 902, the sacrificial layers are removed from each of the first and second fins to form a suspended first set of nanoribbons from the first fin and a suspended second set of nanoribbons from the second fin. The sacrificial layers may be removed using a selective isotropic etching process that removes the material of the sacrificial layers but does not remove (or removes very little of) the semiconductor layers.Method 900 continues with operation 904 where a gate dielectric is formed over the first and second sets of nanoribbons. The gate dielectric may include any suitable dielectric (such as silicon dioxide, and/or a high-k dielectric material). Examples of high-k dielectric materials include, for instance, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate, to provide some examples. According to some embodiments, the gate dielectric is hafnium oxide with a thickness between about 1 nm and about 5 nm. 
In some embodiments, the gate dielectric may include one or more silicates (e.g., titanium silicate, tungsten silicate, niobium silicate, and silicates of other transition metals). The gate dielectric may be a multilayer structure, in some examples. For instance, the gate dielectric may include a first layer on the nanoribbons, and a second layer on the first layer. The first layer can be, for instance, an oxide of the nanoribbon material (e.g., silicon dioxide) and the second layer can be a high-k dielectric material (e.g., hafnium oxide).

Method 900 continues with operation 906 where the second set of nanoribbons is masked using a masking layer, while exposing the first set of nanoribbons. The masking layer may be patterned to cover one or more n-channel semiconductor devices while exposing one or more of the p-channel semiconductor devices. Accordingly, in this example, the first set of nanoribbons may include semiconductor material with n-type dopants and the second set of nanoribbons may include semiconductor material with p-type dopants. The masking layer may be a photoresist or hard mask material, such as a carbon hard mask.

Method 900 continues with operation 908 where the gate dielectric over the top nanoribbon of the first set of nanoribbons is removed. The gate dielectric may be removed using any anisotropic etching process, such as an RIE process to etch through the dielectric material.

Method 900 continues with operation 910 where the top nanoribbon of the first set of nanoribbons is removed. According to some embodiments, the semiconductor material of the top nanoribbon may be removed using any anisotropic etching process. In one example, the same RIE process used to etch through the gate dielectric in operation 908 is used continuously to also etch through the semiconductor material of the top nanoribbon of the first set of nanoribbons. In another example, a first RIE process is used to etch through the gate dielectric and a second different RIE process is used to etch through the semiconductor material of the top nanoribbon of the first set of nanoribbons. It should be understood that operations 908 and 910 may be repeated any number of times to remove any number of nanoribbons from the first set of nanoribbons.

Method 900 continues with operation 912 where remaining transistor structures are formed to complete the formation of first and second semiconductor devices from the first and second fins, respectively. The remaining one or more processes involve at least the formation of a gate electrode around the nanoribbons. The results of these one or more processes are shown in Figures 5C and 5D, which include cross-section views of a first semiconductor device having a fewer number of nanoribbons than a second adjacent semiconductor device. Additionally, the first semiconductor device includes one or more dummy dielectric layers (depending on how many nanoribbons were removed) suspended above the remaining nanoribbons of the first semiconductor device, according to some embodiments.

Example System

FIG. 10 is an example computing system implemented with one or more of the integrated circuit structures as disclosed herein, in accordance with some embodiments of the present disclosure. As can be seen, the computing system 1000 houses a motherboard 1002.
The motherboard 1002 may include a number of components, including, but not limited to, a processor 1004 and at least one communication chip 1006, each of which can be physically and electrically coupled to the motherboard 1002, or otherwise integrated therein. As will be appreciated, the motherboard 1002 may be, for example, any printed circuit board (PCB), whether a main board, a daughterboard mounted on a main board, or the only board of system 1000, etc.Depending on its applications, computing system 1000 may include one or more other components that may or may not be physically and electrically coupled to the motherboard 1002. These other components may include, but are not limited to, volatile memory (e.g., DRAM), nonvolatile memory (e.g., ROM), a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). Any of the components included in computing system 1000 may include one or more integrated circuit structures or devices configured in accordance with an example embodiment (e.g., a module including an integrated circuit device on a substrate, the substrate having one or more first semiconductor devices with a first number of nanoribbons and one or more second semiconductor devices with a second number of nanoribbons different from the first number, as variously provided herein). In some embodiments, multiple functions can be integrated into one or more chips (e.g., for instance, note that the communication chip 1006 can be part of or otherwise integrated into the processor 1004).The communication chip 1006 enables wireless communications for the transfer of data to and from the computing system 1000. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 1006 may implement any of a number of wireless standards or protocols, including, but not limited to, Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing system 1000 may include a plurality of communication chips 1006. For instance, a first communication chip 1006 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 1006 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.The processor 1004 of the computing system 1000 includes an integrated circuit die packaged within the processor 1004. In some embodiments, the integrated circuit die of the processor includes onboard circuitry that is implemented with one or more semiconductor devices as variously described herein. 
The term "processor" may refer to any device or portion of a device that processes, for instance, electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.The communication chip 1006 also may include an integrated circuit die packaged within the communication chip 1006. In accordance with some such example embodiments, the integrated circuit die of the communication chip includes one or more semiconductor devices as variously described herein. As will be appreciated in light of this disclosure, note that multi-standard wireless capability may be integrated directly into the processor 1004 (e.g., where functionality of any chips 1006 is integrated into processor 1004, rather than having separate communication chips). Further note that processor 1004 may be a chip set having such wireless capability. In short, any number of processor 1004 and/or communication chips 1006 can be used. Likewise, any one chip or chip set can have multiple functions integrated therein.In various implementations, the computing system 1000 may be a laptop, a netbook, a notebook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, a digital video recorder, or any other electronic device that processes data or employs one or more integrated circuit structures or devices formed using the disclosed techniques, as variously described herein.It will be appreciated that in some embodiments, the various components of the computing system 1000 may be combined or integrated in a system-on-a-chip (SoC) architecture. In some embodiments, the components may be hardware components, firmware components, software components or any suitable combination of hardware, firmware or software.Further Example EmbodimentsThe following examples pertain to further embodiments, from which numerous permutations and configurations will be apparent.Example 1 is an integrated circuit that includes a first semiconductor device having a first set of two or more semiconductor nanoribbons extending between a first source region and a first drain region, and a second semiconductor device having a second set of one or more semiconductor nanoribbons extending between a second source region and a second drain region. 
The second set of semiconductor nanoribbons has a fewer number of nanoribbons than the first set of semiconductor nanoribbons.Example 2 includes the subject matter of Example 1, wherein a first height between a bottommost nanoribbon and a topmost nanoribbon of the first set of semiconductor nanoribbons is greater than a second height between a bottommost nanoribbon and a topmost nanoribbon of the second set of semiconductor nanoribbons.Example 3 includes the subject matter of Example 1 or 2, wherein a spacing between adjacent nanoribbons of the first set of semiconductor nanoribbons is substantially the same as a spacing between adjacent nanoribbons of the second set of semiconductor nanoribbons.Example 4 includes the subject matter of any one of Examples 1-3, wherein the first semiconductor device is an n-channel device and the second semiconductor device is a p-channel device.Example 5 includes the subject matter of any one of Examples 1-4, wherein the first source region and the first drain region extend above a topmost nanoribbon of the first set of semiconductor nanoribbons by a first height, and the second source region and the second drain region extend above a topmost nanoribbon of the second set of semiconductor nanoribbons by a second height that is greater than the first height.Example 6 includes the subject matter of Example 5, wherein the first drain region and the second drain region are the same region.Example 7 includes the subject matter of any one of Examples 1-6, wherein the second semiconductor device comprises a gate electrode around the second set of semiconductor nanoribbons and a spacer along a side of the gate electrode, wherein the spacer includes a dummy channel structure that extends between the second drain region and the gate electrode or between the second source region and the gate electrode.Example 8 includes the subject matter of any one of Examples 1-7, wherein the second semiconductor device comprises a dielectric layer around each of the second set of semiconductor nanoribbons and a dummy dielectric layer suspended above the second set of semiconductor nanoribbons, where the dummy dielectric layer is not on any semiconductor nanoribbon.Example 9 includes the subject matter of any one of Examples 1-8, wherein the first set of semiconductor nanoribbons and the second set of semiconductor nanoribbons comprise germanium, silicon, or a combination thereof.Example 10 is a printed circuit board comprising the integrated circuit of any one of Examples 1-9.Example 11 is an integrated circuit that includes a first semiconductor device having a first set of two or more semiconductor bodies extending between a first source region and a first drain region, and a second semiconductor device having a second set of one or more semiconductor bodies extending between a second source region and a second drain region. The first semiconductor device has a first gate structure wrapped around the first set of two or more semiconductor bodies and the second semiconductor device has a second gate structure wrapped around the second set of one or more semiconductor bodies. The second set of semiconductor bodies has a fewer number of semiconductor bodies than the first set of semiconductor bodies.Example 12 is an electronic device that includes a chip package comprising one or more dies. 
At least one of the one or more dies includes a first semiconductor device having a first set of two or more semiconductor nanoribbons extending between a first source region and a first drain region, and a second semiconductor device having a second set of one or more semiconductor nanoribbons extending between a second source region and a second drain region. The second set of semiconductor nanoribbons has a fewer number of nanoribbons than the first set of semiconductor nanoribbons.Example 13 includes the subject matter of Example 12, wherein a first height between a bottommost nanoribbon and a topmost nanoribbon of the first plurality of semiconductor nanoribbons is greater than a second height between a bottommost nanoribbon and a topmost nanoribbon of the second plurality of semiconductor nanoribbons.Example 14 includes the subject matter of Examples 12 or 13, wherein a spacing between adjacent nanoribbons of the first plurality of semiconductor nanoribbons is substantially the same as a spacing between adjacent nanoribbons of the second plurality of semiconductor nanoribbons.Example 15 includes the subject matter of any one of Examples 12-14, wherein the first semiconductor device is an n-channel device and the second semiconductor device is a p-channel device.Example 16 includes the subject matter of any one of Examples 12-15, wherein the first source region and the first drain region extend above a topmost nanoribbon of the first plurality of semiconductor nanoribbons by a first height, and the second source region and the second drain region extend above a topmost nanoribbon of the second plurality of semiconductor nanoribbons by a second height that is greater than the first height.Example 17 includes the subject matter of Example 16, wherein the first drain region and the second drain region are the same region.Example 18 includes the subject matter of any one of Examples 12-17, wherein the second semiconductor device comprises a gate electrode around the second plurality of semiconductor nanoribbons and a spacer along a side of the gate electrode, wherein the spacer includes a dummy channel structure that extends between the second drain region and the gate electrode or between the second source region and the gate electrode.Example 19 includes the subject matter of any one of Examples 12-18, wherein the second semiconductor device comprises a dielectric layer around each of the second plurality of semiconductor nanoribbons and a dummy dielectric layer suspended above the second plurality of semiconductor nanoribbons, where the dummy dielectric layer is not on any nanoribbon.Example 20 includes the subject matter of any one of Examples 12-19, wherein the first plurality of semiconductor nanoribbons and the second plurality of semiconductor nanoribbons comprise germanium, silicon, or an alloy thereof.Example 21 is a method of forming an integrated circuit. 
The method includes forming a first multilayer fin and a second multilayer fin, each of the first and second multilayer fins comprising first and second material layers, wherein the second material layers comprise a semiconductor material suitable for use as a nanoribbon; forming a dielectric layer between the first multilayer fin and the second multilayer fin; masking the second multilayer fin while leaving the first multilayer fin exposed; and removing at least a topmost second material layer from the first multilayer fin.Example 22 includes the subject matter of Example 21, wherein removing at least the topmost second material layer comprises using an anisotropic etching procedure to remove the at least the topmost second material layer.Example 23 includes the subject matter of Example 21 or 22, further comprising: removing a topmost first material layer from the first multilayer fin; and removing another second material layer from the first multilayer fin.Example 24 includes the subject matter of any one of Examples 21-23, further comprising: forming a first drain region and a first source region on opposite sides of the first multilayer fin; and forming a second drain region and a second source region on opposite sides of the second multilayer fin, wherein a first height of the first drain region and the first source region is less than a second height of the second drain region and the second source region.Example 25 includes the subject matter of any one of Examples 21-24, wherein the masking comprises forming a carbon hard mask over the second multilayer fin.Example 26 includes the subject matter of any one of Examples 21-25, further comprising doping the second material layers of the first multilayer fin with p-type dopants and doping the second material layers of the second multilayer fin with n-type dopants.Example 27 includes the subject matter of any one of Examples 21-26, further comprising removing the first material layers from the first multilayer fin and the first material layers from the second multilayer fin.Example 28 is a method of forming an integrated circuit. 
The method includes forming a first multilayer fin and a second multilayer fin, each of the first and second multilayer fins comprising first and second material layers, wherein the second material layers comprise a semiconductor material suitable for use as a nanoribbon; forming a first sacrificial gate structure over the first multilayer fin and a second sacrificial gate structure over the second multilayer fin; forming a first gate structure spacer on sidewalls of the first sacrificial gate structure and a second gate structure spacer on sidewalls of the second sacrificial gate structure; removing the first sacrificial gate structure and the second sacrificial gate structure; and removing at least a topmost second material layer from the first multilayer fin while protecting a topmost second material layer from the second multilayer fin.Example 29 includes the subject matter of Example 28, further comprising masking the second multilayer fin while leaving the first multilayer fin exposed.Example 30 includes the subject matter of Example 29, wherein the masking comprises forming a carbon hard mask within a trench left behind by the removal of the second sacrificial gate structure.Example 31 includes the subject matter of any one of Examples 28-30, further comprising: removing the first material layers from the first multilayer fin and the first material layers from the second multilayer fin to form a first plurality of nanoribbons comprising the second material layers of the first multilayer fin and a second plurality of nanoribbons comprising the second material layers of the second multilayer fin, respectively; depositing a dielectric layer around the second material layers of the first plurality of nanoribbons and the second material layers of the second plurality of nanoribbons; and removing a portion of the dielectric layer around the topmost second material layer of the first plurality of nanoribbons.Example 32 includes the subject matter of any one of Examples 28-31, further comprising forming a source or drain region between the first multilayer fin and the second multilayer fin, wherein, following the removing of at least the topmost second material layer from the first multilayer fin, the source or drain region extends above a next topmost second material layer from the first multilayer fin by a first height, and the source or drain region extends above the topmost second material layer of the second multilayer fin by a second height that is less than the first height.Example 33 includes the subject matter of any one of Examples 28-32, further comprising doping the second material layers of the first multilayer fin with p-type dopants and doping the second material layers of the second multilayer fin with n-type dopants.The foregoing description of the embodiments of the disclosure has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in light of this disclosure. It is intended that the scope of the disclosure be limited not by this detailed description, but rather by the claims appended hereto. |
Systems, apparatuses, and methods related to a computing tile are described. The computing tile may perform operations on received data to extract some of the received data. The computing tile may perform operations without intervening commands. The computing tile may perform operations on data streamed through the computing tile to extract relevant data from data received by the computing tile. In an example, the computing tile is configured to receive a command to initiate an operation to reduce a size of a block of data from a first size to a second size. The computing tile can then receive a block of data from a memory device coupled to the apparatus. The computing tile can then perform an operation on the block of data to extract predetermined data from the block of data to reduce a size of the block of data from a first size to a second size. |
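As an illustration only (not the claimed hardware), the behavior summarized above can be modeled in a few lines of Python: a tile-like object is given a single initiation command naming the predetermined fields, and thereafter reduces every block it receives from a first size to a second size without further commands. The class and method names (ComputingTile, initiate, receive_block) are hypothetical.

```python
# Hypothetical software model of the behavior described above: after a single
# initiation command, every received block is reduced from a first size to a
# second size by extracting only the predetermined (requested) fields.
class ComputingTile:
    def __init__(self):
        self.wanted_fields = None  # set by the initiation command

    def initiate(self, wanted_fields):
        """Command to initiate the size-reduction operation."""
        self.wanted_fields = list(wanted_fields)

    def receive_block(self, block):
        """Receive a block (list of records) and return the reduced block.

        No intervening command is needed once the operation is initiated.
        """
        if self.wanted_fields is None:
            raise RuntimeError("operation not initiated")
        return [{k: rec[k] for k in self.wanted_fields} for rec in block]


# Example: reduce three-column records to the two requested columns.
tile = ComputingTile()
tile.initiate(["A", "B"])
block = [{"A": 1, "B": 2, "C": 3}, {"A": 4, "B": 5, "C": 6}]
print(tile.receive_block(block))  # [{'A': 1, 'B': 2}, {'A': 4, 'B': 5}]
```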
What is claimed is:

1. An apparatus, comprising: a computing tile comprising a processing device and a memory resource, wherein the computing tile is configured to: receive a command to initiate an operation to reduce a size of a block of data from a first size to a second size; responsive to receipt of the command, receive a block of data from a memory device coupled to the apparatus; and responsive to receipt of the block of data, perform an operation on the block of data to extract predetermined data from the block of data to reduce a size of the block of data from a first size to a second size.

2. The apparatus of claim 1, wherein the computing tile is configured to perform the operation on the block of data responsive to receipt of the block of data in the absence of an intervening command.

3. The apparatus of any one of claims 1-2, wherein the computing tile is further configured to cause the reduced size block of data to be transferred to circuitry external to the computing tile.

4. The apparatus of any one of claims 1-2, wherein the computing tile further comprises a DMA buffer to receive a subsequent block of data during performance of the operation on the block of data, and wherein the computing tile is configured to: perform a subsequent operation on the subsequent block of data to extract predetermined data from the subsequent block of data to reduce a size of the subsequent block of data from a first size to a second size in the absence of receipt of an intervening command to initiate the subsequent operation, and cause the reduced size subsequent block of data to be transferred to circuitry external to the computing tile in the absence of receipt of an intervening command by the computing tile.

5. An apparatus, comprising: a memory resource coupled to the processing device and inbound buffering circuitry; and a processing device coupled to queuing circuitry and outbound buffering circuitry, wherein the processing device is configured to: receive, via the queuing circuitry, a command to initiate an operation to reduce respective sizes of blocks of data; cause a first block of data to be loaded into the memory resource from the inbound buffering circuitry; cause the memory resource to perform the operation on the first block of data; cause a second block of data to be loaded into the inbound buffering circuitry; cause the second block of data to be loaded into the memory resource from the inbound buffering circuitry; and responsive to a determination that the operation on the first block of data is complete, cause the memory resource to perform the operation on the second block of data.

6. The apparatus of claim 5, wherein the processing device is further configured to cause the second block of data to be loaded into the inbound buffering circuitry, loaded into the memory resource, and cause the memory resource to perform the operation on the second block of data in the absence of an additional command separate from the command to initiate the operation.

7. The apparatus of claim 5, wherein, as part of the operation, the memory resource is configured to: store the first block of data in a first partition of the memory resource; transfer a relevant portion of the first block of data to a second partition of the memory resource; and transfer the data stored in the second partition to the outbound buffering circuitry.

8. The apparatus of any one of claims 5-7, wherein the command to initiate the operation includes an interrupt message.
9. The apparatus of any one of claims 5-7, wherein the first block of data or the second block of data includes data corresponding to a database, and wherein the operation comprises a filtering operation to extract particular columns of data from the first block of data or the second block of data.

10. The apparatus of any one of claims 5-7, wherein the processing device is configured to cause, subsequent to performance of the operation, at least one of a first resultant block of data and a second resultant block of data to be: transferred to the outbound buffering circuitry; and transferred from the outbound buffering circuitry to circuitry external to the apparatus in the absence of an additional command separate from the command to initiate the operation.

11. A system, comprising: a plurality of computing tiles each comprising a respective memory resource and a respective reduced instruction set computing (RISC) device, wherein computing tiles among the plurality of computing tiles are configured to: receive respective streams of data comprising a plurality of blocks of data; and perform operations on the blocks of data to extract requested portions of the blocks of data by transferring portions of the blocks of data between partitions of the respective memory resources.

12. The system of claim 11, further comprising a communication subsystem coupled to the plurality of computing tiles, wherein the communication subsystem is configured to provide communications pathways between the plurality of computing tiles to allow a first computing tile among the plurality of computing tiles to access an address space associated with a second computing tile among the plurality of computing tiles.

13. The system of any one of claims 11-12, further comprising a controller coupled to the computing tiles, wherein the controller is configured to allocate particular computing tiles among the plurality of computing tiles to perform the operations on the blocks of data.

14. The system of any one of claims 11-12, wherein the computing tiles are configured to initiate the operations on the blocks of data in response to receipt of an initiation command, and wherein the computing tiles are configured to receive the respective streams of data and perform the operations on the blocks of data in the absence of a command subsequent to the initiation command.

15. The system of any one of claims 11-12, wherein the computing tiles are configured to transfer the blocks of data that include the extracted requested portions of data to circuitry external to the computing tiles in response to completion of the operation to extract the requested data.
16. A method, comprising: receiving, by a processing device, a command to initiate performance of an operation involving blocks of data stored in a memory device coupled to the processing device; receiving, responsive to the initiation command, a first block of data from the memory device; performing, responsive to receipt of the first block of data, a first operation to extract data from the first block of data received from the memory device; receiving a second block of data from the memory device at the processing device while the processing device is performing the first operation; and performing, responsive to completion of the first operation and before receiving an additional initiation command, a second operation to extract data from the second block of data received by the processing device.

17. The method of claim 16, further comprising buffering, by buffer circuitry coupled to the processing device, the second block of data prior to performance of the second operation such that the second block of data is available to the processing device to perform the second operation upon completion of the first operation.

18. The method of claim 16, further comprising: requesting, by the processing device, information stored in an address space of a processing device different than the processing device; and transferring the requested information from the processing device different than the processing device to the processing device.

19. The method of claim 16, further comprising transferring the data extracted from the first block of data to circuitry external to the processing device in response to completion of the operation to extract data from the first block of data.

20. The method of claim 16, wherein performing the first operation to extract data from the first block of data further comprises: storing the first block of data in a first partition of a memory resource coupled to the processing device; and selectively transferring a portion of data associated with the first block of data to a second partition of the memory resource, wherein the portion of data includes the data to be extracted from the block of data.

21. The method of any one of claims 16-20, further comprising: generating a logical record corresponding to at least one of the data extracted from the first block of data and the second block of data; and transferring the logical record to circuitry external to the processing device. |
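A minimal sketch, under assumed names, of the partitioned-memory flow recited in claims 7, 20, and 21: the full block is stored in a first partition, the relevant portion is moved to a second partition, and a logical record describes where the extracted data now resides. TileMemory, LogicalRecord, and extract are illustrative, not the claimed structures.

```python
# Hypothetical sketch of the two-partition flow recited in claims 7, 20, and 21:
# the full block lands in a first partition, the relevant portion is copied to a
# second partition, and a logical record points at that second-partition location.
from dataclasses import dataclass

@dataclass
class LogicalRecord:
    partition: str   # which partition holds the extracted data
    offset: int      # where the extracted data begins
    length: int      # how many records were extracted

class TileMemory:
    def __init__(self):
        self.partitions = {"block": [], "extracted": []}

    def store_block(self, block):
        self.partitions["block"] = list(block)

    def extract(self, predicate):
        """Move the relevant portion of the block to the second partition and
        return a logical record describing where it now lives."""
        start = len(self.partitions["extracted"])
        relevant = [rec for rec in self.partitions["block"] if predicate(rec)]
        self.partitions["extracted"].extend(relevant)
        return LogicalRecord("extracted", start, len(relevant))

mem = TileMemory()
mem.store_block([{"id": 1, "keep": True}, {"id": 2, "keep": False}])
record = mem.extract(lambda rec: rec["keep"])
print(record, mem.partitions["extracted"])
```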
COMPUTING TILE

Technical Field

[0001] The present disclosure relates generally to semiconductor memory and methods, and more particularly, to apparatuses, systems, and methods for a computing tile.

Background

[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic systems. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data (e.g., host data, error data, etc.) and includes random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), synchronous dynamic random access memory (SDRAM), and thyristor random access memory (TRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), such as spin torque transfer random access memory (STT RAM), among others.

[0003] Memory devices may be coupled to a host (e.g., a host computing device) to store data, commands, and/or instructions for use by the host while the computer or electronic system is operating. For example, data, commands, and/or instructions can be transferred between the host and the memory device(s) during operation of a computing or other electronic system.

Brief Description of the Drawings

[0004] Figure 1 is a functional block diagram in the form of a computing system including an apparatus including a storage controller and a number of memory devices in accordance with a number of embodiments of the present disclosure.

[0005] Figure 2 is a functional block diagram in the form of an apparatus including a storage controller in accordance with a number of embodiments of the present disclosure.

[0006] Figure 3 is another functional block diagram in the form of an apparatus including a storage controller in accordance with a number of embodiments of the present disclosure.

[0007] Figure 4A is yet another functional block diagram in the form of an apparatus including a storage controller in accordance with a number of embodiments of the present disclosure.

[0008] Figure 4B is yet another functional block diagram in the form of an apparatus including a storage controller in accordance with a number of embodiments of the present disclosure.

[0009] Figure 4C is yet another functional block diagram in the form of an apparatus including a storage controller in accordance with a number of embodiments of the present disclosure.

[0010] Figure 5 is a block diagram in the form of a computing tile in accordance with a number of embodiments of the present disclosure.

[0011] Figure 6 is another block diagram in the form of a computing tile in accordance with a number of embodiments of the present disclosure.

[0012] Figure 7 is a flow diagram representing an example method for storage device operation orchestration in accordance with a number of embodiments of the present disclosure.

Detailed Description

[0013] The present disclosure includes apparatuses, systems, and methods for a computing tile. An example apparatus includes a computing tile comprising a processing device and a memory resource. The computing tile is configured to receive a command to initiate an operation to reduce a size of a block of data from a first size to a second size.
Responsive to receipt of the command, the computing tile can receive a block of data from a memory device coupled to the apparatus. Responsive to receipt of the block of data, the computing tile can perform an operation on the block of data to extract predetermined data from the block of data to reduce a size of the block of data from a first size to a second size. [0014] Memory devices may be used to store important or critical data in a computing device and can transfer such data between a host associated with the computing device. However, as the size and quantity of data stored by memory devices increases, transferring the data to and from the host can become time consuming and resource intensive. For example, when a host requests large blocks of data from a memory device, an amount of time and/or an amount of resources consumed in obliging the request can increase in proportion to the size and/or quantity of data associated with the blocks of data.[0015] As storage capability of memory devices increases, these effects can become more pronounced as more and more data are able to be stored by the memory device and are therefore available to be transferred to or from the host. In addition, blocks of requested data can include data that is not relevant or needed by the host. For example, in some approaches, irrelevant data may be transferred to the host with a block of data that includes relevant data. This can lead to a need for further processing on the host end to extract the relevant data from the block of data, which can incur additional processing time and/or consume additional processing resources.[0016] For example, in some approaches, when a block of data that includes a large quantity of information such as a block of data that includes multiple columns of information, all of the information included in the block of data may be transferred to the host despite the host desiring only certain columns of data included in the block of data. In the case of large blocks of data, the processing time and/or resource consumption associated with processing the blocks of data to extract relevant information can become excessive, thereby reducing the efficacy of the host or computing device.[0017] As a non-limiting example, the host may request specific data that is stored in a database by a memory device. The host may only be interested in in the first two columns of data from the database but not the third column of data. In some approaches, the memory device may transfer all three columns of data to the host and the host may perform additional processing on the data to obtain only the relevant first two columns. In such examples, additional time, bandwidth, and/or processing resources may be consumed not only in transferring an entire column of data to the host that the host is not going to use, but also in host operations to remove the irrelevant data (e.g., the third column in this example).[0018] In contrast, embodiments herein allow for the relevant data to be extracted from a block of data by a storage controller (e.g., by circuitry coupled to or provided on the memory device) prior to transfer of the data to the host.For example, embodiments herein can allow for operations, such as filtering operations, in which an amount of data to be transferred to the host is reduced prior to transfer of said data to the host, to be performed on blocks of data prior to the data being transferred to the host. 
In relation to the above non-limiting example, this can allow for the host to receive only the first two columns of data (e.g., the relevant data) instead of the relevant data and the irrelevant data. This can allow for a reduction in time, bandwidth, and/or processing resources consumed not only in transferring irrelevant data to the host, but also can reduce time, bandwidth, and/or processing resources consumed by host operations to remove the irrelevant data in comparison to some approaches.

[0019] Similarly, embodiments herein allow for the relevant data to be extracted from a block of data by a storage controller (e.g., by circuitry coupled to or provided on the memory device) prior to transfer of the data to a memory device coupled to the storage controller. For example, embodiments herein can allow for operations, such as filtering operations, in which an amount of data to be transferred to the memory device(s) is reduced prior to transfer of said data to the memory device(s), to be performed on blocks of data prior to the data being transferred to the memory device(s).

[0020] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and structural changes may be made without departing from the scope of the present disclosure.

[0021] As used herein, designators such as "X," "Y," "N," "M," "A," "B," "C," "D," etc., particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. As used herein, the singular forms "a," "an," and "the" can include both singular and plural referents, unless the context clearly dictates otherwise. In addition, "a number of," "at least one," and "one or more" (e.g., a number of memory banks) can refer to one or more memory banks, whereas a "plurality of" is intended to refer to more than one of such things. Furthermore, the words "can" and "may" are used throughout this application in a permissive sense (i.e., having the potential to, being able to), not in a mandatory sense (i.e., must). The term "include," and derivations thereof, means "including, but not limited to." The terms "coupled" and "coupling" mean to be directly or indirectly connected physically or for access to and movement (transmission) of commands and/or data, as appropriate to the context. The terms "data" and "data values" are used interchangeably herein and can have the same meaning, as appropriate to the context.

[0022] The figures herein follow a numbering convention in which the first digit or digits correspond to the figure number and the remaining digits identify an element or component in the figure. Similar elements or components between different figures may be identified by the use of similar digits. For example, 104 may reference element "04" in Figure 1, and a similar element may be referenced as 204 in Figure 2. A group or plurality of similar elements or components may generally be referred to herein with a single element number.
For example, a plurality of reference elements 110-1, 110-2, . . ., 110-N may be referred to generally as 110. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, the proportion and/or the relative scale of the elements provided in the figures are intended to illustrate certain embodiments of the present disclosure and should not be taken in a limiting sense.[0023] Figure 1 is a functional block diagram in the form of a computing system 100 including an apparatus including a storage controller 104 and a number of memory devices 116-1, . . ., 116-N in accordance with a number of embodiments of the present disclosure. As used herein, an“apparatus” can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example. In the embodiment illustrated in Figure 1, memory devices 116-1... 116-N can include a one or more memory modules (e.g., single in-line memory modules, dual in-line memory modules, etc.). The memory devices 116-1, . . ., 116-N can include volatile memory and/or non-volatile memory. In a number of embodiments, memory devices 116-1, ... , 116-N can include a multi-chip device. A multi-chip device can include a number of different memory types and/or memory modules. For example, a memory system can include non-volatile or volatile memory on any type of a module.[0024] The memory devices 116-1, . . ., 116-N can provide main memory for the computing system 100 or could be used as additional memory or storage throughout the computing system 100. Each memory device 116-1, . . ., 116-N can include one or more arrays of memory cells, e.g., volatile and/or non volatile memory cells. The arrays can be flash arrays with a NAND architecture, for example. Embodiments are not limited to a particular type of memory device. For instance, the memory device can include RAM, ROM, DRAM, SDRAM, PCRAM, RRAM, and flash memory, among others.[0025] In embodiments in which the memory devices 116-1, . . ., 116-N include non-volatile memory, the memory devices 116-1, . . ., 116-N can be flash memory devices such as NAND or NOR flash memory devices.Embodiments are not so limited, however, and the memory devices 116-1, . . ., 116-N can include other non-volatile memory devices such as non-volatile random-access memory devices (e.g., NVRAM, ReRAM, FeRAM, MRAM, PCM),“emerging” memory devices such as 3-D Crosspoint (3D XP) memory devices, etc., or combinations thereof.[0026] As illustrated in Figure 1, a host 102 can be coupled to a storage controller 104, which can in turn be coupled to the memory devices 116-1... 116- N. In a number of embodiments, each memory device 116-1... 116-N can be coupled to the storage controller 104 via a channel (e.g., channels 107-1, ... , 107-N). In Figure 1, the storage controller 104, which includes an orchestration controller 106, is coupled to the host 102 via channel 103 and the orchestration controller 106 is coupled to the host 102 via a channel 105. The host 102 can be a host system such as a personal laptop computer, a desktop computer, a digital camera, a smart phone, a memory card reader, and/or intemet-of-thing enabled device, among various other types of hosts, and can include a memory access device, e.g., a processor (or processing device). 
One of ordinary skill in the art will appreciate that "a processor" can intend one or more processors, such as a parallel processing system, a number of coprocessors, etc.

[0027] The host 102 can include a system motherboard and/or backplane and can include a number of processing resources (e.g., one or more processors, microprocessors, or some other type of controlling circuitry). The system 100 can include separate integrated circuits or the host 102, the storage controller 104, the orchestration controller 106, the network-on-chip (NoC) 108, and/or the memory devices 116-1, . . ., 116-N can be on the same integrated circuit. The system 100 can be, for instance, a server system and/or a high performance computing (HPC) system and/or a portion thereof. Although the example shown in Figure 1 illustrates a system having a Von Neumann architecture, embodiments of the present disclosure can be implemented in non-Von Neumann architectures, which may not include one or more components (e.g., CPU, ALU, etc.) often associated with a Von Neumann architecture.

[0028] The storage controller 104 can include an orchestration controller 106, a network on a chip (NoC) 108, a plurality of computing tiles 110-1, . . ., 110-N, which are described in more detail in connection with Figures 5 and 6, herein, and a media controller 112. The orchestration controller 106 can include circuitry and/or logic configured to allocate and de-allocate resources to the computing tiles 110-1, . . ., 110-N during performance of operations described herein. In some embodiments, the orchestration controller 106 can be an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other combination of circuitry and/or logic configured to orchestrate operations performed by the computing tiles 110-1, . . ., 110-N. For example, the orchestration controller 106 can include circuitry and/or logic to control the computing tiles 110-1, . . ., 110-N to perform operations on blocks of received data to reduce an amount of data included in the block of data.

[0029] The orchestration controller 106 can be configured to request a block of data from one or more of the memory devices 116-1, . . ., 116-N and cause the computing tiles 110-1, . . ., 110-N to perform an operation (e.g., a filtering operation) on the block of data. The operation may be performed to reduce a total amount of data (e.g., a number of bits of data) associated with the block of data. The orchestration controller 106 can be further configured to cause the block of data that has been operated on (e.g., a filtered block of data) to be transferred to an interface (e.g., communication paths 103 and/or 105) and/or the host 102.

[0030] In some embodiments, the orchestration controller 106 can be one of the plurality of computing tiles 110. For example, the orchestration controller 106 can include the same or similar circuitry that the computing tiles 110-1, . . ., 110-N include, as described in more detail in connection with Figure 4B, herein. However, in some embodiments, the orchestration controller 106 can be a distinct or separate component from the computing tiles 110-1, . . ., 110-N, and may therefore include different circuitry than the computing tiles 110, as shown in Figure 1.

[0031] The NoC 108 can be a communication subsystem that allows for communication between the orchestration controller 106 and the computing tiles 110-1, . . ., 110-N.
The NoC 108 can include circuitry and/or logic to facilitate the communication between the orchestration controller 106 and the computing tiles 110-1, . . ., 110-N. In some embodiments, as described in more detail in connection with Figure 2, herein, the NoC 108 can receive an output from the computing tiles 110-1, . . ., 110-N and transfer the output from the computing tiles 110-1, . . ., 110-N to the orchestration controller 106 and/or the host 102, and vice versa. For example, the NoC 108 may be configured to receive data that has been subjected to a filtering operation by the computing tiles 110-1, . . ., 110-N and transfer the filtered data to the orchestration controller 106 and/or the host 102. In some embodiments, as described in more detail in connection with Figure 4B, herein, the NoC 108 can include at least a portion of the orchestration controller 106. For example, the NoC 108 can include the circuitry that comprises the orchestration controller 106, or a portion thereof.

[0032] Although a NoC 108 is shown in Figure 1, embodiments are not limited to utilization of a NoC 108 to provide a communication path between the orchestration controller 106 and the computing tiles 110-1, . . ., 110-N. For example, other communication paths such as a storage controller crossbar (XBAR) may be used to facilitate communication between the computing tiles 110-1, . . ., 110-N and the orchestration controller 106.

[0033] The media controller 112 can be a "standard" or "dumb" media controller. For example, the media controller 112 can be configured to perform simple operations such as copy, write, read, error correct, etc. for the memory devices 116-1, . . ., 116-N. However, in some embodiments, the media controller 112 does not perform processing (e.g., operations to manipulate data) on data associated with the memory devices 116-1, . . ., 116-N. For example, the media controller 112 can cause a read and/or write operation to be performed to read or write data from or to the memory devices 116-1, . . ., 116-N via the communication paths 107-1, . . ., 107-N, but the media controller 112 may not perform processing on the data read from or written to the memory devices 116-1, . . ., 116-N. In some embodiments, the media controller 112 can be a non-volatile media controller, although embodiments are not so limited.

[0034] The embodiment of Figure 1 can include additional circuitry that is not illustrated so as not to obscure embodiments of the present disclosure. For example, the storage controller 104 can include address circuitry to latch address signals provided over I/O connections through I/O circuitry. Address signals can be received and decoded by a row decoder and a column decoder to access the memory devices 116-1, . . ., 116-N. It will be appreciated by those skilled in the art that the number of address input connections can depend on the density and architecture of the memory devices 116-1, . . ., 116-N.

[0035] Figure 2 is a functional block diagram in the form of an apparatus including a storage controller 204 in accordance with a number of embodiments of the present disclosure. The storage controller 204 can be analogous to the storage controller 104 illustrated in Figure 1. As shown in Figure 2, the storage controller 204 can include a media controller 212, a plurality of computing tiles 210-1, . . ., 210-N, a network on chip (NoC) 208, and an orchestration controller 206.

[0036] The media controller 212 can be configured to retrieve blocks of data 211A-1, . . ., 211A-N, 211B-1, . . ., 211B-N, 211C-1, . . ., 211C-N, 211D-1, . . ., 211D-N, 211E-1, . . ., 211E-N from a memory device (e.g., memory device(s) 116-1, . . ., 116-N illustrated in Figure 1) coupled to the storage controller 204 in response to a request from the orchestration controller 206. The media controller can subsequently cause the blocks of data 211A-1, . . ., 211A-N, 211B-1, . . ., 211B-N, 211C-1, . . ., 211C-N, 211D-1, . . ., 211D-N, 211E-1, . . ., 211E-N to be transferred to the computing tiles 210-1, . . ., 210-N and/or the orchestration controller 206.

[0037] Similarly, the media controller 212 can be configured to receive blocks of data 211A-1, . . ., 211A-N, 211B-1, . . ., 211B-N, 211C-1, . . ., 211C-N, 211D-1, . . ., 211D-N, 211E-1, . . ., 211E-N from the computing tiles 210 and/or the orchestration controller 206. The media controller can subsequently cause the blocks of data 211A-1, . . ., 211A-N, 211B-1, . . ., 211B-N, 211C-1, . . ., 211C-N, 211D-1, . . ., 211D-N, 211E-1, . . ., 211E-N to be transferred to a memory device coupled to the storage controller 204.

[0038] The blocks of data 211 can be approximately 4 kilobytes in size (although embodiments are not limited to this particular size) and can be processed in a streaming manner by the computing tiles 210-1, . . ., 210-N in response to one or more commands generated by the orchestration controller 206. For example, as described in more detail in connection with Figures 5 and 6, herein, because the computing tiles 210 can process a second block of data 211 in response to completion of a process on a preceding block of data 211, the blocks of data 211 can be continuously streamed through the computing tiles 210 while the blocks of data 211 are being processed by the computing tiles 210. In some embodiments, the blocks of data 211 can be processed in a streaming fashion through the computing tiles 210 in the absence of an intervening command from the orchestration controller 206. That is, in some embodiments, the orchestration controller 206 can issue a command to cause the computing tiles 210 to process blocks of data 211 received thereto and blocks of data 211 that are subsequently received by the computing tiles 210 can be processed in the absence of an additional command from the orchestration controller 206.

[0039] In some embodiments, processing the blocks 211 of data can include reducing a size and/or quantity of data associated with the blocks of data 211. For example, the computing tiles 210-1, . . ., 210-N can, in response to commands from the orchestration controller 206, perform filtering operations on the blocks of data 211 to remove unwanted data, extract relevant data, or otherwise parse the blocks of data 211 to reduce a size or quantity of data associated therewith.

[0040] In a non-limiting example, the blocks of data 211 can include one or more comma-separated value (CSV) files. If particular strings or particular data are desired from the CSV file(s), the orchestration controller 206 can send a command to the computing tiles 210 to cause the computing tiles 210 to receive blocks of data 211 containing the CSV files from, for example, a memory device coupled to the storage controller 204.
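A simplified, software-only sketch of the kind of CSV filtering described in this example, assuming the block has already been delivered to a tile as text; filter_csv_block and the column names are hypothetical, and the computing tiles would perform the equivalent work on streamed blocks rather than Python strings.

```python
# Hypothetical sketch of the CSV filtering described above: a ~4 KB block of
# CSV text is parsed and only the requested columns are kept, so the data
# transferred out of the tile is smaller than the data streamed into it.
import csv
import io

def filter_csv_block(block_text, wanted_columns):
    """Return CSV text containing only the requested columns of the block."""
    reader = csv.DictReader(io.StringIO(block_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=wanted_columns)
    writer.writeheader()
    for row in reader:
        writer.writerow({col: row[col] for col in wanted_columns})
    return out.getvalue()

block = "A,B,C\n1,2,3\n4,5,6\n"
print(filter_csv_block(block, ["A", "B"]))
# A,B
# 1,2
# 4,5
```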
The computing tiles 210 can perform operations on the CSV file(s) to extract the relevant information, as described in more detail in connection with Figure 5, herein, and subsequently transfer the relevant data out of the computing tiles 210 to circuitry external to the computing tiles 210 (e.g., to the orchestration controller 204, the NoC 208, and/or a host, such as the host 102 illustrated in Figure 1, herein).[0041] In another non-limiting example in which two columns of data A and B are requested from a block of data (e.g., the block of data 21 1A-1) containing three columns of data A, B, and C, the block of data containing all three columns can be transferred to the computing tiles 210 in response to a command from the orchestration controller 206. The computing tiles 210 can selectively process the block of data to extract the relevant columns (e.g., column A and column B) from the block of data, and can subsequently transfer the filtered data out of the computing tiles 210 to circuitry external to the computing tiles 210 (e.g., to the orchestration controller 206, the NoC 208, and/or a host, such as the host 102 illustrated in Figure 1, herein).[0042] The orchestration controller 206 can be further configured to send commands to the computing tiles 210-1, . . ., 210-N to allocate and/or de-allocate resources available to the computing tiles 210-1, . . ., 210-N for use in processing the blocks of data 211. In some embodiments, allocating and/or de allocating resources available to the computing tiles 210-1, . . ., 210-N can include selectively enabling some of the computing tiles 210 while selectively disabling some of the computing tiles 210. For example, if less than a total number of computing tiles 210 are required to process the blocks of data 211, the orchestration controller 206 can send a command to the computing tiles 210 that are to be used for processing the blocks of data 211 to enable only those computing tiles 210 desired to process the blocks of data 211.[0043] The orchestration controller 206 can, in some embodiments, be further configured to send commands to synchronize performance of operations performed by the computing tiles 210. For example, the orchestration can send a command to a first computing tile (e.g., the computing tile 210-1) to cause the first computing tile to perform a first operation, and the orchestration controller 206 can send a command to a second computing tile (e.g., the computing tile 210-2) to perform a second operation using the second computing tile.Synchronization of performance of operations performed by the computing tiles 210 by the orchestration controller 206 can further include causing the computing tiles 210 to perform particular operations at particular time or in a particular order.[0044] In some embodiments, the filtered blocks of data can be converted into logical records 213-1, . . ., 213-N subsequent to processing of the blocks of data 211 by the computing tiles 210. The logical records 213 can comprise data records that are independent of their physical locations. 
For example, the logical records 213 may be data records that point to a location in at least one of the computing tiles 210 where physical data corresponding to the processed (e.g., the filtered) block of data is stored.

[0045] As described in more detail in connection with Figures 5 and 6, herein, the processed or filtered block of data 211 can be stored in a partition of a computing tile memory (e.g., the computing tile memory 538 illustrated in Figure 5 or the computing tile memory 638 illustrated in Figure 6) that is different than a partition in which the block of data is stored prior to processing as part of the operation to process or filter the block of data to extract relevant data or otherwise reduce a size or quantity of bits associated with the block of data. In some embodiments, the logical records 213 can point to that location such that the processed or filtered data can be accessed from the computing tiles 210 and transferred to circuitry external to the computing tiles 210.

[0046] In some embodiments, the orchestration controller 206 can receive and/or send blocks of data 211E-1, . . ., 211E-N directly to and from the media controller 212. This can allow the orchestration controller 206 to transfer blocks of data 211E-1, . . ., 211E-N that are not processed by the computing tiles 210 to and from the media controller 212.

[0047] For example, if the orchestration controller 206 receives unprocessed blocks of data 211E-1, . . ., 211E-N from a host (e.g., the host 102 illustrated in Figure 1) coupled to the storage controller 204 that are to be stored by memory device(s) (e.g., the memory devices 116 illustrated in Figure 1) coupled to the storage controller 204, the orchestration controller 206 can cause the unprocessed blocks of data 211E-1, . . ., 211E-N to be transferred to the media controller 212, which can, in turn, cause the unprocessed blocks of data 211E-1, . . ., 211E-N to be transferred to memory device(s) coupled to the storage controller 204.

[0048] Similarly, if the host requests an unprocessed (e.g., a full) block of data (e.g., a block of data that is not processed by the computing tiles 210), the media controller 212 can cause full blocks of data 211E-1, . . ., 211E-N to be transferred to the orchestration controller 206, which can subsequently transfer the unprocessed blocks of data 211E-1, . . ., 211E-N to the host.

[0049] Figure 3 is another functional block diagram in the form of an apparatus including a storage controller 304 in accordance with a number of embodiments of the present disclosure. The storage controller 304 can be analogous to the storage controller 104 illustrated in Figure 1 or the storage controller 204 illustrated in Figure 2, herein. As shown in Figure 3, the storage controller 304 can include a media controller 312, a plurality of computing tiles 310-1, . . ., 310-N, a network on chip (NoC) 308, and an orchestration controller 306.

[0050] The media controller 312 can be configured to retrieve blocks of data 311A-1, . . ., 311A-N, 311B-1, . . ., 311B-N, 311C-1, . . ., 311C-N, 311D-1, . . ., 311D-N, 311E-1, . . ., 311E-N and/or logical records 313A-1, . . ., 313A-N, 313B-1, . . ., 313B-N, 313C-1, . . ., 313C-N, 313D-1, . . ., 313D-N, 313E-1, . . ., 313E-N from a memory device (e.g., memory device(s) 116-1, . . ., 116-N illustrated in Figure 1) coupled to the storage controller 304 in response to a request from the orchestration controller 306. The media controller can subsequently cause the blocks of data 311A-1, . . ., 311A-N, 311B-1, . . ., 311B-N, 311C-1, . . ., 311C-N, 311D-1, . . ., 311D-N, 311E-1, . . ., 311E-N and/or logical records 313A-1, . . ., 313A-N, 313B-1, . . ., 313B-N, 313C-1, . . ., 313C-N, 313D-1, . . ., 313D-N, 313E-1, . . ., 313E-N to be transferred to the computing tiles 310-1, . . ., 310-N and/or the orchestration controller 306.

[0051] Similarly, the media controller 312 can be configured to receive blocks of data 311A-1, . . ., 311A-N, 311B-1, . . ., 311B-N, 311C-1, . . ., 311C-N, 311D-1, . . ., 311D-N, 311E-1, . . ., 311E-N and/or logical records 313A-1, . . ., 313A-N, 313B-1, . . ., 313B-N, 313C-1, . . ., 313C-N, 313D-1, . . ., 313D-N, 313E-1, . . ., 313E-N from the computing tiles 310 and/or the orchestration controller 306. The media controller can subsequently cause the blocks of data 311A-1, . . ., 311A-N, 311B-1, . . ., 311B-N, 311C-1, . . ., 311C-N, 311D-1, . . ., 311D-N, 311E-1, . . ., 311E-N and/or logical records 313A-1, . . ., 313A-N, 313B-1, . . ., 313B-N, 313C-1, . . ., 313C-N, 313D-1, . . ., 313D-N, 313E-1, . . ., 313E-N to be transferred to a memory device coupled to the storage controller 304.

[0052] The blocks of data 311 can be approximately 4 kilobytes in size and can be processed in a streaming manner by the computing tiles 310-1, . . ., 310-N in response to one or more commands generated by the orchestration controller 306. In some embodiments, processing the blocks 311 of data can include reducing a size and/or quantity of data associated with the blocks of data 311. For example, the computing tiles 310-1, . . ., 310-N can, in response to commands from the orchestration controller 306, perform filtering operations on the blocks of data 311 to remove unwanted data, extract relevant data, or otherwise parse the blocks of data 311 to reduce a size or quantity of data associated therewith. For example, the computing tiles 310-1, . . ., 310-N can, in response to commands from the orchestration controller 306, process blocks of data 311, generate logical records 313, and/or transfer the logical records to a location external to the computing tiles 310.

[0053] Figures 4A-4C illustrate various examples of a functional block diagram in the form of an apparatus including a storage controller 404 in accordance with a number of embodiments of the present disclosure. In Figures 4A-4C, a media controller 412 is in communication with a plurality of computing tiles 410, a NoC 408, and an orchestration controller 406, which is in communication with input/output (I/O) buffers 422. Although eight (8) discrete computing tiles 410 are shown in Figures 4A-4C, it will be appreciated that embodiments are not limited to a storage controller 404 that includes eight discrete computing tiles 410. For example, the storage controller 404 can include one or more computing tiles 410, depending on characteristics of the storage controller 404 and/or overall system in which the storage controller 404 is deployed.

[0054] As shown in Figures 4A-4C, the media controller 412 can include a direct memory access (DMA) component 418 and a DMA communication subsystem 419. The DMA 418 can facilitate communication between the media controller 412 and memory device(s), such as the memory devices 116-1, . . ., 116-N illustrated in Figure 1, coupled to the storage controller 404 independent of a central processing unit of a host, such as the host 102 illustrated in Figure 1.
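The staging of one block while another is being processed can be pictured with a small producer/consumer sketch, where a bounded queue stands in for the inbound (DMA-style) buffer; the function names and the even-number filter are illustrative assumptions, not the DMA component 418 itself.

```python
# Hypothetical sketch of how an inbound (DMA-style) buffer lets block transfer
# overlap with block processing: a producer stages blocks into a bounded queue
# while a consumer filters whichever block is already buffered.
import queue
import threading

inbound = queue.Queue(maxsize=1)    # stands in for the inbound buffer

def producer(blocks):
    for block in blocks:
        inbound.put(block)          # stage the next block while the previous
    inbound.put(None)               # one is still being processed

def consumer(results):
    while True:
        block = inbound.get()
        if block is None:
            break
        results.append([x for x in block if x % 2 == 0])  # example filter

blocks = [[1, 2, 3, 4], [5, 6, 7, 8]]
results = []
t = threading.Thread(target=producer, args=(blocks,))
t.start()
consumer(results)
t.join()
print(results)  # [[2, 4], [6, 8]]
```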
The DMA communication subsystem 419 can be a communication subsystem such as a crossbar ("XBAR"), a network on a chip, or other communication subsystem that allows for interconnection and interoperability between the media controller 412, the storage device(s) coupled to the storage controller 404, and/or the computing tiles 410.

[0055] In some embodiments, the NoC 408 can facilitate visibility between respective address spaces of the computing tiles 410. For example, each computing tile 410-1, . . ., 410-8 can, responsive to receipt of a file, store the file in a memory resource (e.g., in the computing tile memory 538 or the computing tile memory 638 illustrated in Figures 5 and 6, herein) of the computing tile 410. The computing tiles 410 can associate an address (e.g., a physical address) corresponding to a location in the computing tile 410 memory resource in which the file is stored. In addition, the computing tile 410 can break the address associated with the file into logical blocks.

[0056] In some embodiments, the zeroth logical block associated with the file can be transferred to a processing device (e.g., the reduced instruction set computing (RISC) device 536 or the RISC device 636 illustrated in Figures 5 and 6, herein). A particular computing tile (e.g., computing tile 410-2) can be configured to recognize that a particular set of logical addresses are accessible to that computing tile 410-2, while other computing tiles (e.g., computing tile 410-3, 410-4, etc.) can be configured to recognize that different sets of logical addresses are accessible to those computing tiles. Stated alternatively, a first computing tile (e.g., the computing tile 410-2) can have access to a first set of logical addresses associated with that computing tile 410-2, and a second computing tile (e.g., the computing tile 410-3) can have access to a second set of logical addresses associated therewith, etc.

[0057] If data corresponding to the second set of logical addresses (e.g., the logical addresses accessible by the second computing tile 410-3) is requested at the first computing tile (e.g., the computing tile 410-2), the NoC 408 can facilitate communication between the first computing tile (e.g., the computing tile 410-2) and the second computing tile (e.g., the computing tile 410-3) to allow the first computing tile (e.g., the computing tile 410-2) to access the data corresponding to the second set of logical addresses (e.g., the set of logical addresses accessible by the second computing tile 410-3). That is, the NoC 408 can facilitate communication between the computing tiles 410 to allow address spaces of the computing tiles 410 to be visible to one another.

[0058] In some embodiments, communication between the computing tiles 410 to facilitate address visibility can include receiving, by an event queue (e.g., the event queue 532 and 632 illustrated in Figures 5 and 6) of the first computing tile, a message requesting access to the data corresponding to the second set of logical addresses, loading the requested data into a memory resource (e.g., the computing tile memory 538 and 638 illustrated in Figures 5 and 6, herein) of the first computing tile, and transferring the requested data to a message buffer (e.g., the message buffer 534 and 634 illustrated in Figures 5 and 6, herein).
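One plausible reading of this request/response flow, sketched with hypothetical Tile and NoC classes: a request message lands in a tile's event queue, the requested data is loaded from that tile's memory resource, staged in its message buffer, and handed to the NoC for delivery to the requester.

```python
# Hypothetical sketch of inter-tile address visibility: a request arrives in a
# tile's event queue, the data is read from that tile's memory resource,
# buffered in its message buffer, and handed to the NoC for delivery.
from collections import deque

class Tile:
    def __init__(self, name, memory):
        self.name = name
        self.memory = memory           # address -> data held by this tile
        self.event_queue = deque()     # incoming request messages
        self.message_buffer = deque()  # outgoing responses

    def handle_events(self, noc):
        while self.event_queue:
            requester, address = self.event_queue.popleft()
            self.message_buffer.append((requester, self.memory[address]))
        while self.message_buffer:
            noc.deliver(*self.message_buffer.popleft())

class NoC:
    def __init__(self, tiles):
        self.tiles = {t.name: t for t in tiles}
        self.delivered = []

    def request(self, requester, owner, address):
        self.tiles[owner].event_queue.append((requester, address))

    def deliver(self, requester, data):
        self.delivered.append((requester, data))

tile2 = Tile("tile2", {0x10: "hello"})
noc = NoC([tile2])
noc.request("tile1", "tile2", 0x10)   # tile1 asks for data at tile2's address 0x10
tile2.handle_events(noc)
print(noc.delivered)                  # [('tile1', 'hello')]
```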
Once the data has been buffered by the message buffer, the data can be transferred to the second computing tile via the NoC 408.[0059] In other embodiments, an application requesting data that is stored in the computing tiles 410 can know which computing tiles 410 include the data requested. In this example, the application can request the data from the relevant computing tile 410 and/or the address may be loaded into multiple computing tiles 410 and accessed by the application requesting the data via the NoC 408.[0060] As shown in Figure 4A, the orchestration controller 406 comprises discrete circuitry that is physically separate from the NoC 408. The NoC 408 can be a communication subsystem that is provided as one or more integrated circuits that allows communication between the computing tiles 410, the media controller 412, and/or the orchestration controller 406. Non-limiting examples of a NoC 408 can include a XBAR or other communications subsystem that allows for interconnection and/or interoperability of the orchestration controller 406, the computing tiles 410, and/or the media controller 412.[0061] As described above, responsive to receipt of a command generated by the orchestration controller 406 and/or the NoC 408, performance of operations to extract relevant data from blocks of data streamed through the computing tiles 410 can be realized. [0062] As shown in Figure 4B, the orchestration controller 406 is resident on one of the computing tiles 410-1 among the plurality of computing tiles 410-1, . . 410-8. As used herein, the term“resident on” refers to something that is physically located on a particular component. For example, the orchestration controller 406 being“resident on” one of the computing tiles 410 refers to a condition in which the orchestration controller 406 is physically coupled to a particular computing tile. The term“resident on” may be used interchangeably with other terms such as“deployed on” or“located on,” herein.[0063] As described above, responsive to receipt of a command generated by the computing tile 410-1 /orchestration controller 406 and/or the NoC 408, performance of operations to extract relevant data from blocks of data streamed through the computing tiles 410 can be realized.[0064] As shown in Figure 4C, the orchestration controller 406 is resident on the NoC 408. In some embodiments, providing the orchestration controller 406 as part of the NoC 408 results in a tight coupling of the orchestration controller 406 and the NoC 408, which can result in reduced time consumption to perform operations using the orchestration controller 406.[0065] As described above, responsive to receipt of a command generated by the orchestration controller 406 and/or the NoC 408, performance of operations to extract relevant data from blocks of data streamed through the computing tiles 410 can be realized.[0066] Figure 5 is a block diagram in the form of a computing tile 510 in accordance with a number of embodiments of the present disclosure. As shown in Figure 5, the computing tile 510 can include queueing circuitry, which can include a system event queue 530 and/or an event queue 532, and a message buffer 534 (e.g., outbound buffering circuitry). The computing tile 510 can further include a processing device such as a reduced instruction set computing (RISC) device 536, a computing tile memory 538 portion, and a direct memory access buffer 539 (e.g., inbound buffering circuitry). 
The RISC device 536 can be a processing resource that can employ a reduced instruction set architecture (ISA) such as a RISC-V ISA, however, embodiments are not limited to RISC-V IS As and other processing devices and/or IS As can be used.[0067] The system event queue 530, the event queue 532, and the message buffer 534 can be in communication with an orchestration controller such as the orchestration controller 106, 206, 306, and 406 illustrated in Figures 1-4, respectively. In some embodiments, the system event queue 530, the event queue 532, and the message buffer 534 can be in direct communication with the orchestration controller, or the system event queue 530, the event queue 532, and the message buffer 534 can be in communication with a network on a chip such as the NoC 108, 208, and 308 illustrated in Figures 1-3, respectively, which can further be in communication with the orchestration controller.[0068] The system event queue 530, the event queue 532, and the message buffer 534 can receive messages and/or commands from the orchestration controller and/or can send messages and/or commands to the orchestration controller to control operation of the computing tile 510 to perform operations on blocks of data (e.g., blocks of data 211 and 311 illustrated in Figures 2 and 3, herein) that are processed by the computing tile 510. In some embodiments, the commands and/or messages can include messages and/or commands to allocate or de-allocate resources available to the computing tile 510 during performance of the operations. In addition, the commands and/or messages can include commands and/or messages to synchronize operation of the computing tile 510 with other computing tiles deployed in a storage controller (e.g., the storage controller 104, 204, 304, and 404 illustrated in Figure 1-4, respectively).[0069] For example, the system event queue 530, the event queue 532, and the message buffer 534 can facilitate communication between the computing tile 510 and the orchestration controller to cause the computing tile 510 to process blocks of data to reduce a size and/or quantity of data associated with the blocks of data. In a non-limiting example, the system event queue 530, the event queue 532, and the message buffer 534 can process commands and/or messages received from the orchestration controller to cause the computing tile 510 to perform a filtering operation on the block of data to selectively remove portions of the data prior to transferring a reduced data object out of the computing tile 510. This can allow for relevant data to be extracted from the block of data prior to the data being transferred to circuitry external to the computing tile 510 such as the orchestration controller, a NoC, or a host (e.g., the host 102 illustrated in Figure 1, herein). [0070] The system event queue 530 can receive interrupt messages from the orchestration controller or NoC. The interrupt messages can be processed by the system event queue 532 to cause a command or message sent from the orchestration controller or the NoC to be immediately executed. For example, the interrupt message(s) can instruct the system event queue 532 to cause the computing tile 510 to abort operation of pending commands or messages and instead execute a new command or message received from the orchestration controller or the NoC. 
In some embodiments, the new command or message can involve a command or message to initiate an operation to process, using the computing tile 510, one or more blocks of data to extract relevant information therefrom, or to otherwise decrease a size or amount of data associated with the block of data.[0071] The event queue 532 can receive messages that can be processed serially. For example, the event queue 532 can receive messages and/or commands from the orchestration controller or the NoC and can process the messages received in a serial manner such that the messages are processed in the order in which they are received. Non-limiting examples of messages that can be received and processed by the event queue can include request messages from the orchestration controller and/or the NoC to initiate processing of a block of data (e.g., a remote procedure call on the computing tile 510), request messages from other computing tiles to provide or alter the contents of a particular memory location in the computing tile memory 538 of the computing tile that receives the message request (e.g., messages to initiate remote read or write operations amongst the computing tiles), synchronization message requests from other computing tiles to synchronize processing of blocks of data among the computing tiles, etc.[0072] The message buffer 534 can comprise a buffer region to buffer data to be transferred out of the computing tile 510 to circuitry external to the computing tile 510 such as the orchestration controller, the NoC, and/or the host. In some embodiments, the message buffer 534 can operate in a serial fashion such that data is transferred from the buffer out of the computing tile 510 in the order in which it is received by the message buffer 534 The message buffer 534 can further provide routing control and/or bottleneck control by controlling a rate at which the data is transferred out of the message buffer 534 For example, the message buffer 534 can be configured to transfer data out of the computing tile 510 at a rate that allows the data to be transferred out of the computing tile 510 without creating data bottlenecks or routing issues for the orchestration controller, the NoC, and/or the host.[0073] The RISC device 536 can be in communication with the system event queue 530, the event queue 532, and the message buffer 534 and can handle the commands and/or messages received by the system event queue 530, the event queue 532, and the message buffer 534 to facilitate performance of operations on the blocks of data received by the computing tile 510. For example, the RISC device 536 can include circuitry configured to process commands and/or messages to cause a size or quantity of data associated with a block of data received by the computing tile 510 to be reduced. The RISC device 536 may include a single core or may be a multi-core processor.[0074] The computing tile memory 538 can, in some embodiments, be a memory resource such as random-access memory (e.g., RAM, SRAM, etc.). Embodiments are not so limited, however, and the computing tile memory 538 can include various registers, caches, buffers, and/or memory arrays (e.g., 1T1C, 2T2C, 3T, etc. DRAM arrays). The computing tile memory 538 can be configured to receive blocks of data from, for example, a memory device such as the memory devices 116-1, . . ., 116-N illustrated in Figure 1, herein. 
In some embodiments, the computing tile memory 538 can have a size of approximately 256 kilobytes (KB), however, embodiments are not limited to this particular size, and the computing tile memory 538 can have a size greater than, or less than,256 KB.[0075] The computing tile memory 538 can be partitioned into one or more addressable memory regions. As shown in Figure 5, the computing tile memory 538 can be partitioned into addressable memory regions so that various types of data can be stored therein. For example, one or more memory regions can store instructions (“INSTR”) 541 used by the computing tile memory 538, one or more memory regions can store a block of data 543-1, . . ., 543-N (e.g., a block of data retrieved from the memory device(s)), and/or one or more memory regions can serve as a local memory (“LOCAL MEM.”) 545 portion of the computing tile memory 538. Although twenty (20) distinct memory regions are shown in Figure 5, it will be appreciated that the computing tile memory 538 can be partitioned into any number of distinct memory regions.[0076] As discussed above, the blocks of data can be retrieved from the memory device(s) in response to messages and/or commands generated by the orchestration controller (e.g. ,the orchestration controller 106, 206, 306, 406 illustrated in Figures 1-4, herein). In some embodiments, the commands and/or messages can be processed by a media controller such as the media controller 112, 212, 312, or 412 illustrated in Figures 1-4, respectively. Once the blocks of data are received by the computing tile 510, they can be buffered by the DMA buffer 539 and subsequently stored in the computing tile memory 538.[0077] As a result, in some embodiments, the computing tile 510 can provide data driven performance of operations on blocks of data received from the memory device(s). For example, the computing tile 510 can begin performing operations on blocks of data (e.g., operations to reduce a size of the block of data, to extract relevant information from the block of data, to remove irrelevant information from the block of data, etc.) received from the memory device(s) in response to receipt of the block of data.[0078] For example, because of the non-deterministic nature of data transfer from the memory device(s) to the computing tile 510 (e.g., because some blocks of data may take longer to arrive at the computing tile 510 dude to error correction operations performed by a media controller prior to transfer of the block of data to the computing tile 510, etc.), data driven performance of the operations on block of data can improve computing performance in comparison to approaches that do not function in a data driven manner.[0079] In some embodiments, the orchestration controller can send a command or message that is received by the system event queue 530 of the computing tile 510. As described above, the command or message can be an interrupt that instructs the computing tile 510 to request a block of data and perform an operation on the block of data to reduce the size or a quantity of data associated with the block of data. However, the block of data may not immediately be ready to be sent from the memory device to the computing tile 510 due to the non-deterministic nature of data transfers from the memory device(s) to the computing tile 510. 
However, once the block of data is received by the computing tile 510, the computing tile 510 can immediately begin performing the operation to reduce the size or quantity of data associated with the block of data. Stated alternatively, the computing tile 510 can begin performing operations on the block of data responsive to receipt of the block of data without requiring an additional command or message to cause performance of the operation on the block of data.[0080] In some embodiments, the operation can be performed by selectively moving data around in the computing tile memory 538 to extract relevant data from the block of data or to remove irrelevant data from the block of data. In a non-limiting example in which two columns of data A and B are requested from a block of data corresponding to a database and containing three columns of data A, B, and C, the block of data containing all three columns can be transferred to a first block (e.g., block 543-1) of the computing tile memory 538.[0081] The RISC device 536 can execute instructions to cause the first two columns A and B (e.g., the requested or relevant data) of the block of data containing the three columns to be selectively moved to a different partition of the computing tile memory (e.g., to block 543-N). At this stage, the“filtered” block of data (e.g., block 543-N) that contains only the relevant or requested columns A and B can be transferred to the message buffer 534 to be transferred to circuitry external to the computing tile 510.[0082] As the filtered block of data, which can be referred to as a“resultant block of data,” is transferred to the message buffer 534, a subsequent block of data can be transferred from the DMA buffer 539 to the computing tile memory 538 and an operation to reduce a size or quantity of data associated with the subsequent block of data can be initiated in the computing tile memory 538. By having a subsequent block of data buffered into the computing tile 510 prior to completion of the operation on the preceding block of data, blocks of data can be continuously streamed through the computing tile in the absence of additional commands or messages from the orchestration controller to initiate operations on subsequent blocks of data. In addition, by preemptively buffering subsequent blocks of data into the DMA buffer 539, delays due to the non-deterministic nature of data transfer from the memory device(s) to the computing tile 510 can be mitigated as the blocks of data are operated on while being streamed through the computing tile 510. [0083] In another non-limiting example, the block of data can include one or more comma-separated value (CSV) files. If particular strings or particular data are desired from the CSV file, the block of data containing the entire CSV file can be stored in a particular partition (e.g., block 543-1) of the computing tile memory 538. The RISC device 536 can execute instructions to cause the particular strings or particular data (e.g., the requested or relevant data) to be moved to a different partition (e.g., block 543-N) of the computing tile memory 538. 
At this stage, the“filtered” block of data (e.g., block 543-N) that contains only the relevant or requested strings or data can be transferred to the message buffer 534 to be transferred to circuitry external to the computing tile 510.[0084] As the filtered block of data is transferred to the message buffer534, a subsequent block of data can be transferred from the DMA buffer 539 to the computing tile memory 538 and an operation to reduce a size or quantity of data associated with the subsequent block of data can be initiated in the computing tile memory 538[0085] When the data (e.g., the data that has been operated on) is to be moved out of the computing tile 510 to circuitry external to the computing tile 510 (e.g., to the NoC, the orchestration controller, and/or the host), the RISC device 536 can send a command and/or a message to the orchestration controller, which can, in turn send a command and/or a message to request the data from the computing tile memory 538.[0086] Responsive to the command and/or message to request the data, the computing tile memory 538 can transfer the data to a desired location (e.g., to the NoC, the orchestration tile, and/or the host). For example, responsive to a command to request the data that has been operated on, the data that has been operated on can be transferred to the message buffer 534 and subsequently transferred out of the computing tile 510. In some embodiments, the data transferred from the computing tile memory 538 to the NoC, the orchestration controller, and/or the host can be data that has had an operation performed thereon to reduce an original size of the data (e.g., to reduce the size of the block of data received by the computing tile 510 from the memory device(s)) by removing irrelevant data from the block of data and/or by extracting relevant data from the block of data. [0087] Figure 6 is another block diagram in the form of a computing tile610 in accordance with a number of embodiments of the present disclosure. As shown in Figure 6, the computing tile 610 can include a system event queue 630, an event queue 632, and a message buffer 634. The computing tile 610 can further include an instruction cache 635, a data cache 637, a processing device such as a reduced instruction set computing (RISC) device 636, a computing tile memory 638 portion, and a direct memory access buffer 639. The computing tile 610 shown in Figure 6 can be analogous to the computing tile 510 illustrated in Figure 5, however, the computing tile 610 illustrated in Figure 6 further includes the instruction cache 635 and/or the data cache 637.[0088] The instruction cache 635 and/or the data cache 637 can be smaller in size than the computing tile memory 638. 
For example, the computing tile memory can be approximately 256 KB while the instruction cache 635 and/or the data cache 637 can be approximately 32 KB in size.Embodiments are not limited to these particular sizes, however, so long as the instruction cache 635 and/or the data cache 637 are smaller in size than the computing tile memory 638.[0089] In some embodiments, the instruction cache 635 can store and/or buffer messages and/or commands transferred between the RISC device 636 to the computing tile memory 638, while the data cache 637 can store and/or buffer data transferred between the computing tile memory 638 and the RISC device 636.[0090] Figure 7 is a flow diagram representing an example method 750 for storage device operation orchestration in accordance with a number of embodiments of the present disclosure. At block 752, the method 750 can include receiving, by a processing device (e.g., a processing devicecorresponding to a computing tile), a command to initiate performance of an operation involving blocks of data stored in a memory device coupled to the computing tile. The processing device can be a processing device such as the RISC computing device 536/636 illustrated in Figures 5 and 6, herein, and can be part of a computing tile such as computing tiles 110, 210, 310, 410, 510, and 610 illustrated in Figures 1-6, herein. The memory device can be analogous to the memory device(s) 116-1, . . ., 116-N illustrated in Figure 1, herein. In some embodiments, the command to initiate performance of the operation can be generated by an orchestration controller such as the orchestration controller 106, 206, 306, or 406 illustrated in Figures 1-4, herein.[0091] At block 754, the method 750 can include receiving, responsive to the initiation command, a first block of data from the memory device at the computing tile (e.g., from a memory resource coupled to a processing device of the computing tile). In some embodiments, the first block of data can be transferred from the memory device to the storage controller using a media controller such as the media controller 112, 212, 312, or 412 illustrated in Figures 1-4, herein. As described above, in some embodiments, receiving the command to initiate performance of the operation can include receiving the command to initiate performance of the operation by a processing device, such as the RISC device 536 and 636 illustrated in Figures 5 and 6, corresponding to the computing tile.[0092] At block 756, the method 750 can include performing, responsive to receipt of the block of data, a first operation to extract data from the first block of data received by the processing device and/or the computing tile. In some embodiments, performing the first operation can include performing the first operation by a memory resource (e.g., the computing tile memory 538 and 638 illustrated in Figures 5 and 6, herein) corresponding to the processing device and/or the computing tile. In some embodiments, performing the first operation to extract data from the first block of data can include storing the first block of data in a first partition of a memory resource of the computing tile (e.g., in a memory resource coupled to the processing device) and/or selectively transferring a portion of data associated with the first block of data to a second partition of the memory device. The portion of data can include the data to be extracted from the block of data. 
Stated differently, the portion of data can include data that has been filtered such that relevant data is retained and irrelevant data is discarded due to performance of the operation.[0093] At block 758, the method 750 can include receiving a second block of data from the memory device at the processing device of the computing tile while the computing tile is performing the first operation. In some embodiments, the second block of data can be transferred from the memory device to the storage controller using a media controller such as the media controller 112, 212, 312, or 412 illustrated in Figures 1-4, herein. [0094] At block 760, the method 750 can include performing, responsive to completion of the first operation, a second operation to extract data from the second block of data received by the processing device of the computing tile without receiving an additional initiation command. For example, as described above, the computing tile can operate in a data driven manner such that blocks of data are streamed and processed through the processing device and/or the computing tile in the absence of additional commands after the initiation command is received by the processing device of the computing tile. In some embodiments, performing the second operation can include performing the second operation by a memory resource corresponding to the computing tile (e.g., a memory resource coupled to the processing device of the computing tile).[0095] The method 750 can further include buffering, by the processing device of the computing tile, the second block of data prior to performance of the second operation such that the second block of data is available to the computing tile to perform the second operation upon completion of the first operation. The buffering can be performed by a buffer resident on the computing tile such as the DMA buffer 539 and 639 illustrated in Figures 5 and 6, herein.[0096] The method 750 can further include transferring the data extracted from the first block of data to circuitry external to the processing device and/or the computing tile in response to completion of the operation to extract data from the first block of data. In some embodiments, a logical record corresponding to the extracted data can be transferred to the circuitry external to the processing device and/or the computing tile. For example, the method 750 can include generating a logical record corresponding to at least one of the data extracted from the first block of data and the second block of data and transferring the logical word to circuitry external to the computing tile, as described above in connection with Figures 2 and 3.[0097] In some embodiments, the method can include requesting, by the processing device of the computing tile, information stored in an address space of a computing tile different than the computing tile and/or transferring the requested information from the computing tile different than the computing tile to the computing tile. For example, as described above in connection with Figures 4A-4C, the computing tiles can be configured that address spaces of the computing tiles are visible to other computing tiles in a storage controller.[0098] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. 
This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combination of the above embodiments, and other embodiments not specifically described herein will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.[0099] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
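To make the data-driven flow of the preceding embodiment easier to follow (e.g., the block filtering of paragraphs [0079] through [0084] and the method 750 of Figure 7), a minimal software sketch is given below. It is illustrative only: the names stream_blocks, filter_block, and run_tile, the 128-row block size, and the use of Python generators are inventions of this sketch and do not correspond to the claimed circuitry, in which the equivalent steps would be performed by the media controller, the DMA buffer, and the RISC device of a computing tile.

```python
# Illustrative model only (not the claimed hardware): a "tile" receives one
# initiation command, then streams fixed-size blocks from a backing store and
# reduces each block as soon as it arrives, emitting one logical record per
# block without any further per-block commands.

from typing import Iterator, List, Sequence, Tuple

BLOCK_ROWS = 128  # hypothetical stand-in for an approximately 4 KB block


def stream_blocks(rows: Sequence[Sequence[int]],
                  block_rows: int = BLOCK_ROWS) -> Iterator[List[Sequence[int]]]:
    """Stand-in for the media controller / DMA path: yield rows in blocks."""
    for start in range(0, len(rows), block_rows):
        yield list(rows[start:start + block_rows])


def filter_block(block: List[Sequence[int]],
                 wanted_cols: Sequence[int]) -> List[Tuple[int, ...]]:
    """Keep only the requested columns (the 'columns A and B of A, B, C' case)."""
    return [tuple(row[c] for c in wanted_cols) for row in block]


def run_tile(rows: Sequence[Sequence[int]],
             wanted_cols: Sequence[int]) -> Iterator[List[Tuple[int, ...]]]:
    """Single initiation 'command'; every block is then processed as it
    arrives (data driven) and a reduced logical record is emitted."""
    for block in stream_blocks(rows):
        yield filter_block(block, wanted_cols)


if __name__ == "__main__":
    table = [(i, 2 * i, 3 * i) for i in range(300)]        # columns A, B, C
    for record in run_tile(table, wanted_cols=(0, 1)):      # request A and B
        assert all(len(row) == 2 for row in record)         # column C removed
```

The property being modeled is that a single initiation command suffices: each block is reduced as it arrives, and the reduced logical records are emitted in stream order.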
Techniques are described to multiply two numbers, A and B. In general, multiplication is performed by using Karatsuba multiplication on the segments of A and B and adjusting the Karatsuba multiplication based on the values of the most significant bits of A and B. |
1.A computer program provided on a computer-readable storage medium, comprising instructions for causing a circuit to multiply two numbers, A and B, said program for:Divide A into a plurality of segments ax and an additional set ah of at least one bit, where x represents a segment ordinal number and h represents a position of the additional set of the at least one bit;Split B into multiple segments bx and at least one additional bit bh;Performing Karatsuba multiplication on said segments of A and B;The Karatsuba multiplication is adjusted based on the values of ah and bh.2.The computer program of claim 1, wherein the Karatsuba multiplication includes determining:22sa1b1 + 2s [(a1 + a0) (b1 + b0) -a1b1-a0b0] + a0b0Where a0, a1, b0, b1 are the segments of A and B, and their respective widths are s, where s is a positive integer.3.The computer program according to claim 1, wherein the instructions for adjustment include instructions for the following operations:Add the value of ahB [b1: b0] 2h;Add the value of bhA [b1: b0] 2h; andIf ah and bh are both equal to 1, add the value of ahbh22h.4.The computer program of claim 1, wherein the Karatsuba multiplication includes a recursive Karatsuba multiplication performed on at least the smaller segments of A and B.5.The computer program according to claim 1, wherein A and B are divided into one selected from the group consisting of two, three, and five segments.6.The computer program of claim 1, wherein ah and bh include a single bit.7.The computer program according to claim 1, further comprising using the instruction for performing Karatsuba multiplication and the instruction for adjusting to determine a value of gemod.8.The computer program according to claim 7, wherein M is a modulus of the public key.9.A computer-implemented method of multiplying two numbers A and B, the method includes:Divide A into a plurality of segments ax and an additional set ah of at least one bit, where x represents a segment ordinal number and h represents a position of the additional set of the at least one bit;Split B into multiple segments bx and at least one additional bit bh;Performing Karatsuba multiplication on said segments of A and B;The Karatsuba multiplication is adjusted based on the values of ah and bh.10.The method of claim 9, wherein the Karatsuba multiplication includes determining:22sa1b1 + 2s [(a1 + a0) (b1 + b0) -a1b1-a0b0] + a0b0Where a0, a1, b0, b1 are the segments of A and B, and their respective widths are s, where s is a positive integer.11.The method of claim 9, wherein the adjusting comprises:Add the value of ahB [b1: b0] 2h;Add the value of bhA [b1: b0] 2h; andIf ah and bh are both equal to 1, add the value of ahbh22h.12.The method of claim 9, wherein the Karatsuba multiplication comprises a recursive Karatsuba multiplication performed on at least the smaller segments of A and B.13.The method according to claim 9, wherein A and B are divided into one selected from the following group: 2 segments, 3 segments, and 5 segments.14.The method of claim 9, wherein ah and bh include a single bit.15.The method according to claim 9, further comprising determining a value of gemod M using an instruction for performing Karatsuba multiplication and an instruction for adjustment.16.The method of claim 15 wherein M is the modulus of the public key.17.A system including:A circuit for multiplying two numbers, A and B, said circuit being used to:Split A into multiple segments ax and at least one additional set ah, where x represents the ordinal number of the 
segment and h represents the position of the additional set of the at least one bit;Split B into multiple segments bx and at least one additional bit bh;Performing Karatsuba multiplication on said segments of A and B;The Karatsuba multiplication is adjusted based on the values of ah and bh.18.The system of claim 17, wherein the Karatsuba multiplication includes determining:22sa1b1 + 2s [(a1 + a0) (b1 + b0) -a1b1-a0b0] + a0b0Where a0, a1, b0, b1 are the segments of A and B, and their respective widths are s, where s is a positive integer.19.The system of claim 17, wherein the circuit for adjustment comprises a circuit that performs the following operations:Add the value of ahB [b1: b0] 2h;Add the value of bhA [b1: b0] 2h; andIf ah and bh are both equal to 1, add the value of ahbh22h.20.The system of claim 17, wherein the Karatsuba multiplication includes a recursive Karatsuba multiplication performed on at least the smaller segments of A and B.21.The system of claim 17, wherein ah and bh include a single bit.22.The system of claim 17, wherein the circuit comprises a programmable circuit for executing instructions to perform multiplication on A and B.23.The system of claim 22, further comprising a plurality of programmable cores integrated on the same die as the circuit and communicatively coupled to the circuit. |
Multiply Two Numbers

Background

Cryptography protects data from unwanted access. Cryptography typically involves performing mathematical operations on data (encryption) that render the original data (plaintext) unintelligible (ciphertext). Inverse mathematical operations (decryption) recover the original data from the ciphertext. Beyond encryption and decryption, cryptography covers a wide variety of other uses. For example, cryptography is often used for authentication (i.e., reliably determining the identity of a communicating party), for the generation of digital signatures, and so forth.

Current cryptographic techniques rely heavily on intensive mathematical operations. For example, many schemes use a type of modular arithmetic known as modular exponentiation, which involves raising a large number to some power and reducing it with respect to a modulus (i.e., taking the remainder when divided by a given modulus). Mathematically, modular exponentiation can be expressed as g^e mod M, where e is the exponent and M is the modulus.

Conceptually, multiplication and modular reduction are straightforward operations. However, the sizes of the numbers used in these systems are often very large and significantly exceed the native word size of a processor. For example, a cryptographic protocol may require modular operations on numbers 1024 to 4096 bits in length or longer, while many processors have native word sizes of only 32 or 64 bits. Performing operations on such large numbers can be very expensive in terms of time and computational resources.

Brief Description of the Drawings

Figures 1 and 2 illustrate Karatsuba multiplication.
FIG. 3 is a flow chart illustrating a sample implementation of Karatsuba multiplication.
Figures 4 and 5 illustrate folding of a number N into a number N', where N ≡ N' mod M.
Figure 6 illustrates determination of N mod M.
Figure 7 illustrates iterative folding of a number N.
Figure 8 depicts an architecture to perform Karatsuba multiplication and/or modular reduction.

Detailed Description

As described above, a variety of cryptographic operations involve multiplication and/or modular reduction of very large numbers. Described herein are a variety of techniques that can reduce the burden of these computationally intensive operations and speed operation of cryptographic systems. The techniques can also be applied in more general-purpose, non-cryptographic, computing settings. One such technique involves improving the efficiency of a technique used to multiply large numbers known as Karatsuba multiplication. Another technique involves improving the efficiency of modular reduction.

Karatsuba Multiplication

A wide variety of approaches have been developed to perform multiplication of two numbers. A common approach, known as textbook multiplication, involves segmenting the operands and performing multiplication operations on the smaller segments. As an example, two n-bit wide numbers A and B can be expressed as sets of smaller sub-segments such as:

A = a1 2^s + a0    [1]
B = b1 2^s + b0    [2]

The a0 and b0 terms represent the s least significant bits of A and B, while a1 and b1 represent the remaining more significant bits.
In this notation, the subscript x in ax and bx identifies the ordinal of a segment within a number (e.g., a0 represents the least significant bits of A, a1 the next more significant bits, and so forth). Using conventional textbook multiplication, A x B can be computed using four smaller multiplications:

A x B = a1b1 2^(2s) + (a0b1 + b0a1) 2^s + a0b0    [3]

A multiplication technique known as Karatsuba multiplication reduces the number of segment multiplications. For example, for A and B above, the term in [3] of:

(a0b1 + b0a1)    [4]

can instead be computed as:

[(a0 + a1)(b0 + b1)] - a1b1 - a0b0    [5]

Since a1b1 and a0b0 form the other terms of equation [3], using the values of a1b1 and a0b0 in equation [5] does not represent an additional computational cost. Substituting equation [5] for equation [4] in equation [3], the Karatsuba multiplication of A x B can be computed as:

A x B = a1b1 2^(2s) + ([(a0 + a1)(b0 + b1)] - a1b1 - a0b0) 2^s + a0b0    [6]

This substitution trades two multiplications for a single multiplication and two additions. In most cases, this represents a significant gain in computational efficiency.

In the example above, Karatsuba multiplied numbers segmented into two segments (i.e., "two-term Karatsuba multiplication"). Karatsuba, however, can also be applied to other numbers of segments. For example, a three-term Karatsuba multiplication can be defined for numbers A and B as:

A = a2 2^(2s) + a1 2^s + a0    [7]
B = b2 2^(2s) + b1 2^s + b0    [8]
A x B = a2b2 2^(4s) + a1b1 2^(2s) + a0b0 + [(a2 + a1)(b2 + b1) - a2b2 - a1b1] 2^(3s) + [(a2 + a0)(b2 + b0) - a2b2 - a0b0] 2^(2s) + [(a0 + a1)(b0 + b1) - a0b0 - a1b1] 2^s    [9]

where A and B are each segmented into three s-bit segments. Like the two-term Karatsuba multiplication of [6], the three-term Karatsuba multiplication of [9] replaces multiplications between segments of different ordinals (e.g., ax by) with multiplications between segments of the same ordinal (e.g., ax bx) and additions of segments of the same number (e.g., ax + ay). Karatsuba equations have also been defined for five-term multiplication. A common characteristic of these Karatsuba equations is that they require at most (t^2 + t)/2 multiplications, where t is the number of terms.

Karatsuba multiplication can be implemented using recursion. For example, in the two-term Karatsuba multiplication:

A x B = a1b1 2^(2n) + ((a0 + a1)(b0 + b1) - a1b1 - a0b0) 2^n + a0b0    [6]

each of the smaller segment multiplications can, in turn, be performed using Karatsuba. For example, performing Karatsuba multiplication of A x B can involve Karatsuba multiplications of a1b1, a0b0, and (a0 + a1)(b0 + b1). These multiplications may involve Karatsuba multiplication of even smaller sub-segments; for example, determining a1b1 may involve segmenting a1 and b1 into multiple terms of sub-segments.

A potential problem with this approach, however, is the different sized operands produced. That is, both the (a0 + a1) term and the (b0 + b1) term can generate a carry from the addition operations. A subsequent multiplication of the results of (a0 + a1) and (b0 + b1) may spill over into an additional native word. This can undermine much of the efficiency offered by Karatsuba.

To address the "carry" problem, Figures 1-3 illustrate a sample implementation that performs Karatsuba multiplication on the least significant bits of two operands and then corrects the result based on the most significant bits. In greater detail, FIG. 1 depicts two operands A 100 and B 102 being multiplied.
In this example, each operand is n + 1 bits wide, where n is twice the native word size s of a processor. Each operand can be segmented into two terms and an additional high bit. For example, the s least significant bits of A form a0, the next s bits form a1, and the most significant bit of A forms ah.

As shown, a Karatsuba multiplication can be performed on the terms of size s using:

2^(2s) a1b1 + 2^s [(a1 + a0)(b1 + b0) - a1b1 - a0b0] + a0b0    [10]

The result can then be adjusted based on the values of the most significant bits ah and bh. For example, as shown, the result can be increased by:

2^n ah B[b1:b0]  106    [11]

and by:

2^n bh A[a1:a0]  108    [12]

In other words, if ah is "1", the result is increased by the n bits of B[b1:b0] shifted left by n bits. Similarly, if bh is "1", the result is increased by the n bits of A[a1:a0] shifted left by n bits. These adjustments can be implemented as additions, for example:

result = result + 2^n ah B[b1:b0]
result = result + 2^n bh A[a1:a0]

or as branches followed by additions:

if (ah) then result = result + 2^n B[b1:b0]
if (bh) then result = result + 2^n A[a1:a0]

Finally, if both ah and bh are "1", the result is increased by 2^(2n) (i.e., by ah bh 2^(2n)). This can be implemented with a branch, for example:

if (ah bh) then result = result + 2^(2n)

This combination of additions and one or more branch statements prevents a carry from propagating down into lower levels of the recursion.

FIG. 2 illustrates operation of the scheme described above to multiply A 100, having a value of 469, by B 102, having a value of 369. As shown, Karatsuba multiplication of A[2s-1:0] and B[2s-1:0], that is, with the most significant bits ah and bh excluded, produces a value of 24069. This value is first adjusted to 78597 for ah and then to 107525 for bh. Finally, since both ah and bh are "1", the most significant term 2^(2n) = 2^16 is added to yield the final answer of 173061. The individual axbx values can likewise be determined by recursive application of the Karatsuba technique; by splitting off the ah and bh bits, the recursive operations are performed on operands of the same convenient size.

Figure 3 illustrates implementation of the Karatsuba technique in a recursive scheme. As described above, Karatsuba multiplication of operands A and B proceeds by a multiplication 114 of A[n:0] and B[n:0] followed by adjustment 116 for the most significant bits, ah and bh, of A and B. The resulting value is returned 118 up the recursion stack.

Karatsuba multiplication is particularly desirable when the operand length is significantly greater than the native word size of a processor; for example, the native word size may be only s while the operands are much longer. As n approaches s, the efficiency of Karatsuba diminishes and textbook multiplication becomes more attractive. Thus, as shown in FIG. 3, the scheme can use either textbook multiplication 120, 122 or Karatsuba 104, 106, 108 depending on the current depth 112 of the recursion. In practice, performing the final two levels (e.g., L = 2) of recursion with textbook multiplication has been found to provide the best overall performance.

While Figures 1-3 depict a sample implementation, many variations are possible. For example, each Karatsuba term is depicted as being s bits wide in Figures 1-3; however, the terms need neither be of the same bit width nor occupy a single native word.
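Before turning to those variations, the procedure of equations [10] through [12] can be checked with a short arbitrary-precision model. The sketch below is illustrative only: the function name, the recursion cut-off word, and the use of Python integers are choices made for this sketch rather than features of the hardware implementation of Figures 1-3. It reproduces the Figure 2 example, including the uncorrected intermediate value of 24069.

```python
def karatsuba_plus_high_bit(A: int, B: int, n: int, word: int = 16) -> int:
    """Multiply A and B, each at most n + 1 bits wide, by applying the
    two-term Karatsuba step of equation [10] to the low n = 2s bits and the
    corrections of equations [11] and [12] for the single extra high bit of
    each operand.  Assumes n halves cleanly down to the cut-off `word`
    (a hypothetical threshold below which "textbook" multiplication is used)."""
    if n <= word:                                  # textbook multiplication
        return A * B
    s = n // 2
    ah, al = A >> n, A & ((1 << n) - 1)            # extra high bit, low 2s bits
    bh, bl = B >> n, B & ((1 << n) - 1)
    a1, a0 = al >> s, al & ((1 << s) - 1)          # s-bit segments
    b1, b0 = bl >> s, bl & ((1 << s) - 1)

    # Equation [10]: three sub-products on operands of at most s + 1 bits,
    # so the same routine (including its high-bit correction) recurses.
    z2 = karatsuba_plus_high_bit(a1, b1, s, word)
    z0 = karatsuba_plus_high_bit(a0, b0, s, word)
    zm = karatsuba_plus_high_bit(a1 + a0, b1 + b0, s, word)
    result = (z2 << (2 * s)) + ((zm - z2 - z0) << s) + z0

    # Equations [11] and [12]: adjust for the most significant bits ah and bh.
    if ah:
        result += bl << n                          # + 2^n * B[b1:b0]
    if bh:
        result += al << n                          # + 2^n * A[a1:a0]
    if ah and bh:
        result += 1 << (2 * n)                     # + 2^(2n)
    return result


# The Figure 2 example: 469 x 369 with n = 8, using a low cut-off to force
# one Karatsuba level.  The value before the high-bit corrections is 24069.
assert karatsuba_plus_high_bit(469, 369, 8, word=4) == 469 * 369 == 173061
```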
Further, although ah and bh are described here as single bits, in other implementations ah and bh may include multiple bits.

As described above, different Karatsuba equations have been defined for different numbers of terms (e.g., two, three, and five). A canonical Karatsuba decomposition is a number having one of the following six lengths:

n = 2^k
n = 3 * 2^k
n = 3^2 * 2^k
n = 3^3 * 2^k
n = 3^4 * 2^k
n = 5 * 2^k

where n is the bit length of the number and k is an integer.

To optimize use of the Karatsuba decomposition, a number can be padded with zeroes to conform to a larger canonical form. To determine which canonical Karatsuba decomposition to use for an operation, a workload value w can be computed for each candidate length and the smallest one selected. The value of w can be computed for different values of n, and these results can be used, for example, to form a lookup table indicating the amount of padding to apply to a given number based on the minimal w value for a given n.

Modular Reduction Using Folding

In addition to multiplication, many cryptographic schemes involve modular reduction (e.g., computation of N mod M). To lessen the expense of modular reduction operations, some systems use a technique known as Barrett's modular reduction. Essentially, Barrett's technique computes an estimate of the quotient,

q = floor( floor(N / 2^m) μ / 2^(n-m) )    [13]

where m is the width of the modulus M and μ is a constant determined by:

μ = floor( 2^n / M )    [14]

where n is the width of the number N. The value of N mod M can then be determined by computing N - qM, followed by a final subtraction of M, if necessary, to ensure the final value is less than M. Contributing to Barrett's efficiency is the ability to access a pre-computed value of μ; that is, μ can be determined based only on the size of N, without access to a particular value of N.

Techniques such as Barrett's modular reduction can lessen the expense of modular reduction. Figures 4-6 illustrate techniques that can further diminish the computational cost of modular reduction. In particular, FIG. 4 illustrates a technique that "folds" a number N 202 into a smaller-width number N' 206. Despite its smaller width, the folding operation determines N' such that N' mod M is the same as N mod M. Traditional operations, such as a classical Barrett modular reduction, can then operate on the smaller N'. By "shrinking" the operand N, the subsequent operations involve smaller numbers, which can reduce the number of multiplications used to determine the modular remainder. In addition, the efficiency gains become more significant as N grows larger; for example, sample tests estimate a speed increase of approximately 27% for a 512-bit N and approximately 177% for a 4096-bit N.

In greater detail, FIG. 4 depicts a number N 202 having a width n and a modulus M 200 having a width m. To determine N mod M, a "fold" operation 212 generates N' from N. As shown, the fold 212 occurs at a folding point f that delineates N into a more significant portion NH and a less significant portion NL. For example, the folding point may be chosen to fall at the midpoint of the length of the modulus and the length of N. For instance, assuming N has a width of 2m (twice the width of the modulus), the folding point may fall at the bit position identified by 2^(1.5m). Such a folding point can minimize the resulting width of N'.
That is, moving the folding point in either direction, whether to lengthen NH or to lengthen NL, actually increases the size of N'. Based on the folding point, N' can be determined as:

N' = NH (2^f mod M) + NL    [15]

This smaller N' can then be used to perform a modular reduction, for example, using the classical Barrett technique.

As shown, the determination 212 of N' involves a 2^f mod M term 208 (referred to as M'). The value of 2^f mod M can be pre-computed without regard to a specific value of N. Pre-computing this value for various values of M and f speeds real-time computation of N' by shifting expensive multiplications to a less time-critical period. The pre-computed values for different values of M and f can be stored in a table in memory for quick access. The multiplication of NH by (2^f mod M) may be performed, for example, using the Karatsuba multiplication described above.

To illustrate, FIG. 5 shows an example of folding in which N is an 8-bit wide number (1111 1100b) having a value of 252 and M is a 4-bit number (1101b) having a value of 13. As shown, the folding point is selected as 2^f = 2^(1.5m) = 2^6. Computation of N' yields a value of 96. As shown in the figure, N and its folded counterpart N' yield the same remainder, 5, for the modulus 13. Modular reduction of N' may be performed using any of a variety of modular reduction schemes such as Barrett's.

Figure 6 depicts an example of a complete determination of N mod M using the techniques described above. In this example, the width of N 202 is n = 4s and the width of M 204 is m = 2s. As shown, the folding point is 2^(3s). As shown in the figure, the pre-computed value of M' = 2^(3s) mod M 222 can be used to determine (M')(NH) 224. While FIG. 6 expresses NH as the value of floor(N / 2^(3s)), the value of NH can be obtained more quickly by setting NH = N[4s-1:3s]. The value of (M')(NH) 224 is added to NL 226 to complete the computation of N'. Similarly, while FIG. 6 expresses NL as N mod 2^(3s), the value of NL can be obtained more quickly by setting NL = N[3s-1:0].

After N' is determined, a classical Barrett reduction can be used to compute N' mod M. In this case, the Barrett reduction 230, 234 is computed as:

R = N' - floor( floor(N' / 2^(2s)) (μ / 2^s) ) M    [16]

where μ is determined as floor(2^(3s) / M). Like the value of M', the value of μ can be pre-computed for a variety of values of s and M. Again, this pre-computation shifts expensive operations to a period that does not require real-time processing.

The resulting R 236 may be larger than the modulus M 200. In this comparatively rare case, a subtraction of R = R - M can be used to ensure that R < M.

A single folding operation can significantly improve the efficiency and real-time performance of modular reduction. As shown in FIG. 7, repeated folding can provide further efficiency with respect to the total number of multiplications and ALU operations (e.g., additions, subtractions, and shifts) used. As before, N 202 is folded into N' 204. The resulting width of N' will generally be f; if the width of N' is f + 1, a subtraction of N' = N' - (M 2^m) can be used to "trim" N', though this is not a requirement. As shown in the figure, an additional folding operation transforms N' into a still smaller N'' 206 where, again, N'' mod M = N' mod M. This second fold further improves computational efficiency.

The folding point moves from 2^(1.5m) in the first iteration to 2^(1.25m) in the second iteration.
More generally, the folding point for a given iteration can be determined as 2^((1 + 2^-i)m), where i is the iteration number. While Figure 7 depicts two foldings, additional foldings are possible; however, additional folding may provide diminishing returns and/or actually increase the number of multiplication operations.

Example Implementation of Modular Exponentiation

The techniques described above can be used to perform a variety of cryptographic operations. For example, the Karatsuba multiplication and folding techniques described above can be combined to perform modular exponentiation.

As described above, modular exponentiation involves determining g^e mod M. Performing modular exponentiation is at the heart of a variety of cryptographic algorithms. For example, in RSA, a public key is formed by a public exponent, e-public, and a modulus M. A private key is formed by a private exponent, e-private, and the modulus M. To encrypt a message (e.g., a packet or packet payload), the following operation is performed:

ciphertext = plaintext^(e-public) mod M    [17]

To decrypt a message, the following operation is performed:

plaintext = ciphertext^(e-private) mod M    [18]

One procedure for performing modular exponentiation processes the bits of the exponent e in sequence from left to right. Starting with an initial value of A = 1, the procedure squares the value for each "0" bit encountered (i.e., A = A * A). For each "1" bit, the procedure both squares the value and multiplies it by g (i.e., A = A * A * g). The final result can then be used in a modular reduction operation. For example, to determine 3^(1010b) mod 5, the procedure operates as follows, where g = 3, e = "1010", and M = 5:

A = 1
exponent bit 1 ("1"): A = 1 * 1 * 3 = 3
exponent bit 2 ("0"): A = 3 * 3 = 9
exponent bit 3 ("1"): A = 9 * 9 * 3 = 243
exponent bit 4 ("0"): A = 243 * 243 = 59049
A mod M = 4

Rather than performing the modular reduction at the end, when a very large number may have accumulated, the modular reduction can instead be interleaved with the multiplication operations, for example after processing each exponent bit or every few exponent bits. For instance, to compute 3^(1010b) mod 5, the procedure can proceed as follows:

A = 1
exponent bit 1 ("1"): A = 1 * 1 * 3 = 3; A mod M = 3
exponent bit 2 ("0"): A = 3 * 3 = 9; A mod M = 4
exponent bit 3 ("1"): A = 4 * 4 * 3 = 48; A mod M = 3
exponent bit 4 ("0"): A = 3 * 3 = 9; A mod M = 4

Regardless of the particular implementation, using the Karatsuba multiplication techniques described above for both the squaring and the "g" multiplications can significantly speed modular exponentiation. In addition, using folding keeps the reduction operations comparatively inexpensive in processing resources.

Additional computational efficiency can be obtained by storing repeatedly used values. For example, in the example above, the value of g is involved in two different multiplications. In a real-world example with a 2048-bit exponent, the number of multiplications involving g would be much greater. To improve the efficiency of Karatsuba multiplications involving g, the different values of gi = (gH(i) + gL(i)) can be stored in a table for reuse, where i represents the depth of the Karatsuba recursion. This caching can save a significant number of cycles that would otherwise redundantly perform the same additions. If modular reduction with the same modulus occurs repeatedly, caching other frequently used values, such as the M' and μ values used in folding, can likewise improve performance.

Additional optimizations can be applied when multiplying numbers of unequal size, such as multiplying a number of size 1k by a number of size 2k.
This multiplication can occur when determining Barrett's qM value and when determining NH2fmodm. To utilize Karatsuba, a 1k * 2k multiplication can be broken down into two 1k * 1k operations, such as q * mh and q * ml. Because q is used in both operations, the value of (qh + q1) does not need to be determined twice, instead it can be stored for later use.Again, the above is just an example, and Karatsuba and folding techniques can be used to perform a variety of other cryptographic operations as well as other general-purpose mathematical applications.These techniques can be implemented in various systems in various ways. For example, these techniques may be implemented in dedicated digital or analog hardware (e.g., determined by the programming techniques described above in a hardware description language such as Verilog (tm)), in firmware, and / or implemented as an ASIC (Application Specific Integrated Circuit) or programmable Gate Array (PGA). These techniques may also be implemented as a computer program provided on a computer-readable medium for execution by a processor. For example, the processor may be a general-purpose processor.As shown in FIG. 8, these techniques can be implemented by computer programs, where these computer programs are executed by the processor module 300 which can perform offloaded cryptographic operations. As shown, the module 300 includes a plurality of programmable processing units 306-312 and a dedicated hardware multiplier 316. The processing units 306-312 run programs on data downloaded from the shared memory logic 304 controlled by the core 302. Other processors and / or processor cores may issue commands to the module 300 to specify data and operations to be performed. For example, the processor core may issue a command to the module 300 for performing a modular exponentiation on the values of g, e, and M stored in the RAM 314. The core 302 may respond by issuing instructions to the common memory logic 304, where these instructions are used to download the modular exponentiation program to the processing units 306-312 and download the data being operated from the RAM 314 to the common memory 304 and finally Download to processing units 306-312. The processing units 306-312 then execute these program instructions. In particular, the processing units 306-312 may use a multiplier 316 to perform a multiplication such as a Karatsuba multiplication for performing a square and "g" multiplication. Once completed, the processing units 306-312 may return the results to the common memory logic 304 for delivery to the requesting core. The processor module 300 may be integrated with the programmable core on the same or different dies.Also, FIG. 8 merely illustrates the use of an example architecture to implement the Karatsuba and folding techniques described above. However, these techniques can be used in a variety of other architectures, such as those with conventional general-purpose processors that are programmed.Other embodiments are within the scope of the following claims. |
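As a closing illustration of the description above, the folding, Barrett, and interleaved exponentiation steps can be modeled with a few lines of arbitrary-precision arithmetic. This is a behavioral sketch only: the function names and the explicit width parameter s are inventions of the sketch, the constants M' and μ are computed inline rather than read from pre-computed tables, and the multiplications use the built-in operator rather than the Karatsuba multiplier 316 of Figure 8. The Barrett step follows the parameterization of equation [16], and the asserts reproduce the Figure 5 example and the 3^(1010b) mod 5 walk-through.

```python
def fold(N: int, M: int, f: int) -> int:
    """One folding step (equation [15]): N' = NH * (2^f mod M) + NL, where
    NH = N >> f and NL is the low f bits of N.  The constant 2^f mod M
    (M' in Figure 4) would normally be pre-computed and kept in a table."""
    m_prime = pow(2, f, M)
    return (N >> f) * m_prime + (N & ((1 << f) - 1))


def barrett_reduce(N_prime: int, M: int, s: int) -> int:
    """Barrett reduction in the form of equation [16], for a modulus M of
    width m = 2s and a folded operand N' of roughly 3s bits, using the
    pre-computable constant mu = floor(2^(3s) / M)."""
    mu = (1 << (3 * s)) // M
    q = ((N_prime >> (2 * s)) * mu) >> s           # quotient estimate
    r = N_prime - q * M
    while r >= M:                                  # occasional final corrections
        r -= M
    return r


def mod_fold(N: int, M: int, s: int) -> int:
    """Fold once at the 2^(3s) point of Figure 6 (N of up to 4s bits, M of
    2s bits), then finish with the Barrett step of equation [16]."""
    return barrett_reduce(fold(N, M, 3 * s), M, s)


# The Figure 5 / Figure 6 example: N = 252, M = 13 (so s = 2).
assert fold(252, 13, 6) == 96                      # 252 and 96 are congruent mod 13
assert mod_fold(252, 13, 2) == 252 % 13 == 5


def mod_exp(g: int, e: int, M: int, s: int) -> int:
    """Left-to-right square-and-multiply with the reduction interleaved into
    each multiplication, as in the 3^(1010b) mod 5 walk-through above."""
    A = 1
    for bit in bin(e)[2:]:
        A = mod_fold(A * A, M, s)                  # square for every exponent bit
        if bit == "1":
            A = mod_fold(A * g, M, s)              # extra multiply by g for "1" bits
    return A


assert mod_exp(3, 0b1010, 5, s=2) == pow(3, 10, 5) == 4
```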
Embodiments include an autonomous core perimeter configured to save the state of a core of a multi-core processor prior to the processor package being placed into a low-power state. The autonomous core perimeter of each core is configured to save an image of the microcontroller firmware to an external store if it has not already been saved by another core, along with the unique working state information of that core's microcontroller. Upon restore, the single microcontroller firmware image is retrieved from the external store and pushed to each core along with each core's unique working state. |
CLAIMSWhat is claimed is:1. A multi-core processor, comprising:two or more cores, each core including a microcontroller and coupled to an autonomous core perimeter logic; andcircuitry in communication with each autonomous core perimeter logic adapted to, based on receipt of a signal to place the processor into a low power state:halt the microcontroller of at least one of the two or more cores, save firmware code from the microcontroller of a first one of the two or more cores, andsave state information from the microcontroller of each of the two or more cores; andwherein the circuitry is further adapted to, based on receipt of a signal to return the processor from the low power state:restore the firmware code to all of the cores; andrestore the respective state information to each core.2. The processor of claim 1, wherein the circuitry is in communication with a memory unit, and is to store the firmware code and state information to the memory unit.3. The processor of claim 2, wherein the circuitry is to communicate with the memory unit over an in-die interface.4. The processor of claim 3, wherein the circuitry is further to communicate with the memory unit with a bubble generation first in first out (FIFO) structure.5. The processor of claim 1, wherein the circuitry comprises a power management agent.6. The processor of any of claims 1-5, wherein the autonomous core perimeter logic comprises a fabric interface logic.7. The processor of any of claims 1-5, wherein the circuitry is to resume the microcontroller after the firmware code and respective state information have been saved.8. The processor of any of claims 1-5, wherein the processor comprises a System on a Chip (SoC).9. A non-transitory computer readable medium (CRM) containing instructions executable by a circuitry in a processor, that when executed cause the circuitry to:halt a microcontroller contained in a core perimeter logic, the core perimeter logic associated with a first processing core of multiple processing cores, wherein each of the
multiple processing cores is associated with a core perimeter logic and shares a common microcontroller firmware code;
save state information from the microcontroller of the perimeter logic;
determine whether the microcontroller firmware code has been saved; and
if the microcontroller firmware code has not been saved, save the microcontroller firmware code from the microcontroller of the perimeter logic.
10. The CRM of claim 9, wherein the instructions are to further cause the circuitry to resume the microcontroller once at least the state information has been saved.
11. The CRM of claim 10, wherein the instructions are to be executed by the circuitry following receipt of a signal to place the processor into a low power state.
12. The CRM of claim 11, wherein the instructions are to further cause the circuitry to resume the microcontroller following receipt of a signal to abort placing the processor into a low power state.
13. The CRM of claim 9, wherein the instructions are to cause the circuitry to save the state information and microcontroller firmware code to a memory unit.
14. The CRM of claim 13, wherein the instructions, following receipt of a signal to wake the processor from the low power state, are to further cause the circuitry to:
retrieve the firmware code and the state information for the perimeter logic from the memory unit;
restore the firmware code and the state information to the microcontroller of the perimeter logic; and
resume the microcontroller.
15. The CRM of any of claims 9-14, wherein the instructions are to further cause the circuitry to receive an in-die interface fabric interface logic data block that includes locations within a memory unit to store the firmware code and state information; and store the firmware code and the state information for the perimeter logic from the memory unit to the memory unit locations.
16. A system for managing power states on a multi-core processor, comprising:
multiple cores, each core coupled to an autonomous core perimeter;
circuitry adapted to store firmware code and state information of each autonomous core perimeter; and
a memory unit in data communication with the circuitry;
wherein the circuitry is adapted to save to the memory unit the firmware code if not previously saved and state information from a first autonomous core perimeter of the
multiple cores, and save to the memory unit state information for each remaining autonomous core perimeter of the multiple cores, based on receipt of a signal to place the processor into a low power state.
17. The system of claim 16, wherein the autonomous core perimeter comprises a fabric interface logic.
18. The system of claim 16 or 17, wherein the circuitry comprises a power management agent.
19. The system of claim 18, wherein the power management agent is in communication with the memory unit over an in-die interface.
20. The system of claim 16 or 17, wherein the circuitry is adapted to, based on receipt of a signal to return the processor from the low power state:
restore the firmware code stored from the first autonomous core perimeter to each autonomous core perimeter of the multiple cores; and
restore the state information to each respective autonomous core perimeter of the multiple cores.
21. The system of claim 20, wherein the circuitry is to further halt each autonomous core perimeter based on receipt of the signal to place the processor into a low power state, and is to resume each autonomous core perimeter based on receipt of the signal to return the processor from the low power state.
22. The system of claim 16 or 17, wherein the firmware code and state information are associated with a microcontroller, the microcontroller comprising part of each autonomous core perimeter.
23. An integrated circuit, comprising:
multiple processing means;
memory means; and
means, coupled to each of the multiple processing means and coupled to the memory means, to store firmware code and state information associated with each processing means into the memory means;
wherein, following receipt of a signal to place the integrated circuit into a low power state, the means to store firmware code and state information is to:
store the firmware code from one of the multiple processing means into the memory means if not previously stored, and
store the state information from each of the multiple processing means into the memory means.
24. The integrated circuit of claim 23, wherein, following receipt of a signal to resume the integrated processor from the low power state, the means to store firmware code and state information is to:
retrieve the firmware code from the memory means and load it into each of the multiple processing means;
retrieve the state information for each of the multiple processing means from the memory means; and
load the state information of each of the multiple processing means into its respective processing means.
25. The integrated circuit of claim 23 or 24, wherein each of the multiple processing means includes a controller means, the controller means associated with the state information of its respective processing means. |
AUTONOMOUS CORE PERIMETER FOR LOW POWER PROCESSOR STATES
RELATED APPLICATION
This application claims priority to U.S. Application 16/370,950, entitled “AUTONOMOUS CORE PERIMETER FOR LOW POWER PROCESSOR STATES,” filed March 30, 2019.
TECHNICAL FIELD
Embodiments described herein generally relate to the field of computer processors. In particular, apparatuses and systems that allow the cores of a multi-core processor to be placed into or returned from a low power state are disclosed.
BACKGROUND
Modern processor architecture often makes use of one or more internal processing cores, where each processing core can include a core processing logic and various associated supporting blocks, timers, busses, and similar structures. The core processing logic may process a simplified set of micro-operations, and may employ a microarchitecture that provides logic to convert the processor’s external-facing instruction set architecture (ISA, e.g. x86-64) to the internal micro-operations used by the core processing logic. Further still, many modern processors are configured to provide a variety of power levels, to enable various power saving modes. The microarchitecture, in addition to converting between an ISA and internal micro-operations, may coordinate or otherwise facilitate transitioning each processing core into a requested power level.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a block diagram of some of the components of an example system, such as a multi-core processor, that implements an autonomous core perimeter, according to various embodiments.
Fig. 2 is a block diagram of an example core from the system in Fig. 1, according to various embodiments.
Fig. 3 is a flowchart of various operations that may be executed by the example system of Fig. 1 when transitioning to a package low power state, according to various embodiments.
Fig. 4 is a flowchart of various operations that may be executed by the example system of Fig. 1 when transitioning to a package high power state, according to various embodiments.
Fig. 5 illustrates a computer readable medium, which may be used to implement
one or more components of the system in Fig. 1 and/or one or more operations of Figs. 3 or 4, according to various embodiments.
Fig. 6 illustrates an example system configured to employ the apparatuses and methods described herein, in accordance with various embodiments.
DESCRIPTION OF EMBODIMENTS
Modern processors can include multiple processing cores, where each processing core is capable of entering into various low power states. As the various states progress to more aggressive power saving, increasing numbers of processor components may be powered down. Further, in addition to each core having multiple power states, the overall package (such as in a multi-core processor package) may also have multiple power states. Deeper/more aggressive package states may power down entire cores via power gating mechanisms. A multi-core processor may be configured with multiple power rails to power different components, and one or more (e.g., all) of these various power rails, in some implementations, are capable of being power gated.
Many modern processors that employ a microarchitecture use a microcontroller associated with each core to handle various tasks for the core, such as decoding ISA operations to internal micro-operations, managing core register files (that may include internal or transient registers), cache management, and providing various other internal core functions. In some embodiments, a firmware may be executed by the core’s microcontroller to enable the tasks; in some instances, this firmware can be considered to be a stripped-down operating system (OS). As with many operating systems, the firmware may also require some form of local storage to maintain information for various working states, e.g. temporary register files, transient machine states and statuses, buffers to allow instruction reordering, etc. In embodiments, the microcontroller includes a firmware that is pre-loaded at time of manufacture or prior to system assembly, and loads automatically upon processor initialization. This firmware may be called “microcode”. Further, some implementations may also allow an updated firmware, which may be called “acode” or “a-code”, to be dynamically loaded into each core (as opposed to the fixed microcode stored in a read-only memory), to allow for improvements, patches, and other tuning of the firmware (and consequently, the core) over the lifetime of the processor. In some examples, the a-code firmware may be updated via an operating system driver as a machine OS, such as Microsoft Windows® or macOS®, starts up.
Each core of a multi-core processor communicates with internal and external modules over a variety of busses. Further, the various components of a multi-core
processor may operate at different clock frequencies. For example, an individual core may be capable of running, and executing instructions, at a clock speed of several gigahertz (GHz). Other components of each core may run at slower clock speeds, in the range of several hundred megahertz (MHz). These various components can be tied together via one or more internal busses. Depending upon the components interconnected by a given bus, the bus may operate at a speed from several hundred MHz to several GHz. As a general principle, a given bus needs to operate at a speed that allows all components connected via the bus to reliably communicate across the bus. Thus, busses that interconnect internal core components that operate in the GHz range may be able to operate in a GHz range, while busses that interconnect one or more components that operate in the MHz range may need to operate in a MHz range.
The width of a given bus, e.g. serial, 8 bit, 64 bit, 256 bit etc., can vary depending upon various factors such as capabilities of connected components, bus speed, bus transmission type (e.g. serial or parallel), and available die space. For a given clock speed, a wide parallel bus can typically transmit more data than a narrow or serial bus. Conversely, narrower busses, serial busses, and/or shorter length busses can typically be driven at a higher clock speed compared to wider and/or longer busses. Busses that interconnect internal components within a core typically are relatively higher speed and/or wider busses to allow quick, low latency transfer of data within a core. Busses that interface a core with external components, e.g. inter-core communication, communication with components outside of the processor die, such as external cache memory, main system memory, and input/output (I/O) subsystems, typically run at speeds that may be a fraction of the speed of an internal core bus.
Because of these bus limitations, communications between a given core and external components typically incur significant latencies compared to intra-core communications that may be handled on a comparatively high speed/wide bus. Relying upon storage external to the core for maintaining working state information and/or firmware would thus result in unacceptably slow processor performance. Consequently, each processor core may rely upon storage positioned within a core, such as a dynamic random access memory (DRAM) or another suitable memory file or unit, to maintain data of both working state information as well as a dynamically loaded firmware image. The storage can be positioned on a wide/fast internal bus to minimize latency.
Due to its nature, DRAM and similar memory types often require constant power to ensure stored contents are retained; loss of power results in data loss. The power rail
supplying the memory may be power gated as the core, and subsequently the microprocessor package, is placed into a deeper power saving state. Consequently, the working state and/or firmware image may need to be preserved to storage outside of the core when the core is transitioned from a higher power state to a lower power state where core execution is paused or otherwise halted, if the power rail supplying the memory will be gated. Failure to do so can result in the processor effectively being reset upon power restoration, with the firmware image needing to be reloaded, and the processor reinitialized. Such a process would, at best, result in unacceptable delays every time the processor was placed into a power saving mode, and at worst result in a processor that could not be placed into a low power mode without incurring a system reboot.
Saving the working state and/or firmware to an external storage while the processor is powered allows a processor to be placed into a low power state. The in-core storage can be powered down (with a resultant loss of information), and subsequently restored to its working state upon power-up without needing to fully reinitialize each core. However, as mentioned above, accessing and restoring information from storage external to a core or processor package incurs significant latencies. While this latency is often tolerable on a relatively infrequent basis, a system designer employing such a microprocessor may need to forego placing the microprocessor into a low-power state (which could otherwise help preserve battery life in a portable device) in order to achieve acceptable performance, but at the expense of a greater power draw (and, in the case of mobile implementations, associated reduced battery life).
Latency times on a save and subsequent restore are typically related to the amount of data that must be retrieved from external storage and restored to each core. As discussed above, the data may comprise two main components: the working state information of each processor core, and a copy of the firmware image. Of these two components, the working state information, in most implementations, is unique to each processor core, while the firmware image is identical across all cores. Further, the working state information comprises a relatively small amount of data compared to the firmware image. Limiting the amount of data to be transferred can help keep latency times at a minimum. Thus, by limiting data transfer to the unique working state information for each core, but only a single copy of the firmware image, latency times for saves and restores bracketing a deep power save state can be kept at a minimum, thus allowing more frequent placement of the processor into a deep power save state while still maintaining acceptable performance.
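The saving just described can be sketched in C. The sizes, the NUM_CORES constant, and the flat layout of the external store below are assumptions chosen only to make the arithmetic concrete; the point is that the shared firmware image is written once while each core contributes only its small unique state, so the total transfer is FW_IMAGE_BYTES + NUM_CORES * CORE_STATE_BYTES rather than NUM_CORES * (FW_IMAGE_BYTES + CORE_STATE_BYTES). In real hardware the "already saved" flag would of course need an atomic or hardware-arbitrated update.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NUM_CORES        6u
#define FW_IMAGE_BYTES   (256u * 1024u)  /* assumed size of the shared firmware image */
#define CORE_STATE_BYTES (4u * 1024u)    /* assumed size of one core's working state  */

static bool fw_image_saved;              /* package-wide "firmware already saved" flag */

/* Sketch of one core's save step: always push the core's unique working
 * state, but push the shared firmware image only if no core has done so.
 * Returns the number of bytes this core actually transferred. */
static size_t save_core(unsigned core_id,
                        const uint8_t *state,
                        const uint8_t *fw_image,
                        uint8_t *external_store)
{
    size_t transferred = 0;

    /* Unique per-core state goes into this core's slot. */
    memcpy(external_store + (size_t)core_id * CORE_STATE_BYTES,
           state, CORE_STATE_BYTES);
    transferred += CORE_STATE_BYTES;

    /* The firmware image is identical across cores: the first saver wins. */
    if (!fw_image_saved) {
        memcpy(external_store + (size_t)NUM_CORES * CORE_STATE_BYTES,
               fw_image, FW_IMAGE_BYTES);
        fw_image_saved = true;
        transferred += FW_IMAGE_BYTES;
    }
    return transferred;
}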
Disclosed embodiments include systems and apparatuses directed to an autonomous core perimeter. The autonomous core perimeter is associated with a core of a multi-core microprocessor, and is adapted to interface between core structures that hold the microcontroller state information and firmware image, and one or more external (to the core) busses and memory units. The autonomous core perimeter, when the core is signaled to transition to a lower power state, coordinates saving the microcontroller state information. Further, the autonomous core perimeter determines whether the firmware image has been saved and, if not already saved by another core, saves the firmware image. Similarly, when the core is signaled to return to a higher power state, the autonomous core perimeter coordinates retrieving and restoring the microcontroller state information and a copy of the firmware image, allowing the core to resume execution. In some embodiments, the firmware image may be able to be retrieved from an external store once, and contemporaneously be read into each processor core, to prevent multiple transfers of the firmware image. Each core of a multi-core processor, in some embodiments, includes its own associated discrete autonomous core perimeter. In other embodiments, multiple cores may attach to a single autonomous core perimeter, which is adapted to coordinate storage and retrieval of the unique state information of each attached core, along with a single copy of the firmware image which is distributed to all attached cores on a return to a higher power state.
In the description herein, various aspects of the illustrative implementations are described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that embodiments of the present disclosure may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments in which the subject matter of the present disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the
scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.
For the purposes of the present disclosure, the phrase “A or B” means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).
The description may use perspective-based descriptions such as top/bottom, in/out, over/under, and the like. Such descriptions are merely used to facilitate the discussion and are not intended to restrict the application of embodiments described herein to any particular orientation.
The description may use the phrases “in an embodiment,” or “in embodiments,” which may each refer to one or more of the same or different embodiments. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure, are synonymous.
Fig. 1 depicts an example system 100 that includes multiple autonomous core perimeter logics according to various embodiments. In one embodiment, system 100 comprises a multi-core processor with a plurality of cores 102a to 102f (collectively or without regard to a specific core, core 102). Each core 102a to 102f in the embodiment is coupled to an autonomous core perimeter logic 103 (ACP 103), respectively. Each core 102 also includes a microcontroller 112, which is also coupled so as to be in communication with, and may be a part of, ACP 103. Each core 102a to 102f is coupled by a circuitry 104a to 104f (collectively or without regard to a specific core, circuitry 104; abbreviated to Cx in Fig. 1) to an in-die (or intra-die) interface (IDI) 106. In embodiments, each of circuitry 104a to 104f is adapted to halt the microcontroller 112 of each of the cores 102a to 102f, save firmware code from the microcontroller 112 of a first one of the plurality of cores 102a to 102f, and save state information from the microcontroller 112 of each of the cores 102a to 102f, based on or triggered by a signal to place the processor into a low power state. It should be understood that, although six cores 102a to 102f and corresponding circuitry 104a to 104f are depicted, this number is arbitrary. Various embodiments may have any number of cores 102a to 102f as well as circuitry 104a to 104f.
Each core 102 in system 100 may act as a processing core, executing one or more threads of software instructions loaded from storage external to system 100. In some embodiments, each core 102 may be application-specific, such as an embedded
microcontroller. In other embodiments, each core 102 may be of a general purpose nature, such as may be used in a general purpose computer (which may be implemented as a computing device 1300, described herein with respect to Fig. 6) like a server, desktop, or laptop. Each core 102 may implement a specific instruction set architecture (ISA), such as Intel’s x86-64 instruction set or ARM’s instruction set. Each core 102 in system 100 may execute the same type of ISA, so that system 100, when implemented as a microprocessor for a computer, can execute multiple software threads written for the ISA. In other embodiments, one or more cores 102 may execute a different ISA from other cores, so that system 100 is capable of simultaneously or nearly simultaneously executing software written for two or more different ISAs. In still other embodiments, one or more cores 102 may be application-specific or function-specific, where system 100 has one or more cores 102 for general purpose execution, and one or more cores 102 that are dedicated to a specific function, such as OS management, hardware management, management of various internal structures of system 100, or similar specific functionality.
Each core 102, in embodiments, is capable of being placed into multiple power states. For example, a given core 102 may include a C0 state, where the core is active and either processing or idle but ready to process, a C3 state, where the core is powered down, but core perimeter components remain powered and ready to transition the core back to a C0 state, and a C6 state, where the core as well as at least some core perimeter components are also powered down. Depending upon the embodiment, a portion of the core perimeter may remain powered in a C6 state to allow the portion to repower the remainder of the core 102 upon a wake-up signal, or the entire core perimeter may be powered down along with core 102. Where the entire core and core perimeter are powered down, the core 102 may need to rely on external logic to bring the core out of a C6 state. Other power states may be possible depending upon the requirements of a given implementation and available power rails, where various blocks of core 102 can be placed in varying modes of activity or power savings.
Each core 102, in embodiments, includes an autonomous core perimeter logic, or simply core perimeter, ACP 103, that is comprised of components that are dedicated to a particular core 102 but do not perform the actual processing of ISA instructions. ACP 103 may include a power management agent 110, a microcontroller 112, and local storage such as a random access memory (RAM) 114. Each core 102 may include other components, such as the main processing block. These and other components of each core 102 will be described in greater detail herein with respect to Fig. 2.
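One way to picture the per-core power states and the perimeter components just listed is the following C sketch; the enum values and struct layout are purely illustrative assumptions and not register-level definitions of any particular product.

#include <stddef.h>
#include <stdint.h>

/* Per-core power states described above (names illustrative). */
enum core_power_state {
    CORE_C0,  /* active: processing, or idle but ready to process      */
    CORE_C3,  /* core logic powered down, perimeter still powered      */
    CORE_C6,  /* core and at least some perimeter components gated off */
};

/* Rough model of an autonomous core perimeter (ACP 103). */
struct autonomous_core_perimeter {
    enum core_power_state power_state;  /* managed by the PMA (110)            */
    struct {
        uint8_t *firmware_image;        /* dynamically loaded microcode/a-code */
        uint8_t *working_state;         /* registers, transient machine state  */
        size_t   firmware_len;
        size_t   state_len;
    } microcontroller;                  /* microcontroller 112 with RAM 114    */
};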
In the depicted embodiment, the ACP 103 of each core 102 is coupled to a circuitry 104, which in turn couples the core 102 and ACP 103 with the IDI 106, to allow communications between the core 102, ACP 103, and other components of system 100, including devices external to system 100 such as input/output (I/O) ports, expansion ports, discrete graphics processors (dGPUs), and other components of a computer system, such as computing device 1300. Circuitry 104, in embodiments, communicatively ties the ACP 103 to IDI 106. IDI 106, as will be discussed below, provides a relatively high speed (in terms of clock speed) and wide pathway between a core 102 and other components, including a memory store 108, as compared to other fabrics and busses that may be present within system 100. Circuitry 104 may, in embodiments, coordinate the transfer of firmware and state data to or from core 102, to varying degrees, either by itself or in cooperation with other components of core 102 and/or ACP 103. By tying the ACP 103 to IDI 106, in embodiments, saving of firmware and state information from a core 102 and its ACP 103 can be accomplished with minimal latency, compared to transfer of data over a sideband bus or another channel that may have a significantly slower clock and/or narrower bus width.
Circuitry 104 may, in some embodiments, be a part of ACP 103, may be a standalone component or module within core 102, may be a part of another module within core 102 (which itself may be considered a part of ACP 103), or may be a combination of any of the foregoing. Circuitry 104, in some embodiments, is configured to autonomously handle or otherwise coordinate saving of the microcontroller 112 firmware (if not already saved, as will be discussed herein) and state information following notification that the package of system 100 is being placed or may be placed into a low-power state that would result in the microcontroller 112 being depowered. If there is a delay between notification and actual powering down of system 100, each circuitry 104 may be able to save the core firmware (if not already stored) and state information of its associated core 102 prior to package power down, thereby avoiding imposing undesirable latency in the transition of system 100 to a low power state. As will be described in greater detail herein, circuitry 104 can also cause core 102 to at least partially resume execution following saving of state but prior to power down, rather than holding core 102 in a halted state, in the event that a power down of system 100 is aborted.
As depicted in the embodiment of Fig. 1, circuitry 104 may, specifically, tie a power management agent 110 of its associated core 102 to IDI 106. In such an embodiment, the power management agent 110 may be considered to be a part of ACP
103. In still other embodiments, circuitry 104 may include its own control logic that may run a type of firmware or software. Circuitry 104 may coordinate with power management agent 110 to transition one or more components of its associated core 102 to different power states. In some embodiments, circuitry 104 coordinates depowering one or more components of power management agent 110 (such as microcontroller 112).
Circuitry 104 may save the firmware and state information into a memory store 108 designated for a low power state of one or more cores and/or the system 100 package. Although depicted as within system 100, in some embodiments memory store 108 is located external to the system 100 package or otherwise on a separate power rail or power domain from the other components of system 100, to ensure that powering down of system 100 will not power down memory store 108. Memory store 108, depicted as coupled to IDI 106 to provide low latency and high bandwidth storage of firmware and state information, may be of a dynamic RAM (DRAM) type, requiring continuous power to refresh memory contents. In some embodiments, memory store 108 may be a portion of a main system memory on a computer or device using system 100 for a processor, and may be shared with an operating system and/or associated running applications and processes. Memory store 108 may be a portion of main system memory stolen or otherwise allocated from an operating system or running process, and set aside for use when system 100 is transitioned to a low power state. In other embodiments, memory store 108 may be a separate and/or dedicated memory unit specifically for saving firmware and working state information of each core 102.
IDI 106, in embodiments, is an interface and internal communications bus for system 100 that allows relatively high-speed low-latency data transfers between various components of system 100, such as between cores 102, any cache memories, and/or other components of system 100 that require high bandwidth with low latency. In one embodiment, IDI 106 runs at a clock speed ranging from several hundred megahertz up to several gigahertz, and may match the maximum clock speed of a given core 102. IDI 106 may also be comparatively wide; in one embodiment, IDI 106 is 256 bits wide. Other embodiments may use a narrower or wider bus width, depending upon the needs of a specific implementation. Compared to other internal busses that may be present within system 100, IDI 106 can be significantly faster. Other internal busses may have a maximum clock speed of several hundred megahertz, and/or a width less than 64 bits, 32 bits, 16 bits, or smaller, depending upon the intended purpose of the bus. The wide bandwidth of IDI 106 can allow firmware and state information to be transferred to an
external memory, such as memory store 108, with minimal latency.
Power management agent 110, in embodiments, is responsible for transitioning its associated core 102 between power states, such as states C0, C3, and (in some implementations) C6 as described above. As such, power management agent 110 may be configured to power gate, e.g. turn on or off, various components of core 102. Power management agent 110 may include microcontroller 112, as well as a storage 114 (depicted as a RAM unit). Microcontroller 112, in embodiments, is responsible for providing at least some of the functionality of power management agent 110. In other embodiments, microcontroller 112 may also or alternatively provide functionality to the processing core of core 102, described further herein with respect to Fig. 2. Storage 114 may be used by microcontroller 112 and/or power management agent 110 to store both microcontroller firmware as well as working state information, e.g. register values, internal states of the microcontroller 112, temporary data, etc. Still further, power management agent 110 may include a finite state machine (not depicted) to coordinate and transition between power states and the steps necessary to transition. Storage 114 may also be used by this finite state machine to track the current machine state.
In addition to each core 102a to 102f having multiple power states, system 100 as a whole may have multiple power states. For example, one embodiment of system 100 may include a PkgC0 state, where all components of the package are powered (or capable of being powered), a PkgC3 state, where some components, such as each core 102a to 102f and possibly some components external to each core (e.g. uncore), are powered down, and a PkgC6 state, where substantially all package components are powered down, effectively turning the entirety of system 100 off. In some embodiments, either a package control unit (PCU) 116, another component, or a portion thereof, may remain with minimal power to allow the package to be waked from a PkgC6 state. In other embodiments, such as where PkgC6 effectively shuts the entirety of system 100 off, system 100 may need to be waked from a PkgC6 state by some circuitry or component external to system 100.
The PCU 116, in embodiments, can act to coordinate various functions of system 100, such as management of various busses, package power state transitions, signaling of component power state, clock control and alteration, and other necessary tasks for the operation of system 100. PCU 116 may sit outside of the various cores 102a to 102f, and so constitute part of the “uncore” of system 100, namely the various components on system 100 that are external to, but may support, one or more cores 102. In the depicted embodiment, the PCU 116 communicates with the various components of system 100 via
IDI 106. In other embodiments, PCU 116 may communicate with one or more components over other busses, instead of or in addition to the IDI 106. In still other embodiments, PCU 116 may be in direct communication with one or more components of system 100.
System 100 may be implemented as a single physical package, such as a System on a Chip (SoC) configuration. An SoC configuration may be advantageous in implementing a mobile product that uses system 100. In addition to the various components depicted in Fig. 1, in such a SoC or other embodiment of system 100, other logic blocks are present, such as a memory manager, graphics subsystem, peripheral bus manager, I/O manager, power regulator or manager, and/or any other logic block to enable a single physical package to supply all or substantially all functionality of a computer system utilizing system 100. These components are omitted for ease of understanding the disclosed embodiments. Alternatively, system 100 may be one component of a system with multiple physical packages, such as a general purpose multi-core processor along with a supporting chipset. The chipset can include a northbridge chip and a southbridge chip, along with other components such as memory, a memory management unit (if not integrated into the northbridge chip), a graphics subsystem, a peripheral management unit, and other components appropriate to a given implementation.
Turning to Fig. 2, the components of a core 102 are depicted in greater detail. In the depicted embodiment, core 102 includes a nucleus core 202. Other components that comprise the core perimeter include a fabric interface logic (FIL) 204 and associated bubble generating first-in-first-out (FIFO) BGF 214, the power management agent (PMA) 206 and associated microcontroller 216 and RAM 218, as discussed above with respect to Fig. 1, one or more power delivery rails 208, a phase locked loop (PLL) 210, and a digital thermal sensor (DTS) 212. As indicated, these components may comprise at least part of ACP 103, described above. Other depicted components and connections will be discussed below.
Nucleus core 202, in embodiments, includes the logic and other various components that carry out execution of one or more software threads. These structures can vary depending upon the particulars of a given processor implementation. Nucleus core 202 may include structures such as one or more arithmetic logic units, floating point units, translation lookaside buffers, branch predictors, register files, multiplexers, decoders, caches, and other such components. The various structures may be organized into one or more multi-stage pipelines to optimize instruction throughput. The nucleus core 202 may
be capable of being run at speeds of several gigahertz, and may achieve instruction throughputs better than one operation per clock cycle (e.g. superscalar performance).
Nucleus core 202, in embodiments, communicates with one or more components of ACP 103 as well as IDI 106 via FIL 204. This connection is depicted via connector 224. FIL 204 may be configured to provide a connection “fabric”, where various components are communicatively coupled via a mesh of connections, potentially enabling connected components to directly communicate, e.g. point to point, through FIL 204. FIL 204 may also connect to PMA 206 via connector 222. Although not depicted, FIL 204 may further connect to other components within core 102 to facilitate in-core communications. These other connections may be made via other internal busses, which may run at varying speeds and have varying data widths. FIL 204 may, in such embodiments, coordinate buffering of data transfer between components that run at different clock speeds.
Included within FIL 204, in the embodiment of Fig. 2, is BGF 214, the bubble generating FIFO (first in first out). BGF 214 is configured to allow data coming to or from various internal busses of core 102 to operate at differing clock speeds and/or data widths. In this respect, BGF 214 may include buffering capabilities, allowing data to be stored temporarily between bursts from a high bandwidth bus, such as IDI 106, until the data can be fully transferred onto a low bandwidth bus; similarly, it may store data transmitted from a low bandwidth bus until a sufficient amount is obtained to allow it to be burst transferred onto a high bandwidth bus, such as IDI 106.
PMA 206, as discussed above, can handle managing the core power states, e.g. C0, C3, and C6, including transitioning between the various power states, as well as power gating internal components, such as nucleus core 202, PLL 210, DTS 212, and/or other modules. PMA 206, in embodiments, is connected to FIL 204 via connector 222. Connector 222 may comprise an internal bus, which may be of the same or a different bandwidth from IDI 106. Where connector 222 runs slower and/or is narrower than IDI 106, data to or from PMA 206 via connector 222 may pass through BGF 214 to reach IDI 106, where BGF 214 handles translating between clock domains and bandwidth differences, as discussed above. PMA 206 also includes microcontroller 216 and RAM 218, similar to microcontroller 112 and RAM 114 depicted with respect to Fig. 1. In the embodiment depicted in Fig. 2, PMA 206 also may communicate via a sideband interface (SI) 220. SI 220 may connect to similar structures as IDI 106, but allow for out-of-band signaling without consuming bandwidth of IDI 106, particularly when the signaling is of a
relatively small payload size. SI 220 may connect within core 102, and/or may connect to one or more uncore components, such as package control unit 116, other power control or management modules, etc.
Microcontroller 216, in embodiments, coordinates the functioning of one or more components of core 102. For example, microcontroller 216 may provide control signaling to nucleus core 202. Depending upon the specific architecture of nucleus core 202, microcontroller 216 may also provide instruction translation and/or decoding, where instructions in the ISA of core 102 are translated into one or more micro-operations for execution by nucleus core 202. For example, some implementations of nucleus core 202 may employ a simplified or reduced instruction set offering only primitive operations, but that can be executed at high speed. Instructions of the ISA for system 100 are broken down into these primitive operations by or under the control of microcontroller 216 prior to processing by nucleus core 202. Likewise, microcontroller 216 may coordinate formatting any data or other results of execution by nucleus core 202 into data or structures conforming to the ISA for system 100. Microcontroller 216, as suggested above with respect to Fig. 1, may also coordinate and/or control operations of other components of core 102, such as one or more components of ACP 103. These functions can include power transitioning via PMA 206, configuration and management of FIL 204 (and associated BGF 214), clock speeds (via PLL 210), throttling of the performance of nucleus core 202 based on sensed conditions (such as over-temperature conditions detected by DTS 212), management of various in-core busses (such as connectors 222 and 224), and any other suitable tasks for managing operations of core 102.
Although depicted as a part of PMA 206, in other embodiments microcontroller 216 may be a separate module or component of core 102. In still other embodiments, RAM 218 may be a part of microcontroller 216, or may be a discrete component or separate module of core 102.
As discussed above, microcontroller 216 may utilize a storage such as RAM 218 during execution. When core 102 is halted, including halting microcontroller 216, the contents of RAM 218 may need to be preserved to ensure that microcontroller 216 can resume execution from the point of halting, thus allowing core 102 to resume execution from its halt point following being placed into a power saving state such as C6 or PkgC6.
Depending upon the specific implementation of RAM 218, RAM 218 may require continuous power to maintain its contents (e.g. DRAM). While non-volatile memory storage may also be used, it may not offer the same performance as a DRAM. Where
RAM 218 is implemented with DRAM, its contents must be copied to external storage, powered separately from core 102 (and potentially system 100, as discussed above), prior to fully powering down core 102. Fully powering down core 102 in such implementations also results in RAM 218 being depowered, and thus losing its contents. If the contents of RAM 218 are not preserved, then the microcontroller 216 will be unable to resume its execution from prior to powering down. As a result, core 102 will need to be reinitialized, introducing potential latency and/or data loss.
RAM 218 may also include a firmware image for microcontroller 216. As microcontroller 216, in embodiments, is essentially a specific-purpose computer, it may run a form of a minimal or application-specific operating system via firmware that governs how core 102 operates. This firmware may, in some embodiments, be hard coded or burned into microcontroller 216, or another appropriate structure within core 102. Additionally, some embodiments may allow a new or updated firmware to be loaded into core 102, as discussed above. This new or updated firmware may, in some embodiments, be dynamically loaded by a computer’s BIOS, firmware, or operating system following and/or as part of powering up and initializing system 100, along with core 102. In some embodiments, this dynamically loaded firmware is placed into a portion of RAM 218. As with the working state information, this firmware image must be stored external to core 102 prior to powering down of RAM 218. Failure to do so would require the computer or its operating system to reload the new firmware following reinitialization of core 102, which may not be feasible in some implementations, and so require the entire computer/operating system to be rebooted.
Power delivery rail 208 may comprise one or more power rails to supply power to various components within core 102. Where power delivery rail 208 includes multiple rails, each rail may carry different power specifications, e.g. different voltages, different current capacities, etc., depending upon the requirements of components connected to a given rail. Further, multiple rails (either carrying the same power or power of varying specifications) may be employed to allow subsets of components of core 102 to be power gated. For example, nucleus core 202 may be placed on a single power rail 208, FIL 204 may be placed on another rail, and PMA 206 (with microcontroller 216) may be placed on yet another rail. PMA 206 and/or microcontroller 216 may be configured to power gate the various rails of power delivery rail 208. In such embodiments, PMA 206 can power gate nucleus core 202, such as when core 102 is placed into a C3 state, while maintaining power to FIL 204, PMA 206, microcontroller 216, and RAM 218. In such a state,
incoming messages can be processed by FIL 204 without the need to power up nucleus core 202, and PMA 206 with microcontroller 216 can maintain control over power gates.
PLL 210, a phase locked loop, provides clock services for core 102, in embodiments. These clock services may include varying clock speeds for different components. For example, nucleus core 202 may require a speed up to several gigahertz, while FIL 204 may only require a clock speed of several hundred megahertz. Microcontroller 216 may require yet another clock speed. Further, PLL 210 may allow the clock speed provided to various components to be boosted or throttled depending upon specific performance requirements for core 102.
DTS 212, the digital thermal sensor, may be included in core 102 to monitor its internal temperature condition. When nucleus core 202 and/or other components of core 102 are heavily loaded and/or subject to a high clock speed, they may generate more heat than can be feasibly dissipated by the package of system 100. Consequently, the internal temperature will rise as heat builds up, and may exceed the thermal limits of system 100, potentially resulting in damage to system 100 or one or more of its components. DTS 212, upon detecting a temperature condition approaching or exceeding design limits, can cause the speed of nucleus core 202 (and/or other components) to be throttled at least temporarily, to bring heat generation down to a level where it can be safely dissipated by the package of system 100. In some embodiments, this throttling is handled via microcontroller 216, which accepts data from DTS 212 as an input, and in turn controls PLL 210 to throttle the speed of nucleus core 202. In other embodiments, DTS 212 may be directly coupled to PLL 210 in a control or feedback loop, where a sensed over-temperature condition will automatically cause PLL 210 to throttle clock speeds.
System 100 (and associated cores 102a to 102f), as will be understood, may be embodied as a general purpose processor, suitable for use in various consumer devices such as phones, tablets, watches, servers, laptops, desktops, network devices, embedded systems, and other similar implementations. Example processors may include, but are not limited to, various microprocessors such as general-purpose processors that may be used for general-purpose computing, and/or microprocessors that are purpose-built, such as specifically for processing of digital signals, and more specifically for processing of digital audio signals. Examples may include processors of the iAPX family, ARM family, MIPS family, SPARC family, PA-RISC family, POWER family, or any other suitable processor architecture now known or later developed. Still other embodiments may use an application-specific integrated circuit (ASIC) or field-programmable gate array (FPGA)
for at least part of the components, such as FIL 204, microcontroller 216, PMA 206, and other components of ACP 103.
It should also be understood that in some embodiments of system 100, the various components may use a variety of different arrangements, including different types, so long as a given implementation maintains any necessary functionality. For example, portions of system 100 may be implemented as software (such as firmware for microcontroller 112/216) with other portions implemented in hardware. It should be appreciated that the various blocks in Figs. 1 and 2 are simply logical depictions of functions; the actual implementation of the blocks can vary from embodiment to embodiment, with functions of different blocks potentially being split or combined into one or more software and/or hardware modules. Some of the components may be omitted or moved to other locations, depending upon a given implementation.
In Fig. 3, the operations of an example method 300 for saving microcontroller firmware and working state information when potentially transitioning the package of a processor to a low power state are depicted. The operations of method 300 may be performed in whole or in part, and may be performed by one or more components of system 100 and/or a core 102, such as by one or more components of an autonomous core perimeter 103, including a PMA 110/206. Some operations or portions of operations may be performed by a system package, which, in embodiments, may comprise system 100 and its physical packaging, e.g. a system package may be a single physical package, such as a SoC. The following should be read in light of the foregoing discussion of Figs. 1 and 2, including the foregoing description of the functionality of the various components of system 100 and core 102.
Starting with operation 302, a signal to save state is received, such as by a component of ACP 103. The signal may be sent by a component internal to system 100, such as PCU 116, and/or may originate from outside of system 100, such as by an external power manager or system BIOS or firmware. The signal may be received via an in-die interface, or may be received via a sideband or out of band bus or signaling channel.
In operation 304, the microcontroller is halted, such as by ACP 103. A PMA 206 may coordinate halting the microcontroller. Halting the microcontroller, at least temporarily, may be desirable to ensure that the working state of the microcontroller does not change while it is in the process of being saved.
In operation 306, it is determined whether the firmware image for the microcontroller has been saved to an external store, such as memory store 108. As
discussed above, the firmware image, particularly a-code that is dynamically loaded upon system start up, is typically identical across all cores, and further requires significantly more storage than the working state of each microcontroller. Thus, it is redundant, unnecessary, and wasteful of storage resources to store identical copies of the firmware from each core. Furthermore, the greater the amount of data that must be transferred outside of the core to an external storage, the greater the latency that is imposed when transitioning system 100 to a low power state. This latency can be saved by only saving a single copy of the firmware image, such as from the first core (in a multi-core system) to save its state. In operation 306, a flag or other signaling mechanism within system 100 may be utilized to indicate whether one of the cores has saved a copy of the firmware image. Some examples of possible signaling include setting a register or flag that is accessible to all cores in system 100, asserting a line, such as on an internal bus, that indicates to all cores that the firmware image is saved, pushing a flag or notification to all cores via an internal bus, or any other method of signaling the ACP of each core that the firmware image has been saved, and need not be saved again.
If operation 306 determines that the firmware image has not yet been saved to an external storage, method 300 proceeds to operation 308, where the shared firmware image is pushed to the external storage. This may be accomplished by ACP 103, which formats the firmware image and places it onto the IDI 106, using a circuitry 104. As discussed above, in embodiments, the image may be formatted and placed onto the IDI 106 via FIL 204, through BGF 214. Once the firmware image has been saved, the other cores are signaled to this fact, as discussed above, so that further saves are not attempted. ACP 103 and/or FIL 204, in embodiments, may obtain the address or addresses in the external storage to push the firmware image and (in operation 310 below) the working state information.
This address information may be obtained using any suitable technique, such as obtaining the address from a memory manager, a package control unit, an operating system, the memory storage unit, or another source. In some embodiments, this address information may be received over IDI 106, as a data block or other suitable format appropriate to a given implementation of the IDI and any supporting circuitry. Initial address information may be obtained prior to storing of the firmware image. This initial address information, in embodiments, may be obtained by ACP 103 and received over IDI 106.
Depending upon the implementation, the firmware may only need to be saved once
while the computer system employing system 100 is powered on. For example, where the firmware image is loaded on boot-up and otherwise never changes, a copy of the firmware image may be retained, such as by an operating system, in a system storage. In other implementations, the firmware image may only be saved once, upon the first time the state information of a first core is saved. In either such implementation, the firmware-save path through operation 308 may never be followed for subsequent transitions of the system to a low power state, as the firmware image simply remains in system memory at least for the duration that the computer system remains powered.
Following completion of operation 308, or if operation 306 determines that the firmware image is already saved or does not need to be saved, the working state information of the core is similarly pushed to the external storage in operation 310, via the same mechanisms as described above for the firmware image in operation 308.
Once the working state information is saved, in operation 312 the microcontroller may be unhalted. Because the transition of the system to a low power state may be aborted, the microcontroller may be required to bring its core back from a halted or low power state if the system transition to a low power state is aborted. If the system completes transition to a low power state, the microcontroller may be subsequently power gated. In some embodiments, operation 312 may be omitted, such as where the system immediately proceeds to powering down the package.
It may be understood that the working state of the microcontroller may change between the time the working state is stored, in operation 310, and the microcontroller is finally power gated. However, these changes can be ignored. If the microcontroller is power gated, its working state will be restored to the state pushed to the external storage, which is the expected point based on when the signal to save state is received in operation 302. The microcontroller is not expected to incur any significant state changes between saving of the working state and power gating. Conversely, if the transition to a low power state is aborted, then the core and associated microcontroller will continue with execution as normal, and the working state pushed to the external storage can be ignored, as it will be overwritten by a new working state upon the next execution of operation 302.
The firmware image and working states are, in embodiments, stored into a storage unit that is external to system 100, and so can allow system 100 to enter a deep power saving state, where it is fully or nearly fully powered down. The storage unit, as discussed above, remains powered. As discussed above with respect to Fig. 1, the storage unit may be a portion of main system memory stolen or otherwise allocated from an operating
system and/or applications (particularly when the application or applications are being slept).
While method 300 is depicted as being performed by a single core, method 300 may be performed by each core in a system 100, either serially, in parallel, or a combination of serial and parallel execution.
Turning to Fig. 4, the operations of an example method 400 for restoring microcontroller firmware and working state information when returning the package of a processor from a low power state are depicted. The operations of method 400 may be performed in whole or in part, and may be performed by one or more components of system 100 and/or a core 102, such as by one or more components of an autonomous core perimeter 103, including a PMA 110/206. As with method 300, some operations or portions of operations may be carried out at a package or system package level, particularly where system 100 is implemented as a SoC, in a single package. The following should be read in light of the foregoing discussion of Figs. 1 and 2, including the foregoing description of the functionality of the various components of system 100 and core 102.
Starting in operation 402, a signal to wake the system package, such as system 100, is received. Depending upon how deep the package is placed into a power saving state, this signal may need to come from a source external to the system. In other embodiments, an external signal may first be sent to a package control unit, which in turn signals each core in the system to begin restoring state and transitioning to a higher power level. The mechanics by which these signals are handled may vary depending upon the specifics of a given implementation, and which components within a system handle power gating and powering the system package. Part of operation 402 may include powering at least a portion of a core perimeter in each core, such as an ACP 103, which may then assume responsibility for executing the remaining operations of method 400 upon its associated core.
Following receipt of a wake-up signal, in operation 404, the shared firmware is retrieved from the external storage, along with the core's unique working state information. Depending upon the specifics of a given implementation, one core of multiple cores may coordinate retrieval of the shared firmware, which may be placed onto an in-die interface or otherwise buffered into the system. In this way, the shared firmware image need only be retrieved from the external storage once; it may then be copied internally within the system to all cores.
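A compact C sketch of this retrieve-once, fan-out-to-all-cores idea follows; the layout of the external store and the sizes mirror the assumptions made in the earlier save sketch and are not an actual memory map. Each core's unique working state is restored from its own slot along the same path, after which that core's microcontroller could be resumed.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NUM_CORES        6u
#define FW_IMAGE_BYTES   (256u * 1024u)  /* assumed sizes, matching the save sketch */
#define CORE_STATE_BYTES (4u * 1024u)

struct core_ram {                        /* stand-in for microcontroller-local RAM (e.g. RAM 218) */
    uint8_t firmware[FW_IMAGE_BYTES];
    uint8_t working_state[CORE_STATE_BYTES];
};

/* Read the shared firmware image out of the external store once, then copy
 * it internally to every core together with that core's unique state. */
static void restore_all_cores(const uint8_t *external_store,
                              struct core_ram cores[NUM_CORES])
{
    const uint8_t *fw_image = external_store + (size_t)NUM_CORES * CORE_STATE_BYTES;

    for (unsigned c = 0; c < NUM_CORES; c++) {
        memcpy(cores[c].firmware, fw_image, FW_IMAGE_BYTES);   /* single shared image   */
        memcpy(cores[c].working_state,
               external_store + (size_t)c * CORE_STATE_BYTES,
               CORE_STATE_BYTES);                              /* per-core unique state */
        /* ...at this point the core's microcontroller could be resumed. */
    }
}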
In operation 406, the firmware is pushed to each core, and specifically, may be pushed into the storage associated with each microcontroller of each core. This pushing may be handled by the autonomous core perimeter (including the circuitry connecting the ACP to the IDI). In other embodiments, this pushing may be at least partially handled by an uncore structure (e.g. component that is not located within a particular core). As with storage, the ACP or other structure handling restoring the firmware may obtain the address or addresses within the external storage to locate the shared firmware image from a suitable source (and which may be transmitted over an IDI, such as IDI 106 in a data block or other suitable format), as described above with respect to operation 306.In operation 408, similar to operation 406, the unique working state is pushed to each core, in similar fashion to the shared firmware image. As with operation 406, the address of each unique working state may be obtained and provided to each core’s ACP, to separately pull the working state information from the external memory.Finally, in operation 410, once the shared firmware image and unique working state information has been pushed to each core and placed into each microcontroller’s associated storage, each core may be transitioned to a higher power, more operative state.As will be appreciated by one skilled in the art, the present disclosure may be embodied as methods or computer program products. Accordingly, the present disclosure, in addition to being embodied in hardware as earlier described, may take the form of an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to as a“circuit,”“module” or“system.” Furthermore, the present disclosure may take the form of a computer program product embodied in any tangible or non-transitory medium of expression having computer-usable program code embodied in the medium.Fig. 5 illustrates an example computer-readable non-transitory storage medium that may be suitable for use to store instructions that cause an apparatus, in response to execution of the instructions by the apparatus, to practice selected aspects of the present disclosure. As shown, non-transitory computer-readable storage medium 1202 may include a number of programming instructions 1204. Programming instructions 1204 may be configured to enable a device, e.g., system 100 and/or one or more cores 102, in response to execution of the programming instructions, to implement (aspects of) the methods 300 and/or 400 described above. Further, some aspects of the various components of a core 102 may be implemented via microcontroller 112 executing programming instructions 1204. The firmware image may be implemented with programming instructions 1204. In alternate
embodiments, programming instructions 1204 may be disposed on multiple computer- readable non-transitory storage media 1202 instead. In still other embodiments, programming instructions 1204 may be disposed on computer-readable transitory storage media 1202, such as, signals.Any combination of one or more computer usable or computer readable medium(s) may be utilized. The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non- exhaustive list) of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a transmission media such as those supporting the Internet or an intranet, or a magnetic storage device. Note that the computer- usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of this document, a computer-usable or computer- readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer- usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer usable program code may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable,RF, etc.Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the“C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.In the latter scenario, the remote computer may be connected to the user’s computer
through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer programinstructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.Fig. 6 illustrates an example computing device 1300 that may employ the apparatuses and/or methods described herein (e.g., system 100, core 102, method 300 and/or method 400), in accordance with various embodiments. As shown, computing device 1300 may include a number of components, such as one or more processor(s) 1304(one shown) and at least one communication chip 1306. In various embodiments, the one or more processor(s) 1304 each may include one or more processor cores. In various embodiments, the at least one communication chip 1306 may be physically and electrically coupled to the one or more processor(s) 1304. In further implementations, the communication chip 1306 may be part of the one or more processor(s) 1304. In various
embodiments, computing device 1300 may include printed circuit board (PCB) 1302. For these embodiments, the one or more processor(s) 1304 and communication chip 1306 may be disposed thereon. In alternate embodiments, the various components may be coupled without the employment of PCB 1302.Depending on its applications, computing device 1300 may include other components that may or may not be physically and electrically coupled to the PCB 1302. These other components include, but are not limited to, memory controller 1305, volatile memory (e.g., dynamic random access memory (DRAM) 1308), non-volatile memory such as read only memory (ROM) 1310, flash memory 1312, storage device 1311 (e.g., a hard-disk drive (HDD)), an I/O controller 1314, a digital signal processor (not shown), a crypto processor (not shown), a graphics processor 1316, one or more antenna 1318, a display (not shown), a touch screen display 1320, a touch screen controller 1322, a battery 1324, an audio codec (not shown), a video codec (not shown), a global positioning system (GPS) device 1328, a compass 1330, an accelerometer (not shown), a gyroscope (not shown), a speaker 1332, a camera 1334, and a mass storage device (such as hard disk drive, a solid state drive, compact disk (CD), digital versatile disk (DVD)) (not shown), and so forth. In various embodiments, the processor 1304 may be integrated on the same die with other components to form a System on Chip (SoC).In some embodiments, the one or more processor(s) 1304, flash memory 1312, and/or storage device 1311 may include associated firmware (not shown) storing programming instructions configured to enable computing device 1300, in response to execution of the programming instructions by one or more processor(s) 1304, to practice all or selected aspects of the methods described herein. In various embodiments, these aspects may additionally or alternatively be implemented using hardware separate from the one or more processor(s) 1304, flash memory 1312, or storage device 1311.In various embodiments, one or more components of the computing device 1300 may include the system 100 or core 102, and/or may implement one or more operations of method 300 and/or method 400 described herein. For example, the system 100 or core 102 may be implemented in processor 1304, communication chip 1306, I/O controller 1314, memory controller 1305, and/or another component of computing device 1300.The communication chips 1306 may enable wired and/or wireless communications for the transfer of data to and from the computing device 1300. The term“wireless” and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated
electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 1306 may implement any of a number of wireless standards or protocols, including but not limited to IEEE 702.20, Long Term Evolution (LTE), LTE Advanced (LTE- A), General Packet Radio Service (GPRS), Evolution Data Optimized (Ev-DO), Evolved High Speed Packet Access (HSPA+), Evolved High Speed Downlink Packet Access (HSDPA+), Evolved High Speed Uplink Packet Access (HSUPA+), Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), WorldwideInteroperability for Microwave Access (WiMAX), Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 1300 may include a plurality of communication chips 1306. For instance, a first communication chip 1306 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 1306 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.In various implementations, the computing device 1300 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a computing tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit (e.g., a gaming console or automotive entertainment unit), a digital camera, an appliance, a portable music player, or a digital video recorder. In further implementations, the computing device 1300 may be any other electronic device that processes data.The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure.In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each
block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms“a,”“an” and“the” are intended to include plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or“comprising,” when used in this specification, specific the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operation, elements, components, and/or groups thereof.Embodiments may be implemented as a computer process, a computing system or as an article of manufacture such as a computer program product of computer readable media. The computer program product may be a computer storage medium readable by a computer system and encoding a computer program instructions for executing a computer process.The corresponding structures, material, acts, and equivalents of all means or steps plus function elements in the claims below are intended to include any structure, material or act for performing the function in combination with other claimed elements are specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill without departing from the scope and spirit of the disclosure. The embodiment was chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for embodiments with various modifications as are suited to the particular use contemplated.EXAMPLESThe following examples pertain to further embodiments.Example 1 includes a multi-core processor, comprising two or more cores, each core including a microcontroller and coupled to an autonomous core perimeter logic; and circuitry in communication with each autonomous core perimeter logic adapted to, based on receipt of a signal to place the processor into a low power state, halt the microcontroller
of at least one of the two or more cores, save firmware code from the microcontroller of a first one of the two or more cores, and save state information from the microcontroller of each of the two or more cores; and the circuitry is further adapted to, based on receipt of a signal to return the processor from the low power state, restore the firmware code to all of the cores; and restore the respective state information to each core.Example 2 includes the subject matter of example 1, or some other example herein, wherein the circuitry is in communication with a memory unit, and is to store the firmware code and state information to the memory unit.Example 3 includes the subject matter of example 1 or 2, or some other example herein, wherein the circuitry is to communicate with the memory unit over an in-die interface.Example 4 includes the subject matter of any of examples 1-3, or some other example herein, wherein the circuitry comprises a power management agent.Example 5 includes the subject matter of any of examples 1-4, or some other example herein, wherein the circuitry is further to communicate with the memory unit with a bubble generation first in first out (FIFO) structure.Example 6 includes the subject matter of any of examples 1-5, or some other example herein, wherein the autonomous core perimeter logic comprises a fabric interface logic.Example 7 includes the subject matter of any of examples 1-6, or some other example herein, wherein the processor comprises a System on a Chip (SoC).Example 8 includes the subject matter of any of examples 1-7, or some other example herein, wherein the circuitry is to resume the microcontroller after the firmware code and respective state information have been saved.Example 9 includes a non-transitory computer readable medium (CRM) containing instructions executable by a circuitry in a processor, that when executed cause the circuitry to halt a microcontroller contained in a core perimeter logic, the core perimeter logic associated with a first processing core of multiple processing cores, wherein each of the multiple processing cores is associated with a core perimeter logic and shares a common microcontroller firmware code; save state information from the microcontroller of the perimeter logic; determine whether the microcontroller firmware code has been saved; and if the microcontroller firmware code has not been saved, save the microcontroller firmware code from the microcontroller of the perimeter logic.Example 10 includes the subject matter of example 9, or some other example
herein, wherein the instructions are to further cause the circuitry to resume the microcontroller once at least the state information has been saved.Example 11 includes the subject matter of example 9 or 10, or some other example herein, wherein the instructions are to cause the circuitry to save the state information and microcontroller firmware code to a memory unit.Example 12 includes the subject matter of any of examples 9-11, or some other example herein, wherein the instructions are to be executed by the circuitry following receipt of a signal to place the processor into a low power state.Example 13 includes the subject matter of any of examples 9-12, or some other example herein, wherein the instructions are to further cause the circuitry to, following receipt of a signal to wake the processor from the low power state, retrieve the firmware code and the state information for the perimeter logic from the memory unit; restore the firmware code and the state information to the microcontroller of the perimeter logic; and resume the microcontroller.Example 14 includes the subject matter of any of examples 9-13, or some other example herein, wherein the instructions are to further cause the circuitry to resume the microcontroller following receipt of a signal to abort placing the processor into a low power state.Example 15 includes the subject matter of any of examples 9-14, or some other example herein, wherein the instructions are to further cause the circuitry to receive an in die interface fabric interface logic data block that includes locations within a memory unit to store the firmware code and state information; and store the firmware code and the state information for the perimeter logic from the memory unit to the memory unit locations.Example 16 includes a system for managing power states on a multi-core processor, comprising multiple cores, each core coupled to an autonomous core perimeter; circuitry adapted to store firmware code and state information of each autonomous core perimeter; and a memory unit in data communication with the circuitry; wherein the circuitry is adapted to save to the memory unit the firmware code if not previously saved and state information from a first autonomous core perimeter of the multiple cores, and save to the memory unit state information for each remaining autonomous core perimeter of the multiple cores, based on receipt of a signal to place the processor into a low power state.Example 17 includes the subject matter of example 16, or some other example herein, wherein the autonomous core perimeter comprises a fabric interface logic.
Example 18 includes the subject matter of example 16 or 17, or some other example herein, wherein the circuitry comprises a power management agent.Example 19 includes the subject matter of example 18, or some other example herein, wherein the power management agent is in communication with the memory unit over an in-die interface.Example 20 includes the subject matter of any of examples 16-19, or some other example herein, wherein the circuitry is adapted to, based on receipt of a signal to return the processor from the low power state, restore the firmware code stored from the first autonomous core perimeter to each autonomous core perimeter of the multiple cores; and restore the state information to each respective autonomous core perimeter of the multiple cores.Example 21 includes the subject matter of example 20, or some other example herein, wherein the circuitry is to further halt each autonomous core perimeter based on receipt of the signal to place the processor into a low power state, and is to resume each autonomous core perimeter based on receipt of the signal to return the processor from the low power state.Example 22 includes the subject matter of any of examples 16-21, or some other example herein, wherein the firmware code and state information are associated with a microcontroller, the microcontroller comprising part of each autonomous core perimeter.Example 23 includes an integrated circuit, comprising multiple processing means; memory means; and means, coupled to each of the multiple processing means and coupled to the memory means, to store firmware code and state information associated with each processing means into the memory means; wherein, following receipt of a signal to place the integrated circuit into a low power state, the means to store firmware code and state information is to store the firmware code from one of the multiple processing means into the memory means if not previously stored, and store the state information from each of the multiple processing means into the memory means.Example 24 includes the subject matter of example 23, or some other example herein, wherein, following receipt of a signal to resume the integrated processor from the low power state, the means to store firmware code and state information is to retrieve the firmware code from the memory means and load it into each of the multiple processing means; retrieve the state information for each of the multiple processing means from the memory means; and load the state information of each of the multiple processing means into its respective processing means.
Example 25 includes the subject matter of example 23 or 24, or some other example herein, wherein each of the multiple processing means includes a controller means, the controller means associated with the state information of its respective processing means. |
Systems, methods, and apparatuses for data speculative execution (DSX) are described. In some embodiments, a hardware apparatus for performing DSX comprises a hardware decoder to decode an instruction, the instruction to include an opcode and an operand to store a portion of a fallback address, and execution hardware to execute the decoded instruction to initiate a DSX region by activating DSX tracking hardware to track speculative memory accesses and detect ordering violations in the DSX region, and by storing the fallback address.
1. An apparatus comprising: a hardware decoder to decode an instruction, the instruction including an opcode and an operand to store a portion of a fallback address; and execution hardware to execute the decoded instruction to initiate a data speculative execution (DSX) region by activating DSX tracking hardware to track speculative memory accesses and detect ordering violations in the DSX region, and storing the fallback address.
2. The apparatus of claim 1, wherein the portion of the fallback address is a displacement value, and the execution hardware is to add the displacement value to an instruction pointer of an instruction immediately following the decoded instruction.
3. The apparatus of claim 1, wherein the portion of the fallback address is a full address.
4. The apparatus of claim 1, wherein the operand to store a portion of the fallback address is an immediate value.
5. The apparatus of claim 1, wherein the operand to store a portion of the fallback address is a register.
6. The apparatus of claim 1, wherein the execution hardware is further to: determine that a restricted transactional memory (RTM) transaction is occurring and process the RTM transaction.
7. The apparatus of claim 1, further comprising: a DSX nesting counter to store a value corresponding to the number of DSX region begins that do not have a corresponding DSX region end.
8. A method comprising: decoding, with a hardware decoder, an instruction comprising an opcode and an operand to store a portion of a fallback address; and executing the decoded instruction to initiate a data speculative execution (DSX) region by activating DSX tracking hardware to track speculative memory accesses and detect ordering violations in the DSX region, and storing the fallback address.
9. The method of claim 8, wherein the portion of the fallback address is a displacement value, and the displacement value is added by the execution hardware to an instruction pointer of an instruction following the decoded instruction.
10. The method of claim 8, wherein the portion of the fallback address is a full address.
11. The method of claim 8, wherein the operand to store a portion of the fallback address is an immediate value.
12. The method of claim 8, wherein the operand to store a portion of the fallback address is a register.
13. The method of claim 8, wherein the executing further comprises: determining that a restricted transactional memory (RTM) transaction is occurring and processing the RTM transaction.
14. The method of claim 8, further comprising: storing a value corresponding to the number of DSX region begins that do not have a corresponding DSX region end.
15. A non-transitory machine-readable medium storing instructions that, when executed by a machine, cause circuitry to be fabricated, the circuitry comprising: a hardware decoder to decode an instruction, the instruction including an opcode and an operand to store a portion of a fallback address; and execution hardware to execute the decoded instruction to initiate a data speculative execution (DSX) region by activating DSX tracking hardware to track speculative memory accesses and detect ordering violations in the DSX region, and storing the fallback address.
16. The non-transitory machine-readable medium of claim 15, wherein the portion of the fallback address is a displacement value, and the execution hardware is to add the displacement value to an instruction pointer of an instruction immediately following the decoded instruction.
17. The non-transitory machine-readable medium of claim 15, wherein the portion of the fallback address is a full address.
18. The non-transitory machine-readable medium of claim 15, wherein the operand to store a portion of the fallback address is an immediate value.
Systems, Apparatuses, and Methods for Data Speculative Execution

Technical Field

The field of the invention relates generally to computer processor architectures, and more specifically to speculative execution.

Background

Loops with potential cross-iteration dependencies are known to be difficult to vectorize. An example of this type of loop is:

for (i = 0; i < N; i++) {
    A[i] = B[C[i]];
}

A naive (and incorrect) vectorization of this loop is easy to write. However, if the compiler that generates the vectorized version of the loop cannot first establish the addresses or alignment of A, B, and C, such a vectorization is not safe.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

Figure 1 is an embodiment of an exemplary block diagram of a processor core capable of performing data speculation extensions (DSX) in hardware;
Figure 2 shows an example of speculative instruction execution according to an embodiment;
Figure 3 shows a detailed embodiment of the DSX tracking hardware;
Figure 4 illustrates an exemplary method of detecting erroneous speculation performed by the DSX tracking hardware;
Figures 5(A)-(B) illustrate an exemplary method of detecting erroneous speculation performed by the DSX tracking hardware;
Figure 6 shows an embodiment of the execution of an instruction for starting a DSX;
Figure 7 shows some exemplary embodiments of the YBEGIN instruction format;
Figure 8 shows a detailed embodiment of the execution of an instruction such as a YBEGIN instruction;
Figure 9 shows an example of pseudocode showing the execution of an instruction such as a YBEGIN instruction;
Figure 10 shows an embodiment of the execution of an instruction for starting a DSX;
Figure 11 shows some exemplary embodiments of the YBEGIN WITH STRIDE instruction format;
Figure 12 shows a detailed embodiment of the execution of an instruction such as a YBEGIN WITH STRIDE instruction;
Figure 13 shows an embodiment of the execution of an instruction for continuing a DSX without terminating it;
Figure 14 shows some exemplary embodiments of the YCONTINUE instruction format;
Figure 15 shows a detailed embodiment of the execution of an instruction such as a YCONTINUE instruction;
Figure 16 shows an example of pseudocode showing the execution of an instruction such as a YCONTINUE instruction;
Figure 17 shows an embodiment of the execution of an instruction for aborting a DSX;
Figure 18 shows some exemplary embodiments of the YABORT instruction format;
Figure 19 shows a detailed embodiment of the execution of an instruction such as a YABORT instruction;
Figure 20 shows an example of pseudocode showing the execution of an instruction such as a YABORT instruction;
Figure 21 shows an embodiment of the execution of an instruction for testing the status of a DSX;
Figure 22 shows some exemplary embodiments of the YTEST instruction format;
Figure 23 shows an example of pseudocode showing the execution of an instruction such as a YTEST instruction;
Figure 24 shows an embodiment of the execution of an instruction for ending a DSX;
Figure 25 shows some exemplary embodiments of the YEND instruction format;
Figure 26 shows a detailed embodiment of the execution of an instruction such as a YEND instruction;
Figure 27 shows an example of pseudocode showing the execution of an instruction such as a YEND instruction;
Figures 28A-28B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the present invention;
Figures 29A-D show an exemplary specific vector friendly instruction format 2900, which is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields;
Figure 30 is a block diagram of a register architecture in accordance with one embodiment of the present invention;
Figure 31A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline in accordance with an embodiment of the invention;
Figure 31B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to an embodiment of the present invention;
Figures 32A-B show a block diagram of a more specific exemplary in-order core architecture, which would be one of several logic blocks (including other cores of the same type and/or different types) in a chip;
Figure 33 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to an embodiment of the present invention;
Figure 34 shows a block diagram of a system according to an embodiment of the invention;
Figure 35 shows a block diagram of a first more specific exemplary system in accordance with an embodiment of the present invention;
Figure 36 shows a block diagram of a second more specific exemplary system in accordance with an embodiment of the present invention;
Figure 37 shows a block diagram of a SoC according to an embodiment of the present invention; and
Figure 38 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set into binary instructions in a target instruction set in accordance with an embodiment of the present invention.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth. However, it should be understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description.

Reference in the specification to "one embodiment," "an embodiment," "example embodiment," etc., indicates that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

Throughout this specification, a speculative execution technique referred to as data speculation extensions (DSX) is described in detail. This description covers DSX hardware and new instructions that support DSX.

DSX is similar in spirit to a restricted transactional memory (RTM) implementation, but simpler. For example, a DSX region does not require an implied fence; instead, the normal load/store ordering is kept.
In addition, the DSX region does not have any configuration in the processor that sets forced atomic behavior for loading, whereas in RTM atomically handles the loading and storage of transactions committed when the transaction is completed. In addition, the loads are not buffered because they are in the RTM. However, when speculation is no longer needed, the storage is buffered and committed immediately. Depending on the embodiment, these stores may be buffered in a dedicated speculative execution store or in a shared register or memory location. In some embodiments, it is speculated that vectorization occurs only on a single thread, which means that there is no need to block interference from other threads.In the previously described vectorized loop, a dynamic check will be required for safety reasons. For example, it is guaranteed that write A does not overlap with elements in B or C read at a later iteration in a scalar loop for a given vector iteration. The examples that follow deal with vectorization scenarios by using the speculation. The speculative version indicates that each loop iteration should be speculatively performed (eg, using the instructions detailed below), and the hardware should assist in performing address checking. Instead of relying solely on the hardware responsible for address checking, which requires very expensive hardware, detailed methods use software to provide information to assist in hardware, to achieve cheaper hardware solutions without affecting execution time or over-provisioning programmers or compilers burden.Unfortunately, in the case of vectorization, there can be a sort violation. Looking back to the scalar loop example detailed above:for(i=0;i<N;i++){A [i] = B [C [i]];}During the first four iterations of this cycle, the following memory operations occur in the following order:Read C [0]Read B [C [0]]Write A [0]Read C [1]Read B [C [1]]Write A [1]Read C [2]Read B [C [2]]Write A [2]Read C [3]Read B [C [3]]Write A [3]The distance between visits to the same array (the number of operations) is three, and this is also the number of speculative memory instructions in the loop (which becomes SIMD) when it is vectorized. This distance is called "span." It is also the number of memory instructions in the loop, and address checking is performed on memory instructions when the loop is vectorized. In some embodiments, this span is passed to address tracking hardware (described in detail below) via special instructions at the beginning of the loop. In some embodiments, this instruction also clears the address tracking hardware.This article details new instructions (DSX memory instructions) used in DSX in situations such as vectorized loop execution. Each DSX memory instruction, such as loaded, stored, aggregated, and distributed, includes operands that are used during DSX to indicate a location within the DSX execution (eg, a location in the loop being executed). In some embodiments, the operand is an immediate (eg, 8-bit immediate) number that has the encoded order in immediate data. In other embodiments, the operands are registers or memory locations that store the encoded sequence of values.In addition, in some embodiments, these instructions have different opcodes than their normal counterparts. These instructions can be scalar or superscalar (eg, SIMD or MIMD). 
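Before the new instructions are introduced, it may help to see exactly how vectorization reorders the scalar accesses listed above. The fragment below is a hedged illustration only, written in plain C with each statement standing in for one four-wide SIMD operation (a vector load of C, a gather from B, a vector store to A); it assumes A, B, C, and N are declared elsewhere and is not the instruction sequence used elsewhere in this description.

/* Scalar original:  for (i = 0; i < N; i++) A[i] = B[C[i]];            */
/* Hypothetical 4-wide version; each line models one SIMD instruction.  */
for (int i = 0; i + 4 <= N; i += 4) {
    int c0 = C[i], c1 = C[i+1], c2 = C[i+2], c3 = C[i+3];    /* vector load of C  */
    int b0 = B[c0], b1 = B[c1], b2 = B[c2], b3 = B[c3];      /* gather from B     */
    A[i] = b0; A[i+1] = b1; A[i+2] = b2; A[i+3] = b3;        /* vector store to A */
}
/* In the scalar loop, B[C[1]] is read only after A[0] has been written; here it
 * is read before. If A[0] and B[C[1]] alias, the two versions produce different
 * results, which is the ordering violation the DSX tracking hardware detects.  */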
An example of some of these instructions is shown below where the mnemonic for the opcode includes "S" (underscored below) to indicate that it is a speculative version, and imm8 is a location for indicating execution (eg, the loop being executed In the position) of the immediate operand:VMOV S DQA32zmm1 {k1} {z}, mV, imm8 // Speculate SIMD loadVMOV S xmm1, m32, imm8 // Speculative scalar loadVSCATTER S DPS vm32z {k1}, zmm1, imm8 // Speculative dispersionOf course, other instructions may also vary with detailed operands and opcode mnemonics (and basic opcodes), such as logical (AND, OR, XOR, etc.) and data manipulation (add, subtract, etc.) instructions.In the vector version of the above scalar example (assuming four packed data elements of SIMD width), the order of memory operations is:Reading C [0], C [1], C [2], C [3]Reading B [C [0]], B [C [1]], B [C [2]], B [C [3]]Write A [0], A [1], A [2], A [3]If, for example, B [C [1]] overlaps with A [0], this sequence may result in incorrect execution. In the original scalar order, a read of B [C [1]] occurs after the write to A [0], but in a vectorized execution, it happens earlier.Use speculative memory instructions for operations in loops that may lead to incorrect execution to help solve this problem. As will be detailed, each speculative memory instruction informs the DSX trace hardware (described in detail below) that it is within the loop body:for(i=0;i<N;i+=SIMD_WIDTH){zmm0 = vmovsdqu32 & C [i], 0 // Tell the address tracker that this is instruction 0k1 = kxnor k1, k1zmm1 = vgathersdd B, zmm0, k1,1 // Tell the address tracker that this is instruction 1vmovsdqu & A [i], zmm1,2 // Tell the address tracker that this is instruction 2}The scalar memory operation can be reconstructed by combining the loop position information provided by each speculative memory operation with the span. As speculative memory instructions are executed, the DSX hardware tracker calculates an identifier (id) for each element (id = serial number + number of spans * number of elements within a SIMD operation). The hardware tracker uses the sequence number, the calculated id, and the address and size of each packed data element to determine if there is a sort violation (ie, if the element overlaps with another element and is read or written out of order).Each memory operation that includes each vector memory instruction is expanded, the span is accumulated for each expansion, and the resulting number is assigned as "ids" to produce:Read C [0] // id = 0Read C [1] // id = 3Read C [2] // id = 6Read C [3] // id = 9Read B [C [0]] // id = 1Read B [C [1]] // id = 4Read B [C [2]] // id = 7Read B [C [3]] // id = 10Write A [0] // id = 2Write A [1] // id = 5Write A [2] // id = 8Write A [3] // id = 11Sorting the above memory operations by id sorts the reconstructed raw scalar memory.Figure 1 is an embodiment of an exemplary block diagram of a processor core capable of performing data speculation extensions (DSX) in hardware.The processor core 106 may include a fetch unit 102 for fetching instructions for execution by the core 106. For example, instructions can be fetched from L1 cache or memory. The core 106 may also include a decode unit 104 for decoding fetched instructions, including the instructions detailed below. For example, the decoding unit 104 may decode the fetched instruction into a plurality of micro operations (micro ops).In addition, the core 106 may include a scheduling unit 107. 
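The id assignment listed above can be reproduced in a few lines of C. The program below is a hedged sketch (the names and output format are illustrative only): it computes id = sequence number + span * element for span = 3 and a SIMD width of 4, and prints the operations in ascending id order, which is exactly the original scalar order.

#include <stdio.h>

#define SPAN        3   /* speculative memory instructions per loop body    */
#define SIMD_WIDTH  4   /* packed data elements per speculative instruction */

int main(void)
{
    /* Walk ids 0..11 in order; since id = seq + SPAN * element, the pair
     * (seq = id % SPAN, element = id / SPAN) recovers which instruction and
     * which packed element the id belongs to.                              */
    for (int id = 0; id < SPAN * SIMD_WIDTH; id++) {
        int seq  = id % SPAN;   /* 0 = read C, 1 = read B[C], 2 = write A */
        int elem = id / SPAN;   /* packed element index                   */
        switch (seq) {
        case 0: printf("id %2d: read  C[%d]\n", id, elem);    break;
        case 1: printf("id %2d: read  B[C[%d]]\n", id, elem); break;
        case 2: printf("id %2d: write A[%d]\n", id, elem);    break;
        }
    }
    return 0;
}

Running this prints read C[0], read B[C[0]], write A[0], read C[1], and so on, confirming that walking the ids in ascending order reconstructs the scalar memory order that the tracker must enforce.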
Scheduling unit 107 may perform various operations associated with storing decoded instructions (eg, received from decoding unit 104) until the instructions are ready for dispatch, eg, until all of the source values from the operands of the decoded instruction Have become available. In one embodiment, scheduling unit 107 may schedule and / or publish (or dispatch) the decoded instructions to one or more execution units 108 for execution. Execution unit 108 may include a memory execution unit, an integer execution unit, a floating point execution unit, or other execution units. The retirement unit 110 may retire the executed instruction after the executed instruction is submitted. In embodiments, retiring these executed instructions results in submitting the processor state through execution of the instructions, deallocating the physical registers used by the instructions, and so on.The memory ordering buffer (MOB) 118 may include a load buffer, a memory buffer, and logic for storing pending memory operations that have not been loaded or written back to the main memory. In some embodiments, the MOB 118 or a circuit similar thereto stores speculative storage (writes) to the DSX region. In various embodiments, the core may include a local cache, eg, a private cache that may include one or more cachelines 124 (eg, cachelines 0 through W, and managed by cache circuitry 139) Cache 116). In one embodiment, each row of cache 116 may include a DSX read bit 126 and / or a DSX write bit 128 for each thread executing on core 106. Bits 126 and 128 may be set or cleared to indicate (load and / or store) access to the corresponding cache line through the DSX memory access request. Note that while each cache line 124 is shown as having respective bits 126 and 128 in the embodiment of FIG. 1, other configurations are possible. For example, DSX read bit 126 (or DSX write bit 128) may correspond to a selected portion of cache 116 (eg, a cache block or other portion of cache 116). Also, bits 126 and / or 128 may be stored at locations other than cache 116.To assist in performing the DSX operation, core 106 may include a DSX nesting counter 130 for storing values corresponding to the number of DSX encounters that have been encountered without the end of a matching DSX. The counter 130 may be implemented as any type of storage device (eg, a hardware register) or as a variable stored in a memory (eg, system memory or cache 116). The core 106 may also include a DSX nested counter circuit 132 for updating the value stored in the counter 130. The core 106 may include a DSX checkpoint circuit 134 for a checkpoint operation (or storage) of the state of a plurality of components of the core 106 and a DSX checkpoint circuit 134 for storing information by using it or stored in another location, such as the register 140 A back-off address (eg, at a given DSX abort) to recover the state of the plurality of components of the core 106. In addition, the core 106 may include one or more additional registers 140 corresponding to multiple DSX memory access requests, for example, DSX Status and Control Register (DSXSR) to store an indication of whether the DSX is active, a DSX instruction pointer (DSXXIP) (Eg, an instruction pointer that may be an instruction that points to the beginning (or immediately before) of the corresponding DSX), and / or a DSX Stack Pointer (DSXSP) (eg, may be a pointer to one or more components of memory core 106 Stack pointers for multiple state stack heads). 
These pointers can also be MSR 150.DSX address tracking hardware 152 (sometimes referred to simply as DSX tracking hardware) tracks speculative memory accesses and detects ordering violations in the DSX. Specifically, the tracking hardware 152 includes an address tracker that absorbs information to reconstruct and then implement the original scalar memory order. Typically, the inputs are the number of speculative memory instructions that need to be tracked in the loop body and some information about each of these instructions, such as: (1) the serial number, (2) the address the instruction accesses, and (3) the instruction Cause read memory or write memory. If two speculative memory instructions access overlapping portions of memory, the hardware tracker 152 uses this information to determine whether the original scalar order of memory operations has been changed. If so, and if one of the two operations is a write, the hardware triggers a false guess. Although FIG. 1 shows DSX tracking hardware 152 thereon, in some embodiments the hardware is part of other core components.Figure 2 shows an example of a speculative instruction execution according to an embodiment. At 201, speculative instructions are fetched. For example, speculative memory instructions, such as those detailed above, are fetched. In some embodiments, the instruction includes an opcode indicating its speculative nature and an operand indicating the ordering in the DSX. The sort operand can be an immediate value or a register / memory location.The fetched speculative instruction is decoded at 203.A determination is made at 205 as to whether the decoded speculative instruction is part of DSX. For example, did the DSX be indicated in the DSX Status and Control Register (DSXSR) detailed above? When the DSX is inactive, the instruction either becomes nop or is executed as a normal non-speculative instruction at 207 according to an embodiment.When the DSX is active, speculation is speculatively performed (eg, not submitted) at 209 and the DSX tracking hardware is updated.Figure 3 shows a detailed embodiment of the DSX address tracking hardware. This hardware tracks speculative memory instances. Typically, elements analyzed by the DSX trace hardware (eg, SIMD elements) are divided into sections called chunks that do not exceed the size of "B" bytes.The shift circuit 301 shifts the address of the block (such as the start address). In most embodiments, the shift circuit 301 performs a right shift. Typically, log 2 B is shifted to the right. The shifted address is subjected to a hash function performed by the hash function unit circuit 303.The output of the hash function is an index to the hash table 305. As described above, the hash table 305 includes a plurality of buckets 307. In some embodiments, hash table 305 is a Bloom filter. The hash table 305 is used to detect erroneous speculation, and is used to record the address, access type, serial number, and id number of the data that was presumed to be accessed. The hash table 305 includes N "sets", where each set includes M entries 309. Each entry 309 holds the valid bit, the sequence number, id number, and access type of the element of the speculative memory instruction previously executed. In some embodiments, each entry 309 also includes a corresponding address (shown as a dashed box in the figure). 
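The structures just described, together with the conflict rule that the following paragraphs state formally, can be modeled compactly in C. The sketch below is illustrative only: the struct layout, the trivial modulo hash, the parameter values, and all names are assumptions rather than the hardware design. Each of the N sets holds M entries recording a valid bit, sequence number, id, and access type; the set index is derived from the block address shifted right by log2(B); and a conflict is flagged when a valid entry and the element under test include at least one write and their (sequence number, id) ordering is inverted.

#include <stdbool.h>
#include <stdint.h>

#define B      64   /* block size in bytes (e.g., one cache line) */
#define LOG2_B  6
#define N      64   /* number of sets                             */
#define M       8   /* entries per set (associativity)            */

struct entry {                 /* models one entry 309 */
    bool     valid;
    uint32_t seq;              /* sequence number of the speculative instruction */
    uint32_t id;               /* seq + span * element index                     */
    bool     is_write;         /* access type                                    */
};

struct tracker {               /* models hash table 305: N sets of M entries */
    struct entry set[N][M];
};

struct access {                /* models the element under test 315 */
    uint64_t block_addr;       /* start address of the (at most B-byte) block */
    uint32_t seq, id;
    bool     is_write;
};

static unsigned set_index(uint64_t block_addr)
{
    return (unsigned)((block_addr >> LOG2_B) % N);   /* shift, then a trivial hash */
}

/* Conflict rule: valid entry, at least one write, and the scalar order implied
 * by (seq, id) is inverted. Address overlap is implied by hashing to the same set. */
static bool conflicts(const struct entry *e, const struct access *a)
{
    return e->valid &&
           (e->is_write || a->is_write) &&
           ((e->seq < a->seq && e->id > a->id) ||
            (e->seq > a->seq && e->id < a->id));
}

/* Check one block of the element under test against its set, then record it.
 * Returns true when a misspeculation must be signaled.                       */
static bool check_and_insert(struct tracker *t, const struct access *a)
{
    struct entry *set = t->set[set_index(a->block_addr)];

    for (int i = 0; i < M; i++)
        if (conflicts(&set[i], a))
            return true;                       /* ordering violation detected */

    for (int i = 0; i < M; i++)
        if (!set[i].valid) {                   /* record the access for later checks */
            set[i] = (struct entry){ true, a->seq, a->id, a->is_write };
            return false;
        }
    return true;                               /* set full: conservatively misspeculate */
}

Choosing B as the L1 cache line size, N as the number of L1 sets, and M as the L1 associativity, as the text notes below, lets this table mirror the organization of the L1 data cache.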
When an instruction that initiates a DSX (e.g., YBEGIN and its variants, detailed below) is executed, all valid bits are cleared and the "speculation active" flag is set; the flag is cleared when the instruction that ends the DSX is executed.

Collision checking circuit 311 checks each entry 309 against the element under test (or its block) 315 for conflicts. In some embodiments, there is a conflict when entry 309 is valid; at least one of the two accesses is a write (i.e., the access type in entry 309 is a write, or the access type of the element under test 315 is a write); and one of the following holds: i) the sequence number in entry 309 is less than the sequence number of the element under test 315 and the id number in entry 309 is greater than the id number of the element under test 315, or ii) the sequence number in entry 309 is greater than the sequence number of the element under test 315 and the id number in entry 309 is less than the id number of the element under test 315. In other words, there is a conflict when:

(entry is valid) AND ((access type in entry == write) OR (access type under test == write)) AND (((seq# in entry < seq# under test) AND (id# in entry > id# under test)) OR ((seq# in entry > seq# under test) AND (id# in entry < id# under test)))

Note that in most embodiments there is no explicit test for address overlap; the overlap is implied by hitting an entry in the hash table. A hit can occur even when the addresses do not actually overlap, because the hash function and/or the B-byte granularity of the check is too coarse (i.e., B is too large). However, whenever the addresses do overlap, there will be a hit. Correctness is therefore guaranteed, but false positives may exist (i.e., the hardware may signal erroneous speculation when no erroneous speculation has occurred). In an embodiment, a block address is stored in each entry 309, and an additional condition for detecting erroneous speculation (logically ANDed with the condition above) is applied: the address in entry 309 must equal the address of the element under test 315.

The OR gate 313 (or equivalent) logically ORs the results of the conflict checks. When the result of the OR operation is 1, erroneous speculation may have occurred, and the OR gate 313 indicates this through its output.

The total storage for this embodiment is M * N entries, which means that up to M * N speculatively accessed data elements can be tracked. In practice, however, a loop may access some of the N sets more heavily than others. If the space in any set is exhausted, in some embodiments a false positive is triggered to ensure correctness. Increasing M alleviates this problem, but may require more copies of the conflict checking hardware: to perform all M conflict checks at the same time (as is done in some embodiments), there are M copies of the conflict checking logic.

Selecting B, N, M, and the hash function appropriately allows the structure to be organized in much the same way as the L1 data cache. Specifically, let B be the cache line size, N the number of sets in the L1 data cache, M the associativity of the L1 data cache, and let the hash function simply take the least significant bits of the (shifted) address. This structure then has the same number of entries and the same organization as the L1 data cache, which simplifies its implementation.

Finally, it is noted that alternative embodiments use separate Bloom filters for reads and for writes, to avoid having to store the access type and to avoid having to check the access type during conflict checking.
In contrast, for reading, the embodiment performs a conflict check for a "write" filter only, and if there is no false guess, inserts the element into a "read" filter. Similarly, for writing, the embodiment performs a conflict check on both "read" and "write" filters and, if there is no false guess, inserts the element into a "write" filter.Figure 4 shows an exemplary method of DSX error speculation detection performed by the DSX trace hardware. At 401, start the DSX or submit a previous speculative iteration. For example, execute the YBEGIN instruction. Execution of this instruction clears the valid bits in entry 309 and sets the speculative activity flag in the status register, such as the DSX Status register detailed above, if it has not been set. The speculative memory instruction is executed after the DSX begins and the data elements being tested are provided.At 403, the tested data elements from the speculative memory instruction are divided into blocks that do not exceed B bytes. Access the hash table at B-byte granularity (ie, drop the low end of the address). If the elements are large and / or not aligned, they can cross the B-byte boundary, and if so, the element is divided into multiple blocks.Perform the following (405-421) operation on each block. Move the start of the block to the right by 2 log 2 B. The shifted address is hashed at 407 to generate an index value.Using the index value, a lookup of the corresponding set of hash tables is made at 409 and all entries of the set are read out at 411.For each read entry, a collision check is performed at 413 for the elements under test, such as the elements under test described above. Perform an OR operation on all conflict checking at 415. If any of the checks indicate a conflict at 417 (such that OR is 1) then an indication of the error is made at 419. The DSX is usually aborted at this point. If there is no false guess, an invalid entry in the collection is found at 421, and the invalid entry is populated with the information of the element being tested and marked as valid. If there are no invalid entries, then false speculation is triggered.5 (A) - (B) illustrate an exemplary method of DSX error speculation detection performed by DSX trace hardware. At 501, start the DSX or submit a previous guess iteration. For example, execute the YBEGIN instruction.At 503, execution of the instruction resets the trace hardware by clearing the valid bits in entry 309 and sets the speculative activity flag in the status register, such as the DSX status register detailed above, if it has not been set.At 505, a speculative memory instruction is executed. Examples of these instructions are detailed above. The counter, which is the element under test (e) from the speculative instruction, is set to zero at 507 and the id (id = sequence number + span * e) is calculated at 509.A determination is made at 511 as to whether any previous writes overlap with the counter value e. This acts as a dependency check on previously stored (written). For overlapping writes, a conflict check is performed at 513. 
In some embodiments, the conflict check ascertains whether: i) the sequence number in entry 309 is less than the sequence number of the element under test 315 and the id number in entry 309 is greater than the id number of the element under test 315, or ii) the entry The sequence number in 309 is greater than the sequence number of the element under test 315 and the id number in the entry 309 is less than the id number of the element under test 315.If there is a conflict, false speculation is triggered at 515. If not, or if there is no overlapping previous write, a determination is made at 517 as to whether the speculative memory instruction is a write.If so, a determination is made at 519 as to whether any previous reads overlap with the counter value e. This acts as a dependency check on the previous load (read). For overlapped reads, a conflict check is performed at 521. In some embodiments, the conflict check ascertains whether: i) the sequence number in entry 309 is less than the sequence number of the element under test 315 and the id number in entry 309 is greater than the id number of the element under test 315, or ii) the entry The sequence number in 309 is greater than the sequence number of the element under test 315 and the id number in the entry 309 is less than the id number of the element under test 315.If there is a conflict, erroneous speculation is triggered at 523. If not, or if there is no overlapping previous reads, counter e is incremented at 525.At 526, a determination is made as to whether counter e is equal to the number of elements in the speculative memory instruction. In other words, have you evaluated all the elements? If not, another id is calculated at 509. If so, then at 527 the hardware waits for another instruction to be executed. When the next instruction is another speculative memory instruction, the counter is reset at 507. When the next instruction is YBEGIN, hardware is reset at 503, and so on. When the next instruction is YEND, DSX is disabled at 529.YBEGIN instructionFigure 6 shows an embodiment of the execution of instructions for starting a DSX. As will be detailed herein, this instruction is referred to as "YBEGIN" and is used to indicate the beginning of a DSX region. Of course, the instruction can be called another name. In some embodiments, one or more hardware cores of a hardware device, such as a central processing unit (CPU), a graphics processing unit (GPU), an acceleration processing unit (APU), a digital signal processor (DSP) carried out. In other embodiments, the execution of the instruction is a simulation.At 601, the YBEGIN instruction is received / retrieved. For example, instructions are fetched from memory into the instruction cache or fetched from the instruction cache. The fetched instruction can take one of several forms detailed below.Figure 7 shows some exemplary embodiments of the YBEGIN instruction format. In an embodiment, the YBEGIN instruction includes an opcode (YBEGIN) and a single operand for providing a displacement for a back-off address where the program execution should jump to handle erroneous speculation, such as 701 Show. In essence, the displacement value is part of the fallback address. In some embodiments, the displacement value is provided as an immediate operand. In other embodiments, the displacement value is stored in a register or memory location operand. Depending on the YBEGIN implementation, use the implied operand of the DSX status register, nested count register, and / or RTM status register. 
As detailed previously, the DSX status register may be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on.In another embodiment, the YBEGIN instruction includes not only the opcode and the displacement operand, but also an explicit operand for the DSX state, such as a DSX status register, as shown at 703. Depending on the YBEGIN implementation, use the nested operand of the nested count register and / or RTM status register. As detailed previously, the DSX status register may be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on.In another embodiment, the YBEGIN instruction includes not only the opcode and the shift operand, but also an explicit operand for the DSX nested count, such as the DSX nested count register, as shown at 705. As detailed previously, the DSX nested count can be a dedicated register, not a flag in a DSX nested count register, such as an overall status register. Depending on the YBEGIN implementation, use the implied operand of the DSX status register and / or RTM status register. As detailed previously, the DSX status register may be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on.In another embodiment, the YBEGIN instruction includes not only opcode and shift operands, but also explicit operands for the DSX state such as the DSX status register and explicit operands for the DSX nesting count such as the DSX nesting count register The type operand, as shown in 707. As detailed previously, the DSX status register may be a dedicated register, a flag not dedicated to DSX status registers (such as an overall status register like a flag register, etc.), and the DSX nesting counter may be a dedicated register that is not dedicated to DSX Nested count register, such as the overall status register. Depending on the YBEGIN implementation, use the implied operand of the RTM status register. As detailed previously, the DSX status register may be a dedicated register, a flag not dedicated to registers in the DSX state (such as the overall status register like flag registers, etc.).In another embodiment, the YBEGIN instruction includes not only opcode and shift operands, but also explicit operands for the DSX state, such as the DSX status register, significant numbers for the DSX nesting count, such as the DSX nested count register Explicit operands and explicit operands for the RTM state, as shown at 709. As detailed previously, the DSX status register may be a dedicated register, a flag not dedicated to DSX status registers (such as an overall status register like a flag register, etc.), and the DSX nesting counter may be a dedicated register that is not dedicated to DSX Nested count register, such as the overall status register.Of course, other variations of YBEGIN are possible. For example, instead of providing a shift value, the instruction includes the fallback address itself in an immediate, register, or memory location.Returning to FIG. 6, the fetched / received YBEGIN instruction is decoded at 603. In some embodiments, the instructions are decoded by hardware decoders such as those described in detail below. In some embodiments, the instruction is decoded as a micro op (micro op). For example, some CISC-based machines typically use micro-operations derived from macros. 
In other embodiments, decoding is performed by a software routine such as a just-in-time compiler.
At 605, any operands associated with the decoded instruction are retrieved. For example, data is retrieved from one or more of the DSX status register, the DSX nesting count register, and/or the RTM status register.
The decoded YBEGIN instruction is executed at 607. In embodiments in which instructions are decoded into micro-operations, these micro-operations are performed. Execution of the decoded instruction causes the hardware to perform one or more of the following actions: 1) determine that an RTM transaction is active and continue that transaction; 2) calculate a fallback address using a displacement value that is added to the instruction pointer of the YBEGIN instruction; 3) increment the DSX nesting count; 4) abort; 5) set the DSX status to active; and/or 6) reset the DSX tracking hardware.
Typically, upon an instance of a YBEGIN instruction, if there is no active RTM transaction, the DSX status is set to active, the DSX nesting count is incremented (if the count is less than the maximum), the DSX tracking hardware is reset (e.g., as detailed above), and the displacement value is used to calculate the fallback address, thereby starting the DSX region. As detailed previously, the status of the DSX is typically stored in an accessible location such as a register, for example the DSX status and control register (DSXSR) discussed above with reference to FIG. 1; however, other means, such as a DSX status flag in a non-dedicated control/status register (such as a FLAGS register), may be utilized. This register can be checked by the core's hardware to determine whether a DSX is actually in progress. The reset of the DSX tracking hardware was also described earlier.
If for some reason the DSX cannot start, one or more of the other possible actions occurs. For example, in some embodiments of a processor that supports RTM, if an RTM transaction is active, the DSX should not become active and execution continues with the RTM transaction. If there is an error with the DSX setup at the start (for example, an incorrect nesting count), an abort occurs. In addition, in some embodiments, if there is no DSX, an error is generated and a no-operation (NOP) is performed. Regardless of which action is performed, in most embodiments the DSX status is reset afterwards (if it was set) to indicate that there is no pending DSX.
Figure 8 shows a detailed embodiment of the execution of an instruction such as a YBEGIN instruction; for example, in some embodiments, this flow is block 607 of FIG. 6. In some embodiments, the execution is carried out by one or more hardware cores of a hardware device, such as a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), or a digital signal processor (DSP). In other embodiments, the execution of the instruction is a simulation.
In some embodiments, such as in a processor supporting RTM transactions, a determination is made at 801 as to whether an RTM transaction is occurring. For example, in some embodiments of a processor that supports RTM, a DSX should not be active if an RTM transaction is active; in that instance, there is an error in the RTM transaction and its closing routine should be activated.
The RTM transaction status is typically stored in registers such as RTM control and status registers. The processor's hardware evaluates the contents of this register to determine if RTM transactions are occurring. RTM transactions continue processing at 803 when an RTM transaction is taking place.When no RTM transaction is taking place or RTM is not supported, a determination is made at 805 as to whether the current DSX nested count is less than the maximum nested count. In some embodiments, the YBEGIN instruction provides a nested count register for storing the current nested count as an operand. Alternatively, there may be a dedicated nesting counter register in the hardware for storing the current nesting count. The maximum nested count is the maximum number of DSX starts (eg, via a YBEGIN instruction) that can occur without having to correspond to the end of the DSX (eg, via the YEND instruction).Abort occurs at 807 when the current DSX nested count is greater than the maximum. In some embodiments, the aborting is triggered by using a recovery circuit such as the DSX recovery circuit 135. In other embodiments, the YABORT instruction is executed as detailed below, which not only performs a rollback to the rollback address, but discards the presumedly stored writes and resets the current nested count and sets the DSX status inactive . As detailed above, the DSX status is typically stored in a control register, such as the DSX Status and Control Register (DSXSR) shown in FIG. 1. However, other means such as DSX status flags in non-dedicated control / status registers (such as FLAGS registers) may be utilized.When the current nested count is not greater than the maximum value, the current DSX nested count is incremented at 809.At 811, a determination is made as to whether the current DSX nested count is equal to one. When equal to one, in some embodiments, the back-off address is calculated at 813 by adding the offset value provided by the YBEGIN instruction to the address of the instruction following the YBEGIN instruction. In embodiments where the YBEGIN instruction provides a back-off address, then this calculation is not necessary.At 815, the DSX status is set to active (if needed) and the DSX tracking hardware is reset (eg, as detailed above). For example, as previously detailed, the status of the DSX is typically stored in accessible locations such as registers, such as the DSX Status and Control Register (DSXSR) discussed above with reference to FIG. 1. However, other means such as DSX status flags in non-dedicated control / status registers (such as FLAGS registers) may be utilized. This register can be checked by the core's hardware to determine if DSX actually happened.FIG. 9 shows an example of pseudo code showing execution of an instruction such as a YBEGIN instruction.YBEGIN WITH STRIDE instructionFigure 10 shows an embodiment of the execution of instructions for starting a DSX. As will be detailed herein, this instruction is referred to as "YBEGIN WITH STRIDE" and is used to indicate the beginning of a DSX area. Of course, the instruction can be called another name. In some embodiments, one or more hardware cores of a hardware device, such as a central processing unit (CPU), a graphics processing unit (GPU), an acceleration processing unit (APU), a digital signal processor (DSP) carried out. In other embodiments, the execution of the instruction is a simulation.At 1001, the YBEGIN WITH STRIDE instruction is received / retrieved. 
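As a complement to the pseudocode referenced in FIG. 9, the Figure 8 flow for YBEGIN just described can be sketched at a C level roughly as follows. The state struct, the assumed maximum nesting count, and the stubbed helpers are illustrative assumptions and not the architectural interface.

#include <stdbool.h>
#include <stdint.h>

#define DSX_MAX_NEST 15u          /* assumed maximum nesting count */

/* Assumed model of the relevant machine state (cf. DSXSR, nesting count). */
typedef struct {
    bool     rtm_active;          /* RTM transaction status (if RTM is supported) */
    bool     dsx_active;          /* DSX status                                   */
    uint32_t dsx_nest_count;      /* current DSX nesting count                    */
    uint64_t fallback_ip;         /* address used on misspeculation               */
} machine_state_t;

static void continue_rtm_transaction(machine_state_t *m) { (void)m; }  /* 803 */
static void dsx_abort(machine_state_t *m)                { (void)m; }  /* 807 */
static void reset_dsx_tracking_hardware(void)            { }           /* 815 */

/* Figure 8 flow for YBEGIN. next_ip is the address of the instruction
 * following YBEGIN; disp is the displacement operand. */
static void execute_ybegin(machine_state_t *m, uint64_t next_ip, int64_t disp)
{
    if (m->rtm_active) {                           /* 801: RTM in progress?    */
        continue_rtm_transaction(m);               /* 803                      */
        return;
    }
    if (m->dsx_nest_count >= DSX_MAX_NEST) {       /* 805: nesting count check */
        dsx_abort(m);                              /* 807                      */
        return;
    }
    m->dsx_nest_count++;                           /* 809                      */
    if (m->dsx_nest_count == 1)                    /* 811                      */
        m->fallback_ip = next_ip + (uint64_t)disp; /* 813: fallback address    */

    m->dsx_active = true;                          /* 815: set status active,  */
    reset_dsx_tracking_hardware();                 /*      reset tracking hw   */
}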
For example, instructions are fetched from memory into the instruction cache or fetched from the instruction cache. The fetched instruction can take one of several forms detailed below.Figure 11 shows some exemplary embodiments of the YBEGIN WITH STRIDE instruction format. In an embodiment, the YBEGIN WITH STRIDE instruction includes an operation code (YBEGIN WITH STRIDE) and an operand for providing a displacement for a back-off address (a back-off address is where the program execution should jump to handle erroneous speculation) And the span value operand, as shown at 1101. In essence, the displacement is part of the fallback address. In some embodiments, the displacement is provided as an immediate operand. In other embodiments, the displacement value is stored in a register or memory location operand. In some embodiments, the span is provided as an immediate operand. In other embodiments, the span is stored in a register or memory location operand. Depending on the YBEGIN WITH STRIDE implementation, use the implied operand of the DSX status register, nested count register, and / or RTM status register.In another embodiment, the YBEGIN WITH STRIDE instruction includes not only the opcode and displacement operands and the stride value operands, but also explicit operands for the DSX state, such as the DSX status register, as shown at 1103. In some embodiments, the displacement is provided as an immediate operand. In other embodiments, the displacement value is stored in a register or memory location operand. In some embodiments, the span is provided as an immediate operand. In other embodiments, the span is stored in a register or memory location operand. As detailed previously, the DSX status register may be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on. Depending on the YBEGIN WITH STRIDE implementation, use the nested operand of the nested count register and / or the RTM status register.In another embodiment, the YBEGIN WITH STRIDE instruction includes not only opcode and displacement operands and stride value operands, and stride value operands, but also explicit operands for DSX nested counts, such as DSX nested counts Register, as shown at 1105. In some embodiments, the displacement is provided as an immediate operand. In other embodiments, the displacement value is stored in a register or memory location operand. In some embodiments, the span is provided as an immediate operand. In other embodiments, the span is stored in a register or memory location operand. As detailed previously, the DSX nested count can be a dedicated register, not a flag in a DSX nested count register, such as an overall status register. Depending on the YBEGIN WITH STRIDE implementation, use the implied operand of the DSX status register and / or the RTM status register.In another embodiment, the YBEGIN WITH STRIDE instruction includes not only opcodes, shift operands, and stride value operands, but also explicit operands for the DSX state, such as the DSX status register, and operand such as the DSX nested count register The explicit operand at the DSX nested count, as shown at 1107. In some embodiments, the displacement is provided as an immediate operand. In other embodiments, the displacement value is stored in a register or memory location operand. In some embodiments, the span is provided as an immediate operand. In other embodiments, the span is stored in a register or memory location operand. 
As detailed previously, the DSX status register may be a dedicated register, a flag not dedicated to DSX status registers (such as an overall status register like a flag register, etc.), and the DSX nesting counter may be a dedicated register that is not dedicated to DSX Nested count register, such as the overall status register. Depending on the YBEGIN WITHSTRIDE implementation, use the implied operand of the RTM status register.In another embodiment, the YBEGIN WITH STRIDE instruction includes not only opcodes, shift operands, and stride value operands, but also explicit operands for the DSX state, such as the DSX status register, and operand such as the DSX nested count register The explicit operand counts on the DSX nested count as well as the RTM status register, as shown at 409. In some embodiments, the displacement is provided as an immediate operand. In other embodiments, the displacement value is stored in a register or memory location operand. In some embodiments, the span is provided as an immediate operand. In other embodiments, the span is stored in a register or memory location operand. As detailed previously, the DSX status register may be a dedicated register, a flag not dedicated to DSX status registers (such as an overall status register like a flag register, etc.), and the DSX nesting counter may be a dedicated register that is not dedicated to DSX Nested count register, such as the overall status register.Of course, other variations of YBEGIN WITH STRIDE are possible. For example, instead of providing a shift value, the instruction includes a fallback address that is itself in an immediate, register, or memory location.Returning to FIG. 10, the YBEGIN WITH STRIDE instruction fetched / received is decoded at 1003. In some embodiments, the instructions are decoded by hardware decoders such as those described in detail below. In some embodiments, the instruction is decoded as a micro op (micro op). For example, some CISC-based machines typically use micro-operations derived from macros. In other embodiments, decoding is part of a software routine such as compiled in time.At 1005, any operand associated with the decoded YBEGIN WITH STRIDE instruction is retrieved. For example, retrieve data from one or more of the DSX register, the DSX nested count register, and / or the RTM status register.The decoded YBEGIN WITH STRIDE instruction is executed at 1007. In embodiments in which instructions are decoded as micro-operations, these micro-operations are performed. Execution of the decoded instructions causes the hardware to perform one or more of the following actions: 1) determine that the RTM transaction is active and begin the transaction; 2) use an instruction pointer appended to the YBEGIN WITH STRIDE instruction Shift value to calculate the fallback address; 3) increment DSX nesting count; 4) abort; 5) set DSX status to active; 6) reset DSX trace hardware and / or 7) provide span value to DSX hardware trace Device.Typically, once a first instance of a YBEGIN WITH STRIDE instruction occurs, the DSX status is set to active if there is no active RTM transaction, the DSX tracking hardware is reset (eg, using the provided span value as detailed above) , And uses the shift value to calculate the fallback address to start the DSX area. As detailed previously, the state of the DSX is typically stored in accessible locations such as registers, such as the DSX Status and Control Register (DSXSR) discussed above with reference to FIG. 1. 
However, other means such as DSX status flags in non-dedicated control / status registers (such as FLAGS registers) may be utilized. Earlier we also described the reset of the DSX tracking hardware.Typically, once an instance of a YBEGIN WITH STRIDE instruction is present, the DSX status is set to active if there is no active RTM transaction, the DSX nested count is incremented (if the count is less than the maximum), the DSX trace hardware is reset (for example, As provided above using the provided span), and use the displacement value to calculate the fallback address to start the DSX region. As detailed previously, the state of the DSX is typically stored in accessible locations such as registers, such as the DSX Status and Control Register (DSXSR) discussed above with reference to FIG. 1. However, other means such as DSX status flags in non-dedicated control / status registers (such as FLAGS registers) may be utilized. Earlier we also described the reset of the DSX tracking hardware. As detailed previously, the state of the DSX is typically stored in accessible locations such as registers, such as the DSX Status and Control Register (DSXSR) discussed above with reference to FIG. 1. However, other means such as DSX status flags in non-dedicated control / status registers (such as FLAGS registers) may be utilized. This register can be checked by the core's hardware to determine if DSX actually happened.If there are some reasons why the DSX can not start, one or more of the other possible actions occur. For example, in some embodiments of processors that support RTM, if the RTM transaction is active, DSX should not be active initially and continue with RTM. If there is an error with the DSX setup at the beginning (incorrect nesting count), the abort will occur. In addition, in some embodiments, if there is no DSX, an error is generated and no operation (NOP) is performed. Regardless of which action is performed, in most embodiments, the DSX status is reset after this action (if it is set) to indicate that there is no pending DSX.Figure 12 shows a detailed implementation of the execution of instructions, such as a YBEGIN WITH STRIDE instruction. For example, in some embodiments, this flow is block 1007 of FIG. In some embodiments, one or more hardware cores of a hardware device, such as a central processing unit (CPU), a graphics processing unit (GPU), an acceleration processing unit (APU), a digital signal processor (DSP) carried out. In other embodiments, the execution of the instruction is a simulation.In some embodiments, for example, in a processor supporting an RTM transaction, a determination is made at 1201 as to whether an RTM transaction is occurring. For example, in some embodiments of a processor that supports RTM, DSX should not be active initially if the RTM transaction is active. In this instance, there is an error in the RTM transaction and it should have its end program activated. The RTM transaction status is typically stored in registers such as RTM control and status registers. The processor's hardware evaluates the contents of this register to determine if RTM transactions are occurring. RTM transactions continue processing at 1203 when an RTM transaction is taking place.When no RTM transaction is occurring or when the RTM is not supported, a determination is made at 1205 as to whether the current DSX nested count is less than the maximum nested count. 
In some embodiments, the YBEGIN WITH STRIDE instruction provides, as an operand, a nesting count register for storing the current nesting count. Alternatively, there may be a dedicated nesting count register in the hardware for storing the current nesting count. The maximum nesting count is the maximum number of DSX starts (e.g., via a YBEGIN instruction) that can occur without a corresponding DSX end (e.g., via a YEND instruction).
An abort occurs at 1207 when the current nesting count is greater than the maximum. In some embodiments, the abort triggers a rollback. In other embodiments, YABORT is executed as detailed below, which not only performs a rollback to the fallback address, but also discards the speculatively stored writes, resets the current nesting count, and sets the DSX status inactive. As detailed above, the DSX status is typically stored in a control register, such as the DSX status and control register (DSXSR) shown in FIG. 1; however, other means, such as a DSX status flag in a non-dedicated control/status register (such as a FLAGS register), may be utilized.
When the current nesting count is not greater than the maximum, the current DSX nesting count is incremented at 1209. A determination is made at 1211 as to whether the current DSX nesting count is equal to one. When it is equal to one, in some embodiments the fallback address is calculated at 1213 by adding the displacement value provided by the YBEGIN WITH STRIDE instruction to the address of the instruction following the YBEGIN WITH STRIDE instruction. In embodiments in which the YBEGIN WITH STRIDE instruction provides the fallback address itself, this calculation is not necessary.
At 1215, the DSX status is set to active (if it is not already) and the DSX tracking hardware is reset (e.g., including using the provided stride value as detailed above). For example, as previously detailed, the status of the DSX is typically stored in an accessible location such as a register, for example the DSX status and control register (DSXSR) discussed above with reference to FIG. 1; however, other means, such as a DSX status flag in a non-dedicated control/status register (such as a FLAGS register), may be utilized. This register can be checked by the core's hardware to determine whether a DSX is actually in progress.
YCONTINUE instruction
When a DSX reaches its end without any problems (e.g., the iteration of the loop has run its course), in some embodiments an instruction (YEND) is executed to indicate the end of the speculative region, as detailed below. In short, execution of this instruction causes the current speculative state (all writes that have not yet been committed) to be committed and the current speculative region to be exited, as discussed below. Another iteration of the loop can then be started by executing another YBEGIN.
However, in some embodiments, this YBEGIN, YEND, YBEGIN, etc. pattern is optimized by using a continue instruction that commits the current loop iteration when speculation is no longer needed (e.g., when there is no conflict between stores). The continue instruction also begins a new speculative loop iteration without the need to execute another YBEGIN; a usage sketch of this pattern is shown below.
Figure 13 illustrates an embodiment of the execution of an instruction for continuing a DSX without ending it. As will be detailed herein, this instruction is referred to as "YCONTINUE" and is used to commit the current speculative loop iteration without ending the DSX.
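To make the YBEGIN / YCONTINUE / YEND pattern concrete, the following C sketch shows how a speculatively vectorizable loop might use these instructions. The intrinsic-style wrappers (_ybegin, _ycontinue, _yend) and the YBEGIN_STARTED return convention are hypothetical illustrations of the instruction semantics described here, loosely modeled on the style of RTM intrinsics; they are not an existing compiler interface.

#include <stddef.h>

#define YBEGIN_STARTED (~0u)   /* assumed "speculation entered" return value */

/* Hypothetical wrappers around the DSX instructions, shown only to
 * illustrate the intended control flow. */
extern unsigned _ybegin(void);    /* YBEGIN: returns YBEGIN_STARTED, or resumes
                                     here with another value after misspeculation */
extern void     _ycontinue(void); /* YCONTINUE: commit iteration, start next      */
extern void     _yend(void);      /* YEND: commit and end the DSX region          */

/* Speculatively vectorizable scatter: idx[] may contain duplicates, which is
 * exactly the cross-iteration dependency the DSX hardware checks for. */
void scatter_scaled(float *dst, const int *idx, const float *src, size_t n)
{
    if (_ybegin() == YBEGIN_STARTED) {
        for (size_t i = 0; i < n; i++) {
            dst[idx[i]] = 2.0f * src[i];   /* speculative store, tracked by DSX   */
            _ycontinue();                  /* no conflict so far: commit this
                                              iteration and begin the next one    */
        }
        _yend();                           /* commit the final iteration, exit DSX */
    } else {
        /* Fallback path reached via the YBEGIN displacement after a
         * misspeculation: redo the loop without speculation. The store is
         * idempotent, so simply re-running the whole loop is safe here. */
        for (size_t i = 0; i < n; i++)
            dst[idx[i]] = 2.0f * src[i];
    }
}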
Of course, the instruction can be called another name.In some embodiments, one or more hardware cores of a hardware device, such as a central processing unit (CPU), a graphics processing unit (GPU), an acceleration processing unit (APU), a digital signal processor (DSP) carried out. In other embodiments, the execution of the instruction is a simulation.At 1301, the YCONTINUE instruction is received / retrieved. For example, instructions are fetched from memory into the instruction cache or fetched from the instruction cache. The removed instruction can take one of several forms.Figure 14 shows some exemplary embodiments of the YCONTINUE instruction format. In an embodiment, the YCONTINUE instruction includes an opcode (YCONTINUE), but no explicit operand, as shown at 1401. Depending on the implementation of YCONTINUE, the implied operand of the DSX status register and the nested count register. As previously detailed, the DSX nested count can be a dedicated register, the flags in the register are not dedicated to the DSX nested count (such as the overall status register), and so on. In addition, the DSX status register can be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on.In another embodiment, the YCONTINUE instruction includes not only the opcode, but also an explicit operand for the DSX state, such as a DSX status register, as shown at 1403. Depending on the implementation of YCONTINUE, the implied operand of the nested count register is used. As detailed previously, the DSX nested count can be a dedicated register, a flag not dedicated to registers in the DSX nested count (such as an overall status register), and so on. In addition, the DSX status register can be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on.In another embodiment, the YCONTINUE instruction includes not only the opcode, but also an explicit operand for the DSX nested count, such as a DSX nested count register, as shown at 1405. Depending on the YCONTINUE implementation, use the implied operand of the DSX status register. As detailed previously, the DSX nested count can be a dedicated register, a flag not dedicated to registers in the DSX nested count (such as an overall status register), and so on. In addition, the DSX status register can be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on.In another embodiment, the YCONTINUE instruction includes not only the opcode, but also explicit operands for the DSX state, such as the DSX status register, and explicit operands for the DSX nested count, such as the DSX nested count register, As 1407 shows. As detailed previously, the DSX nested count can be a dedicated register, a flag not dedicated to registers in the DSX nested count (such as an overall status register), and so on. In addition, the DSX status register can be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on.Returning to FIG. 13, the fetched / received YCONTINUE instruction is decoded at 1303. In some embodiments, the instructions are decoded by hardware decoders such as those described in detail below. In some embodiments, the instruction is decoded as a micro op (micro op). 
For example, some CISC-based machines typically use micro-operations derived from macros. In other embodiments, decoding is performed by a software routine such as a just-in-time compiler.
At 1305, any operands associated with the decoded YCONTINUE instruction are retrieved. For example, data is retrieved from one or more of the DSX status register and the DSX nesting count register.
The decoded YCONTINUE instruction is executed at 1307. In embodiments in which instructions are decoded into micro-operations, these micro-operations are performed. Execution of the decoded instruction causes the hardware to perform one or more of the following actions: 1) determine that the speculative writes associated with the DSX are to be committed because speculation is no longer needed, commit them, and start a new speculative loop iteration (e.g., a new DSX region); and/or 2) do nothing.
The first of these actions can be performed using the DSX tracking hardware detailed earlier (the speculative writes are finalized and a new speculative loop iteration is started). In this action, all speculative writes associated with the loop iteration of the DSX are committed (stored so that they are accessible outside of the DSX), but, unlike with the YEND instruction, the DSX status is not set to indicate that no DSX exists. For example, all writes associated with the DSX (such as those held in a cache, register, or memory) are committed so that they are finalized and visible outside the DSX. Typically, the DSX commit does not happen unless the DSX nesting count is one; otherwise, in some embodiments, a nop is executed. In some embodiments, a nop may also be executed if the DSX is not active.
Figure 15 shows a detailed embodiment of the execution of an instruction such as a YCONTINUE instruction. For example, in some embodiments, this flow is block 1307 of FIG. 13. In some embodiments, the execution is carried out by one or more hardware cores of a hardware device, such as a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), or a digital signal processor (DSP). In other embodiments, the execution of the instruction is a simulation.
A determination is made at 1501 as to whether a DSX is active. As detailed above, the DSX status is typically stored in a control register, such as the DSX status and control register (DSXSR) shown in FIG. 1; however, other means, such as a DSX status flag in a non-dedicated control/status register (such as a FLAGS register), may be utilized. Wherever the status is stored, that location is checked by the processor's hardware to determine whether a DSX is actually in progress.
When no DSX is active, a nop is performed at 1503. When a DSX is active, a determination is made at 1505 as to whether the DSX nesting count is equal to one. As detailed above, the DSX nesting count is usually stored in a nesting count register. When the DSX nesting count is not one, a nop is executed at 1507. When the DSX nesting count is one, the commit and DSX restart are performed at 1509. When the commit and DSX restart occur, in some embodiments one or more of the following actions occur: 1) the DSX tracking hardware is reset (e.g., as detailed above), 2) a fallback address is calculated, and 3) the writes that were speculatively executed in the preceding speculative region are committed.
FIG. 16 shows an example of pseudocode illustrating the execution of an instruction such as a YCONTINUE instruction.
YABORT instruction
Sometimes a problem arises within a DSX (such as misspeculation) that requires the DSX to be aborted.
Figure 17 shows an embodiment of the execution of instructions for aborting DSX. As will be detailed in this article, this instruction is called "YABORT". Of course, the instruction can be called another name. In some embodiments, one or more hardware cores of a hardware device, such as a central processing unit (CPU), a graphics processing unit (GPU), an acceleration processing unit (APU), a digital signal processor (DSP) carried out. In other embodiments, the execution of the instruction is a simulation.At 1701, the YABORT instruction is received / retrieved. For example, instructions are fetched from memory into the instruction cache or fetched from the instruction cache. The fetched instruction can take one of several forms detailed below.Figure 18 shows some example embodiments of the YABORT instruction format. In an embodiment, the YABORT instruction only includes an opcode (YABORT), as shown at 1801. Depending on the YABORT implementation, use the implied operand of the DSX status register and / or the RTM status register. As detailed previously, the DSX status register may be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on.In another embodiment, the YABORT instruction includes not only the opcode, but also explicit operands of the DSX status register, such as the DSX status register, as shown at 1803. As detailed previously, the DSX status register may be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on. Depending on the YABORT implementation, use the implied operand of the RTM status register.In another embodiment, the YABORT instruction includes not only the opcode but also the explicit operand of the DSX status register such as the DSX status register and the display operand of the RTM status register as shown at 1805. As detailed previously, the DSX status register may be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on.Returning to FIG. 17, the fetched / received YABORT instruction is decoded at 1703. In some embodiments, the instructions are decoded by hardware decoders such as those described in detail below. In some embodiments, the instruction is decoded as a micro op (micro op). For example, some CISC-based machines typically use micro-operations derived from macros. In other embodiments, decoding is part of a software routine such as compiled in time.At 1705, any operand associated with the decoded YABORT instruction is retrieved. For example, retrieve data from one or more of the DSX registers and / or RTM status registers.The decoded YABORT instruction is executed at 1707. In embodiments in which instructions are decoded as micro-operations, these micro-operations are performed. Execution of the decoded instructions causes the hardware to do one or more of the following: 1) determine that the RTM transaction is active and terminate the RTM transaction; 2) determine that the DSX is inactive and perform no operations; and / Or 3) Abort DSX by resetting any DSX nested count, discarding all writes that were speculatively performed, setting the DSX status inactive, and rolling back the execution to the fallback address.For the first action, the RTM status is usually stored in the RTM Status and Control Register. 
When this register indicates that an RTM transaction is occurring, the YABORT instruction should not be executed. As a result, there is a problem with the RTM transaction and it should stop.Regarding the second and third actions, as previously detailed, the status of the DSX is typically stored in accessible locations such as registers, such as the DSX Status and Control Register (DSXSR) discussed above with reference to FIG. 1. However, other means such as DSX status flags in non-dedicated control / status registers (such as FLAGS registers) may be utilized. This register can be checked by the core's hardware to determine if DSX actually happened. When this register indicates no DSX, then there will be no reason to execute the YABORT instruction, and hence no operation (or a similar operation) will be performed. When this register indicates a DSX, DSX abort processing occurs, including resetting the DSX trace hardware, discarding all stored presumptively written writes, and resetting the DSX state to inactive and rollback execution.Figure 19 shows a detailed embodiment of the execution of instructions, such as a YABORT instruction. For example, in some embodiments, this flow is block 1707 of FIG. 17. In some embodiments, one or more hardware cores of a hardware device, such as a central processing unit (CPU), a graphics processing unit (GPU), an acceleration processing unit (APU), a digital signal processor (DSP) carried out. In other embodiments, the execution of the instruction is a simulation.In some embodiments, such as in a processor supporting an RTM transaction, a determination is made at 1901 as to whether an RTM transaction is occurring. For example, in some embodiments of a processor that supports RTM, DSX should not be active initially if the RTM transaction is active. In this instance, there is an error in the RTM transaction and it should have its end program activated. The RTM transaction status is typically stored in registers such as RTM control and status registers. The processor's hardware evaluates the contents of this register to determine if RTM transactions are occurring. When an RTM transaction is taking place, the RTM transaction continues.When no RTM transaction occurs, or RTM is not supported, a determination is made at 1905 as to whether the DSX is active or not. The status of the DSX is typically stored in accessible locations such as the DSX Status and Control Register (DSXSR) discussed above with reference to FIG. 1. However, other means such as DSX status flags in non-dedicated control / status registers (such as FLAGS registers) may be utilized. This register can be checked by the core's hardware to determine if DSX has occurred.When this register indicates no DSX, nop is executed at 1907. When this register indicates a DSX, DSX abort processing occurs at 1909, including resetting the DSX trace hardware, discarding all stored presumptively written writes, and resetting the DSX state to inactive and rollback execution .FIG. 20 shows an example of pseudocode showing the execution of an instruction such as a YABORT instruction.YTEST instructionThe software needs to know if DSX is active before starting a new DSX speculation zone. Figure 21 shows an embodiment of the execution of instructions for testing the status of a DSX. As will be detailed herein, this instruction is referred to as "YTEST" and is used to provide an indication of DSX activity through the use of the logo. 
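A small usage sketch of the check that YTEST enables is given below. The wrappers _ytest_dsx_active (which would execute YTEST and report DSX activity via the flag described later in this section) and _ybegin_region are hypothetical illustrations, not an existing API.

#include <stdbool.h>

/* Hypothetical wrappers around YTEST and YBEGIN. */
extern bool _ytest_dsx_active(void);   /* true if a DSX is currently active */
extern void _ybegin_region(void);      /* start a new DSX region            */

/* Only open a new speculative region when no DSX is already active. */
void maybe_start_speculative_region(void)
{
    if (!_ytest_dsx_active())   /* YTEST indicates no DSX in progress */
        _ybegin_region();       /* safe to begin a new DSX region     */
    /* otherwise: already inside a DSX; do not start another region here */
}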
Of course, the instruction can be called another name.In some embodiments, one or more hardware cores of a hardware device, such as a central processing unit (CPU), a graphics processing unit (GPU), an acceleration processing unit (APU), a digital signal processor (DSP) carried out. In other embodiments, the execution of the instruction is a simulation.At 2101, the YTEST instruction is received / retrieved. For example, instructions are fetched from memory into the instruction cache or fetched from the instruction cache. The removed instruction can take one of several forms. Figure 22 shows some exemplary embodiments of the YTEST instruction format. In an embodiment, the YTEST instruction includes an opcode (YTEST), but no explicit operand, as shown at 2201. Implied operand using DSX status register and flag register. As detailed previously, the DSX status register may be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on. Exemplary flag registers include the EFLAGS register. In particular, the flag register is used to store the zero flag (ZF).In another embodiment, the YTEST instruction includes not only the opcode, but also an explicit operand for the DSX state, such as a DSX status register, as shown at 2203. As detailed previously, the DSX status register may be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on. Implied operand using flags register. Exemplary flag registers include the EFLAGS register. In particular, the flag register is used to store the zero flag (ZF).In another embodiment, the YTEST instruction includes not only the opcode, but also the explicit operand of the flag register, as shown at 2205. Exemplary flag registers include the EFLAGS register. In particular, the flag register is used to store the zero flag (ZF). Implied operand using DSX status register. As detailed previously, the DSX status register may be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on.In another embodiment, the YTEST instruction includes not only the opcode, but also an explicit operand such as a DSX status register for the DSX state and an operand of the flag register, as shown at 2207. As detailed previously, the DSX status register may be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on. Implied operand using flags register. Exemplary flag registers include the EFLAGS register. In particular, the flag register is used to store the zero flag (ZF).Returning to FIG. 21, the fetched / received YTEST instruction is decoded at 2103. In some embodiments, the instructions are decoded by hardware decoders such as those described in detail below. In some embodiments, the instruction is decoded as a micro op (micro op). For example, some CISC-based machines typically use micro-operations derived from macros. In other embodiments, decoding is part of a software routine such as compiled in time.At 2105, any operand associated with the decoded YTEST instruction is retrieved. For example, retrieve data from the DSX status register.The decoded YTEST instruction is executed at 2107. In embodiments in which instructions are decoded as micro-operations, these micro-operations are performed. 
Execution of the decoded instructions causes the hardware to perform one or more of the following actions: 1) determining that the DSX status register indicates that the DSX is active, and if so, setting the zero flag in the flag register to 0; Or 2) Determine if the DSX status register indicates DSX is inactive, and if so, set the zero flag in the flag register to 1. Of course, although the zero flag is used to show the DSX activity status, other flags are used depending on the embodiment.FIG. 23 shows an example of pseudo code showing execution of an instruction such as a YTEST instruction.YEND instructionAs the DSX ends (eg, the iteration of the loop has run its process) without any problems, in some embodiments, instructions are executed to indicate the end of the speculative region. In short, the execution of this instruction causes the submission of the current speculative state (all writes that have not been written yet) and the exit from the current speculative region.Figure 24 shows an embodiment of the execution of instructions for ending the DSX. As will be detailed herein, this instruction is referred to as "YEND" and is used to indicate the end of the DSX. Of course, the instruction can be called another name.In some embodiments, one or more hardware cores of a hardware device, such as a central processing unit (CPU), a graphics processing unit (GPU), an acceleration processing unit (APU), a digital signal processor (DSP) carried out. In other embodiments, the execution of the instruction is a simulation.At 2401, the YEND instruction is received / removed. For example, instructions are fetched from memory into the instruction cache or fetched from the instruction cache. The removed instruction can take one of several forms. Figure 25 shows some exemplary embodiments of the YEND instruction format. In an embodiment, the YEND instruction includes an opcode (YEND), but no explicit operand, as shown at 2501. Depending on the YEND implementation, the implied register operand for the DSX state, nested count, and / or RTM state is used.In another embodiment, the YEND instruction includes not only the opcode, but also an explicit operand for the DSX state, such as a DSX status register, as shown at 2503. As detailed previously, the DSX status register may be a dedicated register, a flag in a register that is not dedicated to DSX status, such as an overall status register like a flag register, and so on. Depending on the YEND implementation, the implied register operand for nested count and / or RTM state is used.In another embodiment, the YEND instruction includes not only the opcode, but also an explicit operand for the DSX nested count, such as a DSX nested count register, as shown at 2505. As detailed previously, the DSX nested count can be a dedicated register, not a flag in a DSX nested count register, such as an overall status register. Depending on the YEND implementation, implied register operands for DSX status and / or RTM status are used.In another embodiment, the YEND instruction includes not only the opcode but also an explicit operand such as a DSX status register for the DSX state and an explicit operand for the DSX nested count, such as a DSX nested count register, As shown in 2507. 
As detailed previously, the DSX status register may be a dedicated register, a flag not dedicated to DSX status registers (such as an overall status register like a flag register, etc.), and the DSX nesting counter may be a dedicated register that is not dedicated to DSX Nested count register, such as the overall status register. Depending on the YEND implementation, use the implied operand of the RTM status register.In another embodiment, the YEND instruction includes not only opcodes but also explicit operands for the DSX state such as DSX status registers, explicit operands for DSX nested counts such as DSX nested count registers and Explicit operands for the RTM state, as shown at 2509. As detailed previously, the DSX status register may be a dedicated register, a flag not dedicated to registers in the DSX state (such as the overall status register like flag registers, etc.), and the DSX nesting counter may be a special purpose register, a The flag is not dedicated to DSX nesting counts (such as the overall status register).Returning to FIG. 24, the fetched / received YEND instruction is decoded at 2403. In some embodiments, the instructions are decoded by hardware decoders such as those described in detail below. In some embodiments, the instruction is decoded as a micro op (micro op). For example, some CISC-based machines typically use micro-operations derived from macros. In other embodiments, decoding is part of a software routine such as compiled in time.At 2405, any operand associated with the decoded YEND instruction is retrieved. For example, retrieve data from one or more of the DSX register, the DSX nested count register, and / or the RTM status register.The decoded YEND instruction is executed at 2407. In embodiments in which instructions are decoded as micro-operations, these micro-operations are performed. Execution of the decoded instructions causes the hardware to perform one or more of the following actions that are performed: 1) terminating speculative writes associated with the DSX (submitting them); 2) notifying errors (such as general protection faults) And perform no operation; 3) suspend the DSX; and / or 4) end the RTM transaction.The first of these actions (such that the speculative write is completed) causes all speculative writes associated with the DSX to be committed (stored so that they are accessible outside of the DSX) and sets the DSX status in the DSX status register to Indicates that DSX does not exist. For example, all writes associated with the DSX are submitted (such as in caches, registers, or memory) so that they are ended and are visible outside the DSX. Typically, DSX can not be terminated unless the speculative nested count is zero. If the nested count is greater than zero, NOP is performed in some embodiments.If there are some reasons why you can not end the DSX, one or more of the other three possible actions occur. For example, in some embodiments of a processor that supports RTM, DSX should not be active initially if the RTM transaction is active. In this instance, there is an error in the RTM transaction, and its closing routine should be activated, as indicated by the fourth action above.In some embodiments, if there is no DSX, an error is generated and no operation (NOP) is performed. For example, as previously detailed, the status of the DSX is typically stored in accessible locations such as registers, such as the DSX Status and Control Register (DSXSR) discussed above with reference to FIG. 1. 
However, other means such as DSX status flags in non-dedicated control / status registers (such as FLAGS registers) may be utilized. This register can be checked by the core's hardware to determine if DSX actually happened.In some embodiments, the interrupt routine is implemented if there is a failure in committing the transaction. For example, in some embodiments of a processor that supports RTM, an RTM interrupt routine is activated.Regardless of which action is performed, in most embodiments, the DSX status is reset after this action (if it is set) to indicate that there is no pending DSX.Figure 26 shows a detailed embodiment of the execution of instructions, such as the YEND instruction. For example, in some embodiments, this flow is block 2407 of FIG. 24. In some embodiments, one or more hardware cores of a hardware device, such as a central processing unit (CPU), a graphics processing unit (GPU), an acceleration processing unit (APU), a digital signal processor (DSP) carried out. In other embodiments, the execution of the instruction is a simulation.In some embodiments, such as in a processor supporting an RTM transaction, a determination is made at 2601 as to whether an RTM transaction is occurring. For example, in some embodiments of a processor that supports RTM, DSX should not be active initially if the RTM transaction is active. In this instance, there is an error in the RTM transaction and it should have its end program activated. The RTM transaction status is typically stored in registers such as RTM control and status registers. The processor's hardware evaluates the contents of this register to determine if RTM transactions are occurring.When an RTM transaction is taking place, a call is made at 2603 to end this RTM transaction. For example, an instruction to end an RTM transaction is invoked and executed. An example of such an instruction is XEND.When no RTM transaction occurs, a determination is made at 2605 as to whether the DSX is active or not. As detailed above, the DSX status is typically stored in a control register, such as the DSX Status and Control Register (DSXSR) shown in FIG. 1. However, other means such as DSX status flags in non-dedicated control / status registers (such as FLAGS registers) may be utilized. Wherever the state is stored, the location of the processor is checked by the processor's hardware to determine if DSX actually occurred.When no DSX occurs, an error is generated at 2607. For example, generate a general protection error. In addition, in some embodiments, nop is performed.When DSX occurs, decrement the DSX nested count at 2609. For example, decrementing stored DSX nested counts, such as those detailed above, stored in the DSX nested count register.A determination is made at 2611 as to whether the DSX nested count is equal to zero. As detailed above, the DSX nesting count is usually stored in a register. When the DSX nesting count is not zero, NOP is performed in some embodiments. When the DSX nested count is zero, terminate at 2615 and submit the speculative state of the current DSX.At 2617, a determination is made as to whether the submission was successful. For example, is there an error in the storage? If not, the DSX is aborted at 2621. When the commit is successful, a DSX status indication (such as stored in the DSX status and control register) is set at 2619 to indicate there is no active DSX. In some embodiments, setting of this indication occurs after a wrong generation 2607 or a stop 2621 of the DSX.FIG. 
27 shows an example of a pseudo code that shows execution of an instruction such as a YEND instruction.The following discusses embodiments of instruction formats and execution resources for executing the above instructions.The instruction set includes one or more instruction formats. The given instruction format defines various fields (the number of bits, the position of the bits) to specify the operation to be performed (opcode) and the operands on which the operation will be performed, and so on. Some instruction formats are further broken down by the definition of the instruction template (or sub-format). For example, an instruction template for a given instruction format may be defined as a different subset of fields having instruction format fields (the included fields are generally in the same order, but where at least some of the fields have different bits because fewer fields are included) , And / or as having a given field that is interpreted differently. Thus, each instruction of ISA is expressed using a given instruction format (and, if defined, the given instruction template in the instruction template of the instruction format), and includes fields for specifying operations and operands. For example, an exemplary ADD instruction has a specific opcode and instruction format (including an opcode field for specifying the opcode and an operand field for selecting operands (source 1 / destination and source 2)), , And the ADD instruction appearing in the instruction stream will have a specific content in the operand field that selects a particular operand. The SIMD extension set known as Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) encoding scheme has been published and / or published (see, for example, "64 and IA-32 Architecture, October 2011 Software Developer's Handbook "("64 and IA-32 Architectures Software Developers Manual "); and see"Advanced Vector Extensions Programming Reference ", June 2011).Example instruction formatThe embodiments of the instructions described herein may be embodied in different formats. In addition, exemplary systems, architectures, and pipelines are discussed in detail below. Embodiments of the instructions may execute on these systems, architectures, and pipelines, but are not limited to the detailed systems, architectures, and pipelines.Universal vector friendly instruction formatThe vector friendly instruction format is an instruction format suitable for vector instructions (eg, there are specific fields dedicated to vector operations). Although embodiments in which both vector and scalar operations are supported by a vector-friendly instruction format are described, alternative embodiments only use vector operations through a vector-friendly instruction format.28A-28B are block diagrams illustrating a generic vector friendly instruction format and its instruction templates according to various embodiments of the present invention. Figure 28A is a block diagram illustrating a generic vector friendly instruction format and its Class A instruction templates according to embodiments of the invention; and Figure 28B is a block diagram illustrating a generic vector friendly instruction format and its class B instruction templates, according to embodiments of the invention The block diagram. 
Specifically, class A and class B instruction templates are defined for the generic vector friendly instruction format 2800, both of which include no-memory-access 2805 instruction templates and memory-access 2820 instruction templates. The term "generic" in the context of a vector friendly instruction format refers to an instruction format that is not tied to any specific instruction set.
Although embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64-byte vector operand length (or size) with 32-bit (4-byte) or 64-bit (8-byte) data element widths (or sizes) (and thus, a 64-byte vector consists of either 16 doubleword-sized elements or, alternatively, 8 quadword-sized elements); a 64-byte vector operand length (or size) with 16-bit (2-byte) or 8-bit (1-byte) data element widths (or sizes); a 32-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); and a 16-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); alternative embodiments may support larger, smaller, and/or different vector operand sizes (e.g., 256-byte vector operands) with larger, smaller, or different data element widths (e.g., 128-bit (16-byte) data element widths).
The class A instruction templates in FIG. 28A include: 1) within the no-memory-access 2805 instruction templates, a no-memory-access, full round control type operation 2810 instruction template and a no-memory-access, data transform type operation 2815 instruction template; and 2) within the memory-access 2820 instruction templates, a memory-access, temporal 2825 instruction template and a memory-access, non-temporal 2830 instruction template. The class B instruction templates in FIG. 28B include: 1) within the no-memory-access 2805 instruction templates, a no-memory-access, write mask control, partial round control type operation 2812 instruction template and a no-memory-access, write mask control, vsize type operation 2817 instruction template; and 2) within the memory-access 2820 instruction templates, a memory-access, write mask control 2827 instruction template.
The generic vector friendly instruction format 2800 includes the following fields, listed below in the order shown in FIGS. 28A-28B.
Format field 2840 - a particular value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus identifies occurrences of instructions in the vector friendly instruction format in an instruction stream. As such, this field is not needed for an instruction set that has only the generic vector friendly instruction format, and in that sense the field is optional.
Base operation field 2842 - its content distinguishes different base operations.
Register index field 2844 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file.
Although N may be up to three sources and one destination register in one embodiment, alternative embodiments may support more or fewer source and destination registers (for example, up to two sources may be supported where one of these sources A source is also used as a destination, supporting up to three sources, one of which is also used as a destination and can support up to two sources and one destination.The modifier field 2846, whose contents distinguish the instructions appearing in the common vector instruction format that specify memory accesses from the instructions appearing in the common vector instruction format that do not specify memory accesses; that is, in the instruction templates without memory accesses 2805 And the memory access 2820 instruction template. The memory access operation reads and / or writes to the memory hierarchy (in some cases, the value in the register is used to specify the source and / or destination addresses) rather than non-memory access operations (eg, source and / or destination Is the register). Although in one embodiment this field is also selected between three different ways to perform memory address calculations, alternative embodiments may support more, less or different ways of performing memory address calculations.The augmentation operation field 2850, whose content distinguishes which one of various different operations to perform in addition to the base operation. This field is context-sensitive. In one embodiment of the invention, this field is divided into a class field 2868, an alpha field 2852, and a beta field 2854. The augmentation operation field 2850 allows multiple sets of common operations to be performed in a single instruction rather than two, three, or four instructions.The proportion field 2860, whose content allows the content of the index field for memory address generation (eg, for address generation using 2-scale * index + base address), is scaled.The displacement field 2862A - its content is used as part of a memory address generation (eg, for address generation using 2-scale * index + base + shift).The displacement factor field 2862B (note that the displacement field 2862A uses one or the other directly in the collocation indication on the displacement factor field 2862B) - its content is used as part of address generation, which specifies the size (N) to be accessed by memory scaling , Where N is the number of bytes in memory access (eg, address generation for 2-scale * index + base + + scaled displacement). The redundant lower-order bits are ignored and therefore the contents of the displacement factor field are multiplied by the total size of memory operands (N) to generate the final displacement used in calculating the effective address. The value of N is determined by the processor hardware at run time based on the complete opcode field 2874 (described later herein) and the data manipulation field 2854C. The displacement field 2862A and the displacement factor field 2862B are not used for no memory access 2805 instruction templates and / or different embodiments may result in only one or both being unrealized, in the sense that the displacement field 2862A and the displacement factor field 2862B Is optional.The data element width field 2864, whose content distinguishes which of multiple data element widths to use (in some embodiments for all instructions; in other embodiments, only for some of the instructions). 
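The address generation described above, of the form 2^scale * index + base + displacement, where the displacement comes either from the displacement field 2862A directly or from the 8-bit displacement factor field 2862B scaled by the memory access size N, can be sketched as follows. This is an illustrative sketch under those assumptions, not a decoding of real instruction bytes, and the register and field values are made-up examples.

    # Illustrative sketch of 2**scale * index + base + displacement, where the
    # displacement is taken directly (field 2862A) or derived from an 8-bit
    # displacement factor scaled by the memory access size N (field 2862B).
    def sign_extend_byte(value: int) -> int:
        return value - 256 if value >= 128 else value

    def effective_address(base: int, index: int, scale: int, displacement: int) -> int:
        return base + (index << scale) + displacement

    def scaled_displacement(disp_factor_byte: int, access_size_n: int) -> int:
        return sign_extend_byte(disp_factor_byte) * access_size_n

    # Direct displacement: base 0x1000, index 0x10, scale 3 (x8), displacement 0x20.
    assert effective_address(0x1000, 0x10, 3, 0x20) == 0x10A0
    # Displacement factor: one stored byte reaches +/- 128 accesses of size N.
    assert scaled_displacement(0xFF, 64) == -64
    assert effective_address(0x1000, 0x10, 3, scaled_displacement(0xFF, 64)) == 0x1040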
The data element width field 2864 is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.
Write mask field 2870 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and the augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging-writemasking and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, the old value of each element of the destination where the corresponding mask bit has a 0 is preserved. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 2870 allows for partial vector operations, including loads, stores, arithmetic, logical, and so on. While embodiments of the invention are described in which the write mask field's 2870 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 2870 content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the write mask field's 2870 content to directly specify the masking to be performed.
Immediate field 2872 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support an immediate and it is not present in instructions that do not use an immediate.
Class field 2868 - its content distinguishes between different classes of instructions. Referring to FIGS. 28A-B, the contents of this field select between class A and class B instructions. In FIGS. 28A-B, rounded corner squares are used to indicate that a specific value is present in a field (e.g., class A 2868A and class B 2868B for the class field 2868, respectively, in FIGS. 28A-B).
Class A instruction templates
In the case of the non-memory access 2805 instruction templates of class A, the alpha field 2852 is interpreted as an RS field 2852A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 2852A.1 and data transform 2852A.2 are respectively specified for the no memory access, round type operation 2810 and the no memory access, data transform type operation 2815 instruction templates), while the beta field 2854 distinguishes which of the operations of the specified type is to be performed.
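Returning briefly to the merging and zeroing writemasking behaviors described above for the write mask field 2870, the per-element selection can be sketched as follows. This is only an illustrative model with hypothetical names, not a description of the hardware.

    # Illustrative sketch of merging- versus zeroing-writemasking: 'result' holds
    # per-element results of the base/augmentation operation, 'dest' holds the old
    # destination elements, and 'mask' carries one bit per element position.
    def apply_write_mask(result, dest, mask, zeroing):
        out = []
        for i, (new, old) in enumerate(zip(result, dest)):
            if (mask >> i) & 1:
                out.append(new)        # unmasked element receives the new result
            elif zeroing:
                out.append(0)          # zeroing: masked-out element is set to 0
            else:
                out.append(old)        # merging: old destination value is preserved
        return out

    result = [10, 20, 30, 40]
    dest = [1, 2, 3, 4]
    assert apply_write_mask(result, dest, mask=0b0101, zeroing=False) == [10, 2, 30, 4]
    assert apply_write_mask(result, dest, mask=0b0101, zeroing=True) == [10, 0, 30, 0]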
In the instruction template without memory access 2805, the scaling field 2860, the displacement field 2862A, and the displacement proportion field 2862B do not exist.Instruction template without memory access - completely rounding control operationsIn a full round control-free operation 2810 instruction template without memory access, the beta field 2854 is interpreted as a round control field 2854A whose content provides a static rounding operation. Although in the depicted embodiment of the invention, rounding control field 2854A includes all floating point exception (SAE) fields 2856 and rounding operation control fields 2858, alternative embodiments may support encoding these two concepts as same One field, or only one or the other of these concepts / fields (eg, may have only round operation control field 2858).The SAE field 2856 - whose content distinguishes whether or not to disable an exception event report; does not report any kind of floating-point exception flag and does not evoke any floating-point exception handler when the contents of the SAE field 2856 indicate suppression is enabled.The round operation control field 2858, whose content distinguishes which of a set of rounding operations (eg, rounds up, rounds down, rounds to zero, and rounds nearest) is performed. As such, the rounding operation control field 2858 allows the rounding mode to be changed on an instruction-by-instruction basis. In one embodiment of the invention in which the processor includes a control register for specifying a rounding mode, the contents of the round operation control field 2850 take precedence over the register value.Instruction template without memory access - data transformation type operationIn a memory-free data transformation type operation 2815 instruction template, the beta field 2854 is interpreted as a data transformation field 2854B whose content distinguishes which of a number of data transformations (eg, no data transformation, blending, broadcasting) is to be performed.In the case of a Type A memory access 2820 instruction template, the alpha field 2852 is interpreted as an expulsion hint field 2852B whose content distinguishes which of the eviction hints to use (in FIG. 28A, an instruction template for memory access latency 2825 And the memory access non-temporal 2830 instruction templates specify the temporal 2852B.1 and the non-temporal 2852B.2 respectively), and the beta field 2854 is interpreted as the data manipulation field 2854C whose content distinguishes between performing multiple data manipulation operations (Also referred to as primitives) (eg, no manipulation, broadcast, up-conversion of sources, and down-conversion of destinations). The memory access 2820 instruction template includes a ratio field 2860, and optionally a displacement field 2862A or a displacement ratio field 2862B.Vector memory instructions use conversion support to perform vector loading from memory and store vectors to memory. Like ordinary vector instructions, vector memory instructions transfer data back and forth in data elemental fashion to memory, where the elements actually transferred are dictated by the contents of the vector mask chosen as the write mask.Memory Access Instruction Template - TimelinessTemporal data is data that can be quickly reused to benefit from the cache. 
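The temporal versus non-temporal eviction hint just mentioned can be illustrated with a toy cache model; this is purely a sketch of one possible policy (as noted below, processors may honor or ignore the hint in different ways), and the class and method names are hypothetical.

    # Illustrative sketch (not a description of any particular processor) of how
    # an eviction hint could be honored by a simple LRU-managed cache: temporal
    # fills are inserted at the most-recently-used end, while non-temporal fills
    # are inserted at the least-recently-used end so they are evicted first.
    from collections import OrderedDict

    class TinyCache:
        def __init__(self, capacity=4):
            self.capacity = capacity
            self.lines = OrderedDict()            # ordered from LRU (front) to MRU (back)

        def fill(self, address, non_temporal=False):
            if address in self.lines:
                self.lines.move_to_end(address)   # a hit refreshes the line
                return
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)    # evict the LRU line
            self.lines[address] = True
            if non_temporal:
                self.lines.move_to_end(address, last=False)   # mark for early eviction

    cache = TinyCache(capacity=2)
    cache.fill(0x100)                             # temporal fill
    cache.fill(0x200, non_temporal=True)          # hinted as non-temporal
    cache.fill(0x300)                             # forces an eviction
    assert 0x100 in cache.lines and 0x200 not in cache.lines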
This temporal classification, however, is a hint, and different processors may implement it in different ways, including ignoring the hint entirely.
Memory access instruction templates - non-temporal
Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the first-level cache and should be given priority for eviction. This, however, is a hint, and different processors may implement it in different ways, including ignoring the hint entirely.
Class B instruction templates
In the case of the instruction templates of class B, the alpha field 2852 is interpreted as a write mask control (Z) field 2852C, whose content distinguishes whether the write masking controlled by the write mask field 2870 should be a merging or a zeroing.
In the case of the non-memory access 2805 instruction templates of class B, part of the beta field 2854 is interpreted as an RL field 2857A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 2857A.1 and vector length (VSIZE) 2857A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 2812 instruction template and the no memory access, write mask control, VSIZE type operation 2817 instruction template), while the rest of the beta field 2854 distinguishes which of the operations of the specified type is to be performed. In the no memory access 2805 instruction templates, the scale field 2860, the displacement field 2862A, and the displacement factor field 2862B are not present.
In the no memory access, write mask control, partial round control type operation 2812 instruction template, the rest of the beta field 2854 is interpreted as a round operation field 2859A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler).
Round operation control field 2859A - just as the round operation control field 2858, its content distinguishes which one of a group of rounding operations to perform (e.g., round-up, round-down, round-towards-zero, and round-to-nearest). Thus, the round operation control field 2859A allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 2850 content overrides that register value.
In the no memory access, write mask control, VSIZE type operation 2817 instruction template, the rest of the beta field 2854 is interpreted as a vector length field 2859B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 byte).
In the case of the memory access 2820 instruction templates of class B, part of the beta field 2854 is interpreted as a broadcast field 2857B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 2854 is interpreted as the vector length field 2859B. The memory access 2820 instruction templates include the scale field 2860, and optionally the displacement field 2862A or the displacement factor field 2862B.
With regard to the generic vector friendly instruction format 2800, a full opcode field 2874 is shown including the format field 2840, the base operation field 2842, and the data element width field 2864.
Although one embodiment in which the full opcode field 2874 includes all of these fields is shown, the full opcode field 2874 includes less than all of these fields in embodiments that do not support all of these fields. The complete opcode field 2874 provides the opcode (opcode).The augment operation field 2850, the data element width field 2864, and the write mask field 2870 allow for the on-instruction-by-instruction designations of these features in a generalized vector friendly instruction format.The combination of write mask field and data element width field creates various types of instructions because these instructions allow the mask to be applied based on different data element widths.The various instruction templates that appear in Category A and Category B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only Class A, only Class B, or may support both. For example, high-performance general out-of-order kernels intended for general purpose computing only support category B and endorsements intended primarily for graphics and / or scientific (throughput) computing support only category A, It is within the scope of the present invention that the cores of both support both (of course there are some mixes of templates and instructions from both categories, but not all of the templates and instructions from both categories). Likewise, a single processor may include multiple cores, all of which support the same class or where different cores support different classes. For example, in a processor with separate graphics and general purpose cores, one of the cores in the graphics core that is intended primarily for graphics and / or scientific computing supports only Class A while one or more of the general purpose cores Can be a high-performance general purpose core with out-of-order execution and register renaming that is only intended for general-purpose computing to support Class B. Another processor that does not have a separate graphics core may include one or more universally ordered or out-of-order cores that support both Class A and Class B. Of course, features from one class may also be implemented in other classes in different embodiments of the invention. Programs written in high-level languages may be made (eg, compiled in-time or statically compiled) in a variety of different forms of executables, including: 1) in the form of instructions only having classes supported by the target processor for execution; or 2) All combinations of instructions of the same type and having the control flow code selected to execute based on the instructions supported by the processor that is currently executing the code.Exemplary private vector friendly instruction formatFIG. 29 is a block diagram illustrating an example dedicated vector friendly instruction format according to an embodiment of the present invention. FIG. Figure 29 shows a dedicated vector friendly instruction format 2900 that specifies the location, size, interpretation, order of fields, and values for some of those fields, in the sense that the dedicated vector friendly instruction format 1300 is specific. The special vector friendly instruction format 2900 can be used to expand the x86 instruction set, and as such, some of these fields are similar or identical to those used in the existing x86 instruction set and its extensions (eg, AVX). 
The format remains the same as the prefix code field, the real opcode byte field, the MOD R / M field, the SIB field, the displacement field, and the immediate field with the extended existing x86 instruction set. The fields from FIG. 28 are shown with the fields from FIG. 29 mapped to the fields from FIG. 28.It should be understood that while the embodiments of the present invention have been described with reference to the specific vector friendly instruction format 2900 in the context of the universal vector friendly instruction format 2800 for illustrative purposes, the present invention is not limited to the specific vector friendly instruction format 2900, Except for the place. For example, the universal vector friendly instruction format 2800 envisions a variety of possible sizes for various fields, while the dedicated vector friendly instruction format 2900 is shown as having a particular size of field. As a specific example, although the data element width field 2864 is shown as a bit field in the dedicated vector friendly instruction format 2900, the present invention is not limited thereto (that is, the universal vector friendly instruction format 2800 contemplates other size).The generic vector friendly instruction format 2800 includes the fields listed below in the order shown in FIG. 29A.EVEX prefix (bytes 0-3) 2902 - encoded in four bytes.Format Field 2840 (EVEX Byte 0, Bits [7: 0]) - The first byte (EVEX Byte 0) is the format field 2840 and it contains 0x62 (used in one embodiment of the invention to distinguish vectors Friendly instruction format unique value).The second through fourth bytes (EVEX bytes 1-3) include a number of bit fields that provide dedicated capabilities.REX field 2905 (EVEX byte 1, bits [7-5]) - consists of the EVEX.R bit field (EVEX byte 1, bit [7] -R), the EVEX.X bit field 6] -X) and (2857BEX byte 1, bit [5] -B). The EVEX.R, EVEX.X and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields and are encoded in 1's complement format, ie ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. The other fields of these instructions encode the lower three bits (rrr, xxx, and bbb) of the register index as is known in the art, so that by adding EVEX.R, EVEX.X and EVEX.B Rrrr, Xxxx, and Bbbb are formed.REX 'field 2810, which is the first part of REX' field 2810, and is the EVEX.R 'bit field (EVEX word) used to encode the upper 16 or lower 16 registers of the extended 32 register sets Section 1, Bits [4] -R '). In one embodiment of the invention, this bit, along with other bits indicated below, is stored in bit-reversed format (in the well known x86 32-bit mode) as distinguished from the BOUND instruction whose actual opcode byte is 62, But does not accept the value 11 in the MOD field in the MOD R / M field (described below); alternative embodiments of the present invention do not store the bits of this indication and other indicated bits below in an inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R ', EVEX.R and other RRRs from other fields.Opcode Map Field 2915 (EVEX Byte 1, Bits [3: 0] -mmmm) - Its content encodes the implied leading opcode byte (0F, 0F 38, or 0F 3).Data Element Width field 2864 (EVEX byte 2, bit [7] -W) - is indicated by the notation EVEX.W. 
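A minimal sketch of pulling apart EVEX byte 1 as laid out above (the inverted R, X, and B extension bits, the R' bit, and the opcode map field mmmm) is shown below; it is an illustration of the described bit layout only, not a complete decoder, and the function name is hypothetical.

    # Illustrative sketch of splitting EVEX byte 1 into the fields described
    # above. R, X, B and R' are stored in inverted (1's complement) form, so a
    # stored 1 means the extension bit is 0; mmmm selects the implied opcode map.
    def decode_evex_byte1(byte1: int) -> dict:
        return {
            "R": ((byte1 >> 7) & 1) ^ 1,
            "X": ((byte1 >> 6) & 1) ^ 1,
            "B": ((byte1 >> 5) & 1) ^ 1,
            "R_prime": ((byte1 >> 4) & 1) ^ 1,
            "mmmm": byte1 & 0x0F,          # implied leading opcode byte(s), e.g., 1 for 0F
        }

    fields = decode_evex_byte1(0b11110001)  # all extension bits inactive, map 1
    assert fields == {"R": 0, "X": 0, "B": 0, "R_prime": 0, "mmmm": 1}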
EVEX.W is used to define the granularity (size) of a data type (a 32-bit data element or a 64-bit data element).EVEX.vvvv 2920 (EVEX Byte 2, Bits [6: 3] -vvvv) - The role of EVEX.vvvv can include the following: 1) EVEX.vvvv encodes the first source register operand and has the effect of having two or more The instruction of the source operand is valid and the first source register operand is specified in the form of inversion (one's complement); 2) EVEX.vvvv encodes the destination register operand for a particular vector The displacement is specified in one's complement; or 3) EVEX.vvvv does not encode any operands, preserves this field, and should contain 1111b. Thus, the EVEX.vvvv field 2920 encodes the 4 lower order bits of the first source register descriptor stored in inverted (one's complement) format. Depending on the instruction, an extra different EVEX bit field is used to expand the size of the specifier to 32 registers.EVEX.U Class 2868 field (EVEX byte 2, bit [2] -U) - if EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B Or EVEX.U1.The prefix encoding field 2925 (EVEX byte 2, bits [1: 0] -pp) - provides additional bits for the underlying operation field. In addition to providing support for legacy SSE instructions in EVEX prefix format, this also has the benefit of tightening the SIMD prefix (the EVEX prefix requires only 2 bits instead of bytes to express the SIMD prefix). In one embodiment, to support legacy SSE instructions that use SIMD prefixes (66H, F2H, F3H) in both the legacy format and the EVEX prefix format, these legacy SIMD prefixes are encoded into SIMD prefix encoding fields; and at runtime , Is extended to a traditional SIMD prefix before the PLA is provided to the decoder (so the PLA can execute these legacy instructions in legacy and EVEX formats without modification). Although newer instructions may use the contents of the EVEX prefix encoding field directly as an opcode extension, some embodiments extend in a similar manner for consistency, but allow different meanings to be assigned by these conventional SIMD prefixes. Alternative embodiments may be redesigned to support 2-bit SIMD prefix coding and thus do not require expansion.Alpha field 2852 (EVEX byte 3, bits [7] -EH, also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control and EVEX.N; also shown as alpha) As stated earlier, this field is context-sensitive.The beta field 2854 (EVEX byte 3, bits [6: 4] -SSS, also referred to as EVEX.s 2-0, EVEX.r 2-0, EVEX.rrl, EVEX.LLO, EVEX.LLB, Shown) - As mentioned earlier, this field is context-sensitive.REX 'field 2810, which is the remainder of the REX' field, and is an EVEX.V 'bit field (EVEX bytes) that can be used to encode the upper 16 or lower 16 registers of the extended 32 register sets 3, bit [3] -V '). This bit is stored in bit-reversed format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V ', EVEX.vvvv.Write Mask Field 2870 (EVEX Byte 3, Bits [2: 0] -kkk) - Its contents specify the register index in the writemask register as previously described. In one embodiment of the invention, the particular value EVEX.kkk = 000 has a special behavior that implies that no write mask is used for a particular instruction (this can be done in a variety of ways, including using write masks hard-wired to all ones Or bypass mask hardware hardware to achieve).The real opcode field 2930 (byte 4) is also referred to as the opcode byte. 
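Following the same pattern, the payload bytes 2 and 3 described above can be split into their fields as sketched below (W, the inverted vvvv, U, and pp in byte 2; alpha, beta, the inverted V', and kkk in byte 3). Again, this is only an illustration of the described layout with a hypothetical function name, not a full decoder.

    # Illustrative sketch of splitting EVEX bytes 2 and 3 into the fields
    # described above; vvvv and V' are stored in inverted (1's complement) form.
    def decode_evex_bytes_2_3(byte2: int, byte3: int) -> dict:
        return {
            "W":     (byte2 >> 7) & 1,            # data element width
            "vvvv":  (~(byte2 >> 3)) & 0xF,       # source specifier, stored inverted
            "U":     (byte2 >> 2) & 1,            # class A (0) or class B (1)
            "pp":    byte2 & 0x3,                 # compressed legacy SIMD prefix
            "alpha": (byte3 >> 7) & 1,            # EH / rs / write mask control bit
            "beta":  (byte3 >> 4) & 0x7,          # SSS bits, interpretation is contextual
            "V":     ((byte3 >> 3) & 1) ^ 1,      # V' bit, stored inverted
            "kkk":   byte3 & 0x7,                 # write mask register index
        }

    fields = decode_evex_bytes_2_3(byte2=0b10000101, byte3=0b00001010)
    assert fields["W"] == 1 and fields["vvvv"] == 0xF and fields["pp"] == 0b01
    assert fields["kkk"] == 0b010 and fields["V"] == 0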
Part of the opcode is specified in the real opcode field 2930.
MOD R/M field 2940 (byte 5) includes MOD field 2942, Reg field 2944, and R/M field 2946. As previously described, the MOD field 2942 content distinguishes between memory access and non-memory access operations. The role of the Reg field 2944 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of the R/M field 2946 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.
Scale, Index, Base (SIB) byte (byte 6) - as previously described, the scale field's 2850 content is used for memory address generation. SIB.xxx 2954 and SIB.bbb 2956 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.
Displacement field 2862A (bytes 7-10) - when MOD field 2942 contains 10, bytes 7-10 are the displacement field 2862A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.
Displacement factor field 2862B (byte 7) - when MOD field 2942 contains 01, byte 7 is the displacement factor field 2862B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 2862B is a reinterpretation of disp8; when using the displacement factor field 2862B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement, but with a much greater range). Such a compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 2862B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 2862B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths, but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).
The immediate field 2872 operates as previously described.
Full opcode field
FIG. 29B is a block diagram illustrating the fields of the dedicated vector friendly instruction format 2900 that make up the full opcode field 2874 according to one embodiment of the invention. Specifically, the full opcode field 2874 includes the format field 2840, the base operation field 2842, and the data element width (W) field 2864.
The base operation field 2842 includes the prefix encoding field 2925, the opcode map field 2915, and the real opcode field 2930.
Register index field
FIG. 29C is a block diagram illustrating the fields of the dedicated vector friendly instruction format 2900 that make up the register index field 2844 according to one embodiment of the invention. Specifically, the register index field 2844 includes the REX field 2905, the REX' field 2910, the MODR/M.reg field 2944, the MODR/M.r/m field 2946, the VVVV field 2920, the xxx field 2954, and the bbb field 2956.
Augmentation operation field
FIG. 29D is a block diagram illustrating the fields of the dedicated vector friendly instruction format 2900 that make up the augmentation operation field 2850 according to one embodiment of the invention. When the class (U) field 2868 contains 0, it signifies EVEX.U0 (class A 2868A); when it contains 1, it signifies EVEX.U1 (class B 2868B). When U=0 and the MOD field 2942 contains 11 (signifying a no memory access operation), the alpha field 2852 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 2852A. When the rs field 2852A contains a 1 (round 2852A.1), the beta field 2854 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 2854A. The round control field 2854A includes a one-bit SAE field 2856 and a two-bit round operation field 2858. When the rs field 2852A contains a 0 (data transform 2852A.2), the beta field 2854 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data transform field 2854B. When U=0 and the MOD field 2942 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 2852 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 2852B and the beta field 2854 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data manipulation field 2854C.
When U=1, the alpha field 2852 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 2852C. When U=1 and the MOD field 2942 contains 11 (signifying a no memory access operation), part of the beta field 2854 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 2857A; when it contains a 1 (round 2857A.1), the rest of the beta field 2854 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 2859A, while when the RL field 2857A contains a 0 (VSIZE 2857A.2), the rest of the beta field 2854 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 2859B (EVEX byte 3, bits [6-5] - L1-0). When U=1 and the MOD field 2942 contains 00, 01, or 10 (signifying a memory access operation), the beta field 2854 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 2859B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 2857B (EVEX byte 3, bit [4] - B).
Example register architecture
FIG. 30 is a block diagram of a register architecture 3000 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 3010 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15.
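A small sketch of this register overlay - each xmm register being the low 128 bits of the corresponding ymm register, which is in turn the low 256 bits of the corresponding zmm register - is given below. It models only the aliasing; whether upper bits are preserved or zeroed by a given write depends on the instruction and embodiment, and this sketch simply preserves them. The class and method names are hypothetical.

    # Illustrative model of the zmm/ymm/xmm overlay described above: reads and
    # writes of a narrower form touch only the low-order bits of the backing
    # 512-bit register. Upper bits are simply preserved in this model.
    WIDTH_BITS = {"xmm": 128, "ymm": 256, "zmm": 512}

    class VectorRegisterFile:
        def __init__(self):
            self.zmm = [0] * 32                    # 32 architectural 512-bit registers

        def write(self, form, index, value):
            low_mask = (1 << WIDTH_BITS[form]) - 1
            self.zmm[index] = (self.zmm[index] & ~low_mask) | (value & low_mask)

        def read(self, form, index):
            return self.zmm[index] & ((1 << WIDTH_BITS[form]) - 1)

    regs = VectorRegisterFile()
    regs.write("zmm", 0, (1 << 300) | 0xABCD)      # set a full 512-bit value
    regs.write("xmm", 0, 0x1234)                   # replaces only the low 128 bits
    assert regs.read("xmm", 0) == 0x1234
    assert (regs.read("zmm", 0) >> 300) & 1 == 1   # bits above 128 are untouched here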
A dedicated vector friendly instruction format 2900 operates on these overlapping register files as shown in the following table.In other words, vector length field 2859B selects between a maximum length and one or more other shorter lengths, where each such shorter length is one-half of the previous length and does not have a vector template of vector length field 2859B The maximum vector length operation. In addition, in one embodiment, the Class B instruction templates of the dedicated vector friendly instruction format 2900 operate on packed or scalar single / double precision floating point data as well as packed or scalar integer data. The scalar operation is the operation performed on the lowest order data element position in the zmm / ymm / xmm register; depending on the embodiment, the higher order data element positions remain the same or zeroed prior to the instruction.Write mask register 3015 - in the embodiment shown, there are eight write mask registers (k0 through k7), each of size 64 bits. In an alternative embodiment, the size of write mask register 3015 is 16 bits. As previously described, in one embodiment of the present invention, the vector mask register k0 can not be used as a write mask; when the encoding of the normal indication k0 is used as a write mask, it selects the hard-wired write mask 0xFFFF , Effectively deactivating the instruction's write mask operation.General Purpose Register 3025 - In the embodiment shown, there are sixteen 64-bit general-purpose registers that are used with existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP and R8 to R15.Scalar floating point stack register file (x87 stack) 3045, on which the MMX packed integer flat register file 3050 is overlaid - in the illustrated embodiment, the x87 stack is used for x87 instruction set expansion to 32/64 / 80-bit floating-point data performs an eight-element stack of scalar floating-point operations; instead, the MMX registers are used to perform operations on 64-bit packed integer data and operands for certain operations that are performed between the MMX and XMM registers.Alternate embodiments of the present invention may use wider or narrower registers. In addition, alternative embodiments of the present invention may use more, less or different register files and registers.Example core architecture, processor and computer architectureThe processor cores can be implemented in different ways for different purposes in different processors. For example, implementations of such cores may include: 1) a general purpose ordered core intended for general purpose computing; 2) a high performance generalized out of order core intended for general purpose computing; 3) a general purpose ordered kernel intended for graphics and / Or science (throughput) calculation of the special core. Implementations of different processors may include: 1) a CPU including one or more general purpose ordered cores intended for general purpose computing and / or one or more general out of order cores intended for general purpose computing; and 2) Includes coprocessors intended for one or more specialized cores primarily for graphics and / or science (throughput). 
Such disparate processors result in different computer system architectures, which may include 1) a coprocessor on a chip separate from the CPU, 2) a coprocessor on a die in the same package as the CPU but separate, 3) Co-Processors on the same die as the CPU (in which case such co-processors are sometimes referred to as dedicated logic such as integrated graphics and / or scientific (throughput) logic or the like, Core); and 4) an on-chip system that can include the described CPUs (sometimes referred to as application cores or application processors), the coprocessors and additional functions described above on the same die. An example core architecture is next described, followed by an example processor and computer architecture.Example Core ArchitectureOrdered and chaotic nuclear block diagram31A is a block diagram illustrating an example ordered pipeline and an exemplary register renaming out of sequence issue / execution pipeline in accordance with various embodiments of the present invention. FIG. 31B is a block diagram illustrating an example embodiment of an out-of-order issue / execute architecture core of an exemplary architecture and an exemplary register renaming to be included in a processor in accordance with various embodiments of the present invention. FIG. The solid boxes in FIGS. 31A-B show the ordered pipelines and the ordered kernels, while the optional augmented dashed boxes show the register renaming, out-of-order issue / execution pipelines and cores. Given an ordered aspect that is a subset of the out-of-order aspect, the out-of-order aspect will be described.31A, processor pipeline 3100 includes fetch stage 3102, length decode stage 3104, decode stage 3106, dispatch stage 3108, rename stage 3110, schedule (also referred to as dispatch or publish) stage 3112, register read / memory read Fetch stage 3114, execute stage 3116, write back / memory write stage 3118, exception handling stage 3122, and commit stage 3124.FIG. 31B shows a processor core 3190 that includes a front end unit 3130 coupled to an execution engine unit 3150, and both execution engine units and front end units are coupled to a memory unit 3170. The core 3190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As a further option, the core 3190 may be a dedicated core such as, for example, a network or communication core, a compacting engine, a coprocessor core, a general purpose computing graphics processing unit (GPGPU) core, a graphics core, and the like.The front end unit 3130 includes a branch prediction unit 3132 coupled to an instruction cache unit 3134, an instruction cache unit 3134 coupled to an instruction translation lookaside buffer (TLB) 3136, an instruction translation lookaside buffer 3136 coupled to the instruction fetch unit 3138, an instruction fetch unit 3138 is coupled to the decoding unit 3140. Decode unit 3140 (or decoder) may decode the instructions and generate one or more micro-operations, micro-code entry points, micro-instructions, instructions that are decoded from the original instructions, or that otherwise reflect the original instructions, or that are derived from the original instructions , Other instructions, or other control signals as output. Decoding unit 3140 may be implemented using a variety of different mechanisms. 
Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs) and the like. In one embodiment, the core 3190 includes a microcode ROM or other medium that stores microcode for certain macros (eg, in the decode unit 3140 or otherwise within the front end unit 3130). Decoding unit 3140 is coupled to rename / allocator unit 3152 in execution engine unit 3150.Execution engine unit 3150 includes a rename / allocator unit 3152 coupled to retirement unit 3154 and a set of one or more scheduler units 3156. Scheduler unit 3156 represents any number of different schedulers, including reservation stations, central command windows, and the like. Scheduler unit 3156 is coupled to physical register bank unit 3158. Each physical register file unit 3158 represents one or more physical register file (s), where different physical register files store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer , Vector floating point, status (eg, an instruction pointer that is the address of the next instruction to be executed), and the like. In one embodiment, the physical register file unit 3158 includes a vector register unit, a write mask register unit, and a scalar register unit. These register units can provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file unit 3158 overlaps the retire unit 3154 to show various ways in which register renaming and out-of-order execution can be implemented (eg, using reorder buffers and retire register files; using future files, history buffers and Retire a register file; use a register map and a register pool, etc.). Retirement unit 3154 and physical register file unit 3158 are coupled to execution cluster 3160. Execution cluster 3160 includes a set of one or more execution units 3162 and a set of one or more memory access units 3164. Execution unit 3162 may perform various operations (eg, shift, addition, subtraction, multiplication) on various types of data (eg, scalar floating point, packed integer, packed floating point, vector integer, vector floating point) Although some embodiments may include several execution units dedicated to a particular function or set of functions, other embodiments may include only one execution unit that executes one or all of the functions. Scheduler unit 3156, physical register file unit 3158, and execution cluster 3160 are shown as potentially multiple because some embodiments create separate pipelines for certain types of data / operations (eg, each having its own scheduler unit , A physical register file unit, and / or a clustered scalar integer pipeline, scalar floating-point / packed integer / packed floating-point / vector integer / vector floating-point pipeline and / or memory access pipeline - and in the case of separate memory access pipelines , Some embodiments in which only the execution cluster of the pipeline has the memory access unit 3164 are implemented. It should also be understood that where separate pipelines are used, one or more of these pipelines may be issued / executed out of order and the rest ordered.The set of memory access units 3164 is coupled to a memory unit 3170 that includes a data TLB unit 3172 coupled to a data cache unit 3174 where the data cache unit is coupled to a level 2 (L2) cache unit 3176. 
In an exemplary embodiment, the memory access units 3164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 3172 in the memory unit 3170. The instruction cache unit 3134 is further coupled to the level 2 (L2) cache unit 3176 in the memory unit 3170. The L2 cache unit 3176 is coupled to one or more other levels of cache and eventually to main memory.
By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 3100 as follows: 1) the instruction fetch 3138 performs the fetch and length decoding stages 3102 and 3104; 2) the decode unit 3140 performs the decode stage 3106; 3) the rename/allocator unit 3152 performs the allocation stage 3108 and the renaming stage 3110; 4) the scheduler unit 3156 performs the schedule stage 3112; 5) the physical register file unit 3158 and the memory unit 3170 perform the register read/memory read stage 3114; the execution cluster 3160 performs the execute stage 3116; 6) the memory unit 3170 and the physical register file unit 3158 perform the write back/memory write stage 3118; 7) various units may be involved in the exception handling stage 3122; and 8) the retirement unit 3154 and the physical register file unit 3158 perform the commit stage 3124.
The core 3190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, California; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, California), including the instructions described herein. In one embodiment, the core 3190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.
It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in Intel® Hyper-Threading Technology).
While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 3134/3174 and a shared L2 cache unit 3176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.
Specific exemplary in-order core architecture
FIGS. 32A-B show a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.
Depending on the application, these logic blocks communicate with some fixed functional logic, memory I / O interfaces, and other necessary I / O logic through a high-bandwidth interconnection network (eg, a ring network).Figure 32A is a block diagram of a single processor core and its local subset 3204 of its level 2 (L2) cache with its connection to the on-die interconnect network 3202, in accordance with various embodiments of the present invention. In one embodiment, the instruction decoder 3200 supports the x86 instruction set with a packed data instruction set extension. L1 cache 3206 allows for low latency access to cache memory in scalar and vector locations. Although in one embodiment (to simplify the design), scalar units 3208 and vector units 3210 use separate sets of registers (scalar register 3212 and vector register 3214, respectively), and the data transferred between these registers is written to memory And then read back from the first level (L1) cache 3206, alternative embodiments of the invention may use different approaches (eg, using a single set of registers or including allowing data to be transferred between the two register files without being Write and read back the communication path).The local subset of L2 caches 3204 is part of a global L2 cache that is divided into separate local subsets, ie, a local subset for each processor core. Each processor core has a direct access path to a local subset 3204 of its own L2 cache. The data read by the processor core is stored in its L2 cache subset 3204 and can be quickly accessed in parallel with other processor cores accessing their own local L2 cache subset. The data written by the processor core is stored in its own L2 cache subset 3204 and is flushed from other subsets if necessary. The ring network ensures the consistency of shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data path is 1012 bits wide in each direction.Figure 32B is an expanded view of a portion of the processor core in Figure 32A in accordance with an embodiment of the present invention. 32B includes an L1 data cache 3206A portion of L1 cache 3204, as well as more detail regarding vector unit 3210 and vector registers 3214. In particular, vector unit 3210 is a 16-wide vector processing unit (VPU) (see 16-wide ALU 3228) that executes one or more of integer, single-precision floating-point and double-precision floating-point instructions. The VPU supports mixing of register inputs through the blending unit 3220, numeric conversions via numeric conversion units 3222A-B, and replication of memory input through the copy unit 3224. Write mask register 3226 allows the resulting vector to be asserted.Processor with integrated memory controller and graphicsFigure 33 is a block diagram of a processor 3300 that may have more than one core, may have an integrated memory controller, and may have an integrated graphics device according to an embodiment of the present invention. 
The solid box of Figure 33 shows a processor 3300 having a single core 3302A, a system agent 3310, a set of one or more bus controller units 3316, and an optional additional dashed box showing alternative processors 3300 having a plurality of cores 3302A-N, a set of one or more integrated memory controller units 3314 in the system agent unit 3310, and application specific logic 3308.Accordingly, different implementations of processor 3300 may include: 1) a CPU, where dedicated logic 3308 is integrated graphics and / or scientific (throughput) logic (which may include one or more cores), and cores 3302A-N are either one or A plurality of general purpose cores (eg, a general purpose ordered core, a generalized out of order core, a combination of both); 2) a coprocessor, where cores 3302A-N are intended to be used primarily for graphics and / or science Quantities); and 3) coprocessors, where cores 3302A-N are a plurality of universally-ordered cores. Thus, the processor 3300 may be a general-purpose processor, a coprocessor, or a dedicated processor such as, for example, a network or communications processor, a compression engine, a graphics processor, a GPGPU (general purpose graphics processing unit), a high throughput, MIC) coprocessor (including 30 or more cores), or embedded processor. The processor can be implemented on one or more chips. The processor 3300 may be part of one or more substrates, and / or the processor may be implemented on one or more substrates using any one of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS .The memory hierarchy includes one or more levels of caches within a core, a set or one or more shared cache units 3306, and a set of external memories (not shown) coupled to the integrated memory controller unit 3314. The set of shared cache units 3306 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of caching, Cache (LLC), and / or a combination of the above. Although in one embodiment, the ring-based interconnect unit 3312 interconnects the integrated graphics logic 3308, the set of shared cache units 3306, and the system agent unit 3310 / integrated memory controller unit 3314, alternative embodiments may use any number Known technology to interconnect these cells. In one embodiment, the coherency between one or more cache units 3306 and cores 3302-A-N may be maintained.In some embodiments, one or more of the cores 3302A-N can implement multithreading. System agent 3310 includes those components that coordinate and operate on cores 3302A-N. The system agent unit 3310 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or may include the logic and components required to regulate the power states of the cores 3302A-N and the integrated graphics logic 3308. The display unit is used to drive one or more externally connected displays.The cores 3302A-N may be homogeneous or heterogeneous in terms of architectural instruction sets; that is, two or more of these cores 3302A-N may be able to execute the same instruction set while other cores may be able to execute the Only a subset of the instruction set or a different instruction set.Exemplary computer architectureFigure 34-37 is a block diagram of an exemplary computer architecture. 
A wide range of applications for laptop devices, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, Other system designs and configurations for video game devices, set-top boxes, microcontrollers, cell phones, portable media players, hand-held devices, and various other electronic devices are also suitable. In general, multiple systems or electronic devices capable of incorporating the processors and / or other execution logic disclosed herein are generally suitable.Referring now to FIG. 34, shown is a block diagram of a system 3400 in accordance with one embodiment of the present invention. System 3400 may include one or more processors 3410, 3415 coupled to a controller hub 3420. In one embodiment, the controller hub 3420 includes a graphics memory controller hub (GMCH) 3490 and an input / output hub (IOH) 3450 (which may be on separate chips); the GMCH 3490 includes a memory and graphics controller, a memory 3440 And coprocessor 3445 are coupled to the memory and graphics controller; IOH 3450 couples input / output (I / O) device 3460 to GMCH 3490. Alternatively, one or both of the memory and the graphics controller are integrated within the processor (as described herein), the memory 3440 and the coprocessor 3445 are directly coupled to the processor 3410 and the controller hub 3420, which controller The hub and IOH 3450 are in a single chip.The optional properties of the additional processor 3415 are shown in dashed lines in FIG. 34. Each processor 3410, 3415 may include one or more of the processing cores described herein, and may be some version of the processor 3300.Memory 3440 may be, for example, a dynamic random access memory (DRAM), a phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 3420 communicates with the processors 3410, 3415 via a multi-drop bus such as a front side bus (FSB), a point-to-point interface such as a fast lane interconnect (QPI), or similar connection 3495 Communication.In one embodiment, coprocessor 3445 is a dedicated processor, such as, for example, a high-throughput MIC processor, a network or communications processor, a compacting engine, a graphics processor, a GPGPU, or an embedded processor and the like. In one embodiment, the controller hub 3420 may include an integrated graphics accelerator.There are various differences in quality metrics between physical resources 3410, 3415 that include architecture, microarchitecture, thermal, power consumption characteristics, and the like.In one embodiment, processor 3410 executes instructions that control a general type of data processing operations. Coprocessor instructions can be embedded in these instructions. The processor 3410 identifies these coprocessor instructions as the type that should be performed by the attached coprocessor 3445. Accordingly, processor 3410 publishes these coprocessor instructions (or control signals representing coprocessor instructions) to coprocessor 3445 on a coprocessor bus or other interconnect. Coprocessor 3445 accepts and executes the received coprocessor instructions.Referring now to FIG. 35, shown is a block diagram of a first more specific exemplary system 3500 in accordance with an embodiment of the present invention. As shown in FIG. 
35, the multiprocessor system 3500 is a point-to-point interconnect system and includes a first processor 3570 and a second processor 3580 coupled via a point-to-point interconnect 3550. Each of processors 3570 and 3580 may be some version of processor 3300. In one embodiment of the present invention, the processors 3570 and 3580 are the processors 3410 and 3415, respectively, and the coprocessor 3538 is the coprocessor 3445. In another embodiment, the processors 3570 and 3580 are a processor 3410 and a coprocessor 3445, respectively.Processors 3570 and 3580 are shown as including integrated memory controller (IMC) units 3572 and 3582, respectively. Processor 3570 also includes point-to-point (P-P) interfaces 3576 and 3578 as part of its bus controller unit; similarly, second processor 3580 includes P-P interfaces 3586 and 3588. The processors 3570, 3580 may exchange information via the P-P interface 3550 using point-to-point (P-P) interface circuits 3578, 3588. As shown in FIG. 35, the IMCs 3572 and 3582 couple the processors to respective memories, namely, memory 3532 and memory 3534, which may be portions of main memory locally connected to the respective processors.The processors 3570, 3580 may each exchange information with the chipset 3590 via respective P-P interfaces 3552, 3554 that use point-to-point interface circuits 3576, 3594, 3586, 3598. Chipset 3590 may optionally exchange information with coprocessor 3538 via high performance interface 3539. In one embodiment, coprocessor 3538 is a dedicated processor such as, for example, a high-throughput MIC processor, a network or communications processor, a compression engine, a graphics processor, a GPGPU, or an embedded processor and the like.A shared cache (not shown) may be included in either processor or external to both but connected to these processors via the PP interconnect so that if the processor is placed in a low-power mode, either Or local cache information for both processors may be stored in the shared cache.Chipset 3590 may be coupled to first bus 3516 via interface 3596. In one embodiment, the first bus 3516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or other third generation I / O interconnect bus, but the scope of the present invention is not affected This limit.As shown in FIG. 35, various I / O devices 3514 may be coupled to the first bus 3516 along with a bus bridge 3518, which couples the first bus 3516 to the second bus 3520. In one embodiment, a processor such as a coprocessor, a high throughput MIC processor, a GPGPU, an accelerator (eg, a graphics accelerator or digital signal processing (DSP) unit), a field programmable gate array, or any other processor One or more additional processors 3515 of the class are coupled to a first bus 3516. In one embodiment, the second bus 3520 may be a low pin count (LPC) bus. In one embodiment, various devices may be coupled to the second bus 3520, various devices including, for example, a keyboard and / or mouse 3522, a communications device 3527, and a storage unit 3528, such as may include instructions / code and data 3530 disk drive or other mass storage device. In addition, the audio I / O 3524 may be coupled to the second bus 3520. Note that other architectures are possible. For example, instead of the point-to-point architecture in Figure 35, the system may implement a multi-drop bus or other such architecture.Referring now to FIG. 
36, shown is a block diagram of a second more specific exemplary system 3600 in accordance with an embodiment of the present invention. Like elements in FIGS. 35 and 36 bear like reference numerals, and certain aspects of FIG. 35 have been omitted from FIG. 36 in order to avoid obscuring other aspects of FIG. 36. FIG. 36 illustrates that the processors 3570, 3580 may include integrated memory and I/O control logic ("CL") 3572 and 3582, respectively. Thus, the CL 3572, 3582 include integrated memory controller units and include I/O control logic. FIG. 36 illustrates that not only are the memories 3532, 3534 coupled to the CL 3572, 3582, but also that I/O devices 3614 are coupled to the control logic 3572, 3582. Legacy I/O devices 3615 are coupled to the chipset 3590. Referring now to FIG. 37, shown is a block diagram of a SoC 3700 in accordance with an embodiment of the present invention. Similar elements in FIG. 33 bear like reference numerals. Also, dashed-line boxes are optional features on more advanced SoCs. In FIG. 37, an interconnect unit 3702 is coupled to: an application processor 3710 that includes a set of one or more cores 202A-N and shared cache unit(s) 3306; a system agent unit 3310; a bus controller unit 3316; an integrated memory controller unit 3314; a set of one or more coprocessors 3720, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 3730; a direct memory access (DMA) unit 3732; and a display unit 3740 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 3720 include a special-purpose processor, such as, for example, a network or communications processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like. Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code, such as the code 3530 illustrated in FIG. 35, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language. One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); magnetic or optical cards; or any other type of media suitable for storing electronic instructions. Accordingly, embodiments of the present invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as a hardware description language (HDL), which defines the structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products. Emulation (including binary translation, code morphing, etc.): in some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction into one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on the processor, off the processor, or partly on and partly off the processor. FIG. 38 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set into binary instructions in a target instruction set in accordance with an embodiment of the present invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 38 shows that a program in a high-level language 3802 may be compiled using an x86 compiler 3804 to generate x86 binary code 3806 that may be natively executed by a processor 3816 having at least one x86 instruction set core. The processor 3816 having at least one x86 instruction set core represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 3804 represents a compiler operable to generate x86 binary code 3806 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor 3816 having at least one x86 instruction set core. Similarly, FIG.
38 shows that the program in the high-level language 3802 may be compiled using an alternative instruction set compiler 3808 to generate alternative instruction set binary code 3810 that may be natively executed by a processor 3814 that does not have at least one x86 instruction set core (e.g., a processor with cores that execute a MIPS instruction set from MIPS Technologies Inc. and/or that execute an ARM instruction set from ARM Holdings, Inc. of Sunnyvale, California). The instruction converter 3812 is used to convert the x86 binary code 3806 into code that may be natively executed by the processor 3814 that does not have an x86 instruction set core. This converted code is not likely to be the same as the alternative instruction set binary code 3810, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 3812 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 3806. |
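For readers who want a concrete picture of the instruction-converter concept discussed above in connection with FIG. 38, the following minimal Python sketch shows one way a source-ISA instruction stream could be rewritten into target-ISA instructions. The opcode names, the mapping table, and the helper function are illustrative assumptions only; they are not part of the disclosure and do not represent the converter 3812.

```python
# Minimal, hypothetical sketch of the instruction-converter idea: each
# source-ISA instruction is rewritten as one or more target-ISA instructions.
# The opcodes and the mapping table below are illustrative only.

SOURCE_TO_TARGET = {
    # one source instruction may expand to several target instructions
    "PUSH": ["SUB sp, sp, #8", "STR {0}, [sp]"],
    "POP":  ["LDR {0}, [sp]", "ADD sp, sp, #8"],
    "MOV":  ["MOV {0}, {1}"],
}

def convert(source_program):
    """Translate a list of (opcode, operands) tuples into target-ISA text."""
    target = []
    for opcode, operands in source_program:
        templates = SOURCE_TO_TARGET.get(opcode)
        if templates is None:
            raise ValueError(f"unsupported source instruction: {opcode}")
        target.extend(t.format(*operands) for t in templates)
    return target

if __name__ == "__main__":
    program = [("PUSH", ["r1"]), ("MOV", ["r0", "r1"]), ("POP", ["r1"])]
    print("\n".join(convert(program)))
```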
A processing system and method for communicating in a processing system over a bus is disclosed. The processing system includes a receiving device, a bus having first, second and third channels, and a sending device configured to address the receiving device on the first channel, and read a payload from the receiving device on the second channel, the sending device being further configured to select between the first and third channels to write a payload to the receiving device. |
A processing system, including: a receiving device; a bus having first, second and third channels; and a sending device configured to address the receiving device on the first channel, read from the receiving device on the second channel, write to the receiving device on the third channel, and select between: a first bus transmission mode wherein payload write data is to be written to the receiving device on the first channel or third channel; and a second bus transmission mode wherein first payload write data is to be written to the receiving device on the third channel during a first clock cycle and second payload write data is to be concurrently written to the receiving device on the first channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information. The processing system of claim 1, wherein, in the second bus transmission mode, the sending device is further configured to write the second payload write data to a first address of the receiving device on the first channel and write the first payload write data to a second address of the receiving device on the third channel. The processing system of claim 1, further including a second receiving device, and wherein, in the second bus transmission mode, the sending device is further configured to write the second payload write data to the receiving device on the first channel and write the first payload write data to the second receiving device on the third channel. The processing system of claim 1, wherein the bus further includes a fourth channel, the sending device being further configured to address the receiving device on the first channel for write operations and address the receiving device on the fourth channel for read operations, and wherein, in the first bus transmission mode, the sending device is further configured to select between the first, third and fourth channels to write the first payload write data to the receiving device. 2. The processing system of claim. 4, wherein the sending device is further configured, in the second bus transmission mode, to write the first payload write data to a first address of the receiving device on one of the first, third and fourth channels and write the second payload write data to a second address of the receiving device on another one of the first, third and fourth channels. The processing system of claim 4, wherein the sending device is further configured, in the second bus transmission mode, to write the second payload write data to a first address of the receiving device on the first channel, write the first payload write data to a second address of the receiving device on the third channel, and write a third payload write data to a third address of the receiving device on the fourth channel. The processing system of claim 4, further including a second receiving device, and wherein the sending device is further configured, in the second bus transmission mode, to write the first payload write data to the receiving device on one of the first, third and fourth channels and write the second payload write data to the second receiving device on another one of the first, third and fourth channels. 
The processing system of claim 4, further including second and third receiving devices, and wherein the sending device is further configured, in the second bus transmission mode, to write the second payload write data to the receiving device on the first channel, write the first payload write data to the second receiving device on the third channel, and write a third payload to the third receiving device on the fourth channel. The processing system of claim 1, wherein the sending device is further configured to provide a control signal to the receiving device indicating whether the first channel is currently being used to address the receiving device or write the second payload write data to the receiving device. The processing system of claim 1, wherein the sending device is further configured to provide a control signal to the receiving device while addressing the receiving device, the control signal indicating whether a payload for the address will be written to the receiving device on the first or third channel. 2. The processing system of claim 1, wherein the sending device writes the first payload write data in accordance with the selected bus transmission mode. The processing system of claim 1, wherein writing of the first payload write data and writing of the second payload write data are completed during the first clock cycle. A processing system, including: a receiving device; a bus having first, second and third channels; means for addressing the receiving device on the first channel; means for reading from the receiving device on the second channel; means for writing to the receiving device on the third channel; and means for selecting between: a first bus transmission mode wherein payload write data is to be written to the receiving device on the first channel or the third channel; and a second bus transmission mode wherein second payload write data is to be written to the receiving device on the first channel during a first clock cycle and first payload write data is to be concurrently written to the receiving device on the third channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information. A method of communicating between a sending device and one or more receiving devices over a bus, the bus including first, second and third channels, the method including: addressing a receiving device on the first channel, reading from the receiving device on the second channel; writing to the receiving device on the third channel; and selecting between: a first bus transmission mode wherein payload write data is to be written to the receiving device on the first channel or the third channel; and a second bus transmission mode wherein second payload write data is to be written to the receiving device on the first channel during a first clock cycle and first payload write data is to be concurrently written to the receiving device on the third channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information. 
The method of claim 14, further including, in the second bus transmission mode, writing the second payload write data to a first address of the receiving device on the first channel and writing the first payload write data to a second address of the receiving device on the third channel. The method of claim 14, further including, in the second bus transmission mode, writing the second payload write data to the receiving device on the first channel and writing the first payload write data to a second receiving device on the third channel. The method of claim 14, wherein the bus further includes a fourth channel, and wherein the receiving device is addressed on the first channel for a write operation, the method further including addressing the receiving device on the fourth channel for a read operation, wherein in the first bus transmission mode, the method further includes selecting between the first, third and fourth channels to write the first payload write data to the receiving device. The method of claim 17, further including, in the second bus transmission mode, writing the first payload write data to a first address of the receiving device on one of the first, third and fourth channels and writing the second payload write data to a second address of the receiving device on another one of the first, third and fourth channels. The method of claim 17, further including, in the second bus transmission mode, writing the second payload write data to a first address of the receiving device on the first channel, writing the first payload write data to a second address of the receiving device on the third channel and writing a third payload write data to a third address of the receiving device on the fourth channel. The method of claim 17, further including, in the second bus transmission mode, writing the first payload write data to the receiving device on one of the first, third and fourth channels and writing the second payload write data to a second receiving device on another one of the first, third and fourth channels. The method of claim 17, further including, in the second bus transmission mode, writing the second payload write data to the receiving device on the first channel and writing the first payload 26 write data to a second receiving device on the third channel, and writing a third payload write data to a third receiving device on the fourth channel. The method of claim 14, further including providing a control signal to the receiving device indicating whether the first channel is currently being used to address the receiving device or write a second payload write data to the receiving device. The method of claim 14, further including providing a control signal to the receiving device while addressing the receiving device, the control signal indicating whether a payload for the address will be written to the receiving device on the first or third channel. 
A bus mastering device, including: a processor; and a bus interface configured to interface the processor to a bus having first, second and third channels, address a slave on the first channel, receive from the slave on the second channel, write to the slave on the third channel, and select between: a first bus transmission mode wherein payload write data is to be sent to the slave on the first channel or the third channel; and a second bus transmission mode wherein second payload write data is to be sent to the slave on the first channel during a first clock cycle and first payload write data is to be concurrently sent to the slave on the third channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information. The bus mastering device of claim 24, wherein the bus interface device is further configured, in the second bus transmission mode, to send the second payload write data to a first address of the slave on the first channel and send the first payload write data to a second address of the slave on the third channel. The bus mastering device of claim 24, wherein the bus interface is further configured, in the second bus transmission mode, to send the second payload write data to the slave on the first channel and send the first payload write data to a second slave on the third channel. 2. The bus mastering device of claim 24, wherein the bus further includes a fourth channel, and wherein the bus interface is further configured to address the slave on the first channel for write operations and address the slave on the fourth channel for read operations, and wherein, in the first bus transmission mode, the bus interface is further configured to select between the first, third and fourth channels to write the first payload write data to the slave. The bus mastering device of claim 27, wherein the bus interface is further configured, in the second bus transmission mode, to send the first payload write data to a first address of the slave on one of the first, third and fourth channels and send the second payload write data to a second address of the slave on another one of the first, third and fourth channels. The bus mastering device of claim 27, wherein the bus interface is further configured, in the second bus transmission mode, to send the second payload write data to a first address of the slave on the first channel, send the first payload write data to a second address of the slave on the third channel, and send a third payload to a third address of the slave on the fourth channel. The bus mastering device of claim 27, wherein the bus interface is further configured, in the second bus transmission mode, to send the first payload write data to the slave on one of the first, third and fourth channels and send the second payload write data to a second slave on another one of the first, third and fourth channels. The bus mastering device of claim 27, further including second and third slaves, and wherein the bus interface is further configured, in the second bus transmission mode, to send the second payload write data to the slave on the first channel, send the first payload write data to a second slave on the third channel, and send a third payload to a third slave on the fourth channel. 
The bus mastering device of claim 24, wherein the bus interface is further configured to provide a control signal to the slave indicating whether the first channel is currently being used to address the slave or send the second payload write data to the slave. The bus mastering device of claim 24, wherein the bus interface is further configured to provide a control signal to the slave while addressing the slave, the control signal indicating whether a payload for the address will be sent to the slave on the first or third channel. 2. The slave device of claim 24, wherein the bus interface is further configured to receive a control signal from the bus mastering device while the memory is being addressed, the control signal indicating whether a payload for the address will be received on the first or third channel. A bus mastering device, including: a processor; and means for interfacing the processor to a bus having first, second and third channels, the means for interfacing the processor to the bus including means for addressing a slave on the first channel, means for receiving from the slave on the second channel, means for writing to the slave on the third channel, and means for selecting between: a first bus transmission mode wherein payload write data is to be sent to the slave on the first channel or the third channel; and a second bus transmission mode wherein second payload write data is to be sent to the slave on the first channel during a first clock cycle and first payload write data is to be concurrently sent to the slave on the third channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information. A slave device, including: memory; and a bus interface configured to interface the memory to a bus having first, second and third channels, receive a memory address on the first channel, in a first bus transmission mode, receive payload data from a bus mastering device on the first channel or the second channel, in a second bus transmission mode, receive second payload write data from the bus mastering device on the first channel during a first clock cycle and concurrently receive first payload write data from the bus mastering device on the second channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information, and send a payload to the bus mastering device on the third channel. The slave device of claim 36, wherein the bus further includes a fourth channel, wherein the memory is further configured to be addressed through the bus interface by the bus mastering 29 device on the first channel for write operations and on the fourth channel for read operation, the bus interface being further configured to receive, in the second bus transmission mode, a third payload write data from the bus mastering device on the fourth channel. The slave device of claim 36, wherein the bus interface is further configured to receive a control signal from the bus mastering device indicating whether the first channel is currently being used to address the memory or receive the second payload write data. 
A slave device, including: memory; and means for interfacing the memory to a bus having first, second and third channels, the means for interfacing the memory to the bus including means for receiving a memory address on the first channel, means for receiving, in a first bus transmission mode, payload data from a bus mastering device on the first channel or the second channel, means for receiving, in a second bus transmission mode, a second payload write data from the bus mastering device on the first channel, during a first clock cycle and concurrently receiving first payload write data from the bus mastering device on the second channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information, and means for sending a payload to the bus mastering device on the third channel. |
AUXILIARY WRITES OVER ADDRESS CHANNEL RELATED APPLICATIONS [0001] The present Application for Patent claims priority to Provisional Application No. 60/776,517 entitled "Auxiliary Writes Over Address Channel" filed February 24, 2006, and assigned to the assignee hereof and hereby expressly incorporated by reference herein. BACKGROUND Field [0002] The present disclosure relates generally to processing systems, and more specifically, to systems and techniques for performing auxiliary writes over the address channel of a bus. Background [0003] At the heart of most modern processing systems is an interconnect referred to as a bus. The bus moves information between various processing entities in the system. Today, most bus architectures are fairly standardized. These standardized bus architectures typically have independent and separate read, write and address channels. [0004] This type of bus architecture is often found in processing systems with one or more general purpose processors supported by memory. In these systems, the memory provides a storage medium that holds the programs and data needed by the processors to perform their functions. A processor may read or write to the memory by placing an address on the address channel and sending the appropriate read/write control signal. Depending on the state of the read/write control, the processor either writes to the memory over the write channel or reads from the memory over the read channel. In these types of processing systems, as well as many others, it is desirable to reduce the write latency and increase the write bandwidth. SUMMARY [0005] According to a first aspect of the present invention, there is provided a processing system, including: a receiving device; a bus having first, second and third channels; and a sending device configured to address the receiving device on the first channel, read from the receiving device on the second channel, write to the receiving device on the third channel, and select between: a first bus transmission mode wherein payload write data is to be written to the receiving device on the first channel or third channel; and a second bus transmission mode wherein first payload write data is to be written to the receiving device on the third channel during a first clock cycle and second payload write data is to be concurrently written to the receiving device on the first channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information. [0006] According to a second aspect of the present invention, there is provided a processing system, including: a receiving device; a bus having first, second and third channels; means for addressing the receiving device on the first channel; means for reading from the receiving device on the second channel; means for writing to the receiving device on the third channel; and means for selecting between: a first bus transmission mode wherein payload write data is to be written to the receiving device on the first channel or the third channel; and a second bus transmission mode wherein second payload write data is to be written to the receiving device on the first channel during a first clock cycle and first payload write data is to be concurrently written to the receiving device on the third channel during the first
clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information. [0007] According to a third aspect of the present invention, there is provided a method of communicating between a sending device and one or more receiving devices over a bus, the bus including first, second and third channels, the method including: addressing a receiving device on the first channel; reading from the receiving device on the second channel; writing to the receiving device on the third channel; and selecting between: a first bus transmission mode wherein payload write data is to be written to the receiving device on the first channel or the third channel; and a second bus transmission mode wherein second payload write data is to be written to the receiving device on the first channel during a first clock cycle and first payload write data is to be concurrently written to the receiving device on the third channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information. [0008] According to a fourth aspect of the present invention, there is provided a bus mastering device, including: a processor; and a bus interface configured to interface the processor to a bus having first, second and third channels, address a slave on the first channel, receive from the slave on the second channel, write to the slave on the third channel, and select between: a first bus transmission mode wherein payload write data is to be sent to the slave on the first channel or the third channel; and a second bus transmission mode wherein second payload write data is to be sent to the slave on the first channel during a first clock cycle and first payload write data is to be concurrently sent to the slave on the third channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information.
[0009] According to a fifth aspect of the present invention, there is provided a bus mastering device, including: a processor; and means for interfacing the processor to a bus having first, second and third channels, the means for interfacing the processor to the bus including means for addressing a slave on the first channel, means for receiving from the slave on the second channel, means for writing to the slave on the third channel, and means for selecting between: a first bus transmission mode wherein payload write data is to be sent to the slave on the first channel or the third channel; and a second bus transmission mode wherein second payload write data is to be sent to the slave on the first channel during a first clock cycle and first payload write data is to be concurrently sent to the slave on the third channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information. [0010] According to a sixth aspect of the present invention, there is provided a slave device, including: memory; and a bus interface configured to interface the memory to a bus having first, second and third channels, receive a memory address on the first channel, in a first bus transmission mode, receive payload data from a bus mastering device on the first channel or the second channel, in a second bus transmission mode, receive second payload write data from the bus mastering device on the first channel during a first clock cycle and concurrently receive first payload write data from the bus mastering device on the second channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information, and send a payload to the bus mastering device on the third channel. [0011] According to a seventh aspect of the present invention, there is provided a slave device, including: memory; and means for interfacing the memory to a bus having first, second and third channels, the means for interfacing the memory to the bus including means for receiving a memory address on the first channel, means for receiving, in a first bus transmission mode, payload data from a bus mastering device on the first channel or the second channel, means for receiving, in a second bus transmission mode, a second payload write data from the bus mastering device on the first channel during a first clock cycle and concurrently receiving first payload write data from the bus mastering device on the second channel during the first clock cycle, wherein the first payload write data is associated with a first write operation and the second payload write data is associated with a second write operation, and wherein the first and second payload write data are distinct from address and control information, and means for sending a payload to the bus mastering device on the third channel. [0012] It is understood that other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein various embodiments of the invention are shown and described by way of illustration.
As will be realized, the invention is capable of other and different embodiments and its several details are capable of modification in various other respects, all without departing from the spirit and scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive. BRIEF DESCRIPTION OF THE DRAWINGS [0013] Aspects of the present invention are illustrated by way of example, and not by way of limitation, in the accompanying drawings, wherein: [0014] FIG. 1 is a simplified block diagram illustrating an example of two devices in a processing system communicating over a bus; [0015] FIG. 2 is an illustration showing information flowing on the address and write channels of a bus in the processing system of FIG. 1 with the address channel providing a generic medium for addresses and payloads; [0016] FIG. 3 is a timing diagram showing three write operations over a bus in the processing system of FIG. 1; [0017] FIG. 4 is a simplified block diagram illustrating a sending device in communication with two receiving devices in a processing system; [0018] FIG. 5 is an illustration showing information flowing on the address and write channels of a bus in the processing system of FIG. 4; [0019] FIG. 6 is a simplified block diagram illustrating an example of two devices in a processing system communicating over a 4-channel bus; [0020] FIG. 7 is a timing diagram showing three write operations over a bus in the processing system of FIG. 6; [0021] FIG. 8 is a simplified block diagram illustrating a sending device in communication with three receiving devices in a processing system; and [0022] FIG. 9 is an illustration showing information flowing on the read and write address channels and write channels of a bus in the processing system of FIG. 8. DETAILED DESCRIPTION [0023] The detailed description set forth below in connection with the appended drawings is intended as a description of various embodiments of the present invention and is not intended to represent the only embodiments in which the present invention may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the present invention. [0024] FIG. 1 is a simplified block diagram illustrating an example of two devices in a processing system communicating over a bus. The processing system 100 may be a collection of hardware devices that cooperate to perform one or more processing functions. Typical applications of the processing system 100 include, but are not limited to, desktop computers, laptop computers, servers, cellular phones, personal digital assistants (PDA), game consoles, pagers, modems, audio equipment, medical devices, automotive equipment, video equipment, industrial equipment, or any other machine or device capable of processing, retrieving and storing information. [0025] The processing system 100 is shown with a sending device 102 in communication with a receiving device 104 over a bus 106. The bus 106 includes three channels: an address channel 106a, a write channel 106b, and a read channel 106c.
A "channel" is defined as a set of electrical conductors used to carry information between two devices in the processing system. In this example, the address channel is 32-bits wide, and the write and read channels are each 64-bits wide. Typically, a bus interconnect (not shown) will be used to establish a point-to-point communications path between the sending device 102 and the receiving device 104 over the bus 106. Alternatively, the bus 106 may be a dedicated bus, a shared bus, or any other type of suitable bus architecture. [0026] The sending device 102 may be any type of bus mastering device. In this example, the sending device 102 includes a processor 108 and a bus interface 110. The processor 108 may be a general purpose processor, such as a microprocessor, a special purpose processor, such as a digital signal processor (DSP), an application specific integrated circuit (ASIC), a direct memory access (DMA) controller, a bridge, a programmable logic component, or any other entity that requires access to the bus 106. The bus interface 110 is used to drive the address and write channels 106a, 106b, as well as provide the appropriate control signals. The bus interface 110 also serves as a receiver for the read channel 106c. [0027] The receiving device 104 may be any type of slave device. The receiving device 104 may be temporary memory, such as SDRAM, DRAM, or RAM, or a longer term storage device such as flash memory, ROM memory, EPROM memory, EEPROM memory, CD-ROM, DVD, magnetic disk, rewritable optical disk and the like. Alternatively, the receiving device 104 may be a bridge or any other device capable of retrieving and storing information. In this example, the receiving device 104 includes a bus interface 112 and memory 114. The bus interface 112 is used to drive the read channel 106c and the appropriate control signals. The bus interface 112 also serves as a receiver for the address and write channels 106a, 106b. The memory 114 may be any device whose contents can be accessed (i.e., read and written to) randomly. [0028] In this bus architecture, the sending device 102 may read or write to the receiving device 104. When the sending device 102 performs a write operation, it sends the address to the receiving device 104 on the address channel 106a with the appropriate control signals. The payload may be sent either on the address channel 106a or the write channel 106b. The "payload" refers to the data associated with a particular read or write operation, and in this case, a write operation. When the sending device performs a read operation, it sends the address to the receiving device 104 on the address channel 106a with the appropriate control signals. In response, the receiving device 104 sends the payload to the sending device 102 on the read channel 106c. [0029] An example of three write operations will now be described with reference to FIG. 2. FIG. 2 is an illustration showing the information flowing on the address and write channels. In this example, the sending device initiates a 32-byte write operation followed by two 8-byte write operations. [0030] Referring to FIG. 2, on the first clock cycle 202, the sending device initiates the 32-byte write operation by sending a 4-byte address A1 to the receiving device on the address channel 106a with the appropriate control signals. During the same clock cycle 202, the sending device also sends the first 8-bytes of the first payload W1(1) to the receiving device on the write channel 106b.
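As a rough structural sketch only (the class and field names below are assumptions, not part of the disclosure), the FIG. 1 arrangement can be pictured as a bus master and a slave joined by three independent channels, with a write whose payload may travel on either the address channel or the write channel:

```python
# A minimal structural sketch (assumed names, not the reference implementation)
# of the FIG. 1 arrangement: a sending device and a receiving device joined by
# a bus with address, write and read channels.
from dataclasses import dataclass, field

@dataclass
class ReceivingDevice:            # slave: bus interface + memory
    memory: dict = field(default_factory=dict)

@dataclass
class Bus:                        # three independent channels
    address_width_bits: int = 32
    write_width_bits: int = 64
    read_width_bits: int = 64

@dataclass
class SendingDevice:              # bus master: processor + bus interface
    bus: Bus

    def write(self, slave: ReceivingDevice, address: int, payload: bytes,
              payload_on_address_channel: bool = False) -> str:
        """Perform a write; the payload may travel on either channel."""
        slave.memory[address] = payload
        return "address channel" if payload_on_address_channel else "write channel"

    def read(self, slave: ReceivingDevice, address: int) -> bytes:
        """Address goes out on the address channel; payload returns on the read channel."""
        return slave.memory.get(address, b"")

bus = Bus()
master = SendingDevice(bus)
slave = ReceivingDevice()
print(master.write(slave, 0x1000, b"\x11" * 8, payload_on_address_channel=True))
print(master.read(slave, 0x1000))
```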
[0031] The sending device initiates the next write operation during the second clock cycle 204 by sending a 4-byte address A2 to the receiving device on the address channel 106a with the appropriate control signals, before completion of the first write operation. The sending device continues to transmit the first payload during the same clock cycle by sending the second 8-bytes W1(2) to the receiving device on the write channel 106b. [0032] The sending device then uses the next two clock cycles 206 and 208 to send the second payload to the receiving device on the address channel 106a, while concurrently completing the transmission of the first payload on the write channel 106b. In particular, in the third clock cycle 206, the sending device sends to the receiving device the first 4-bytes of the second payload W2(1) on the address channel 106a and the third 8-bytes of the first payload W1(3) on the write channel 106b. On the fourth clock cycle 208, the sending device sends to the receiving device the final 4-bytes of the second payload W2(2) on the address channel 106a and the final 8-bytes of the first payload W1(4) on the write channel 106b. [0033] The sending device initiates the third write operation on the fifth clock cycle 210 by sending a 4-byte address A3 to the receiving device on the address channel 106a with the appropriate control signals. During the same clock cycle 210, the sending device also sends the third payload W3 to the receiving device on the write channel 106b. [0034] Two control signals may be added to the address channel 106a to create a medium to support the transmission of both addresses and payloads. The first control signal, referred to as an "Address/Data" signal, is used to indicate whether the information being transmitted on the address channel 106a is an address or a payload. In this example, when the Address/Data signal is asserted, an address is being transmitted on the address channel 106a. Conversely, when the Address/Data signal is deasserted, a payload is being transmitted on the address channel 106a. The second control signal, referred to as a "Transfer Attribute," is used when transmitting an address on the address channel 106a. When an address is being transmitted, the "Transfer Attribute" signal is used to indicate whether the payload for that address will be transmitted on the address channel 106a or the write channel 106b. [0035] An example illustrating how these control signals may be used will now be described with reference to FIG. 3. The bus protocol for the address and write channels 106a, 106b is shown below in Table 1. This bus protocol is being used to illustrate the inventive aspects of a processing system, with the understanding that such inventive aspects may be used with other bus protocols. Those skilled in the art will readily be able to vary and/or add signals to this protocol in the actual implementation of the bus architectures described herein.
TABLE 1
Address Channel
Address (driven by the sending device): 32-bit medium to transmit addresses and payloads.
Address/Data (driven by the sending device): Indicates whether the information being transmitted on the address channel is an address or a payload.
AValid (driven by the sending device): Indicates whether valid information is being transmitted on the address channel.
Transfer Attribute (driven by the sending device): Indicates whether the payload for the current address will be transmitted on the address channel or the write channel.
Read/Write (driven by the sending device): Indicates whether a read or write operation is being requested.
Payload Size (driven by the sending device): Indicates the size of the payload for the current address.
Address Transfer Ack (driven by the receiving device): Indicates whether the receiving device has successfully received information transmitted on the address channel.
Write Channel
Write (driven by the sending device): 64-bit medium to transmit payloads.
WValid (driven by the sending device): Indicates whether valid information is being transmitted on the write channel.
Write Transfer Ack (driven by the receiving device): Indicates whether the receiving device has successfully received information transmitted on the write channel.
[0036] FIG. 3 is a timing diagram showing the control signaling for the same three write operations described above in connection with FIG. 2. A System Clock 306 may be used to synchronize communications between the sending and receiving devices. The System Clock 306 is shown with five clock cycles, with each clock cycle numbered sequentially. [0037] A write operation may be initiated on the address channel 106a by the sending device during the first clock cycle 301. This operation may be achieved by transmitting the address A1 for the first write operation on the 32-bit Address medium 308. Concurrently, the sending device asserts the AValid, Address/Data, and Transfer Attribute signals 312, 313, 314. The asserted AValid signal 312 indicates that valid information is being transmitted on the address channel 106a, the asserted Address/Data signal 313 indicates that the information is an address A1, and the asserted Transfer Attribute signal 314 indicates that the payload for the address A1 will be transmitted on the write channel 106b. The sending device also deasserts the Read/Write signal 316 to request a write operation. The Payload Size 318 signal may be used to indicate the size of the payload, which in this case is 32-bytes. [0038] During the same first clock cycle 301, the sending device uses the Write medium 320 to transmit the first 8-bytes of the first payload W1(1). The sending device also asserts the WValid signal 324 to indicate that valid information is being transmitted on the write channel 106b. [0039] At the end of the first clock cycle 301, the sending device checks for an asserted Address Transfer Ack signal 310 to confirm the successful delivery of the address A1 over the address channel 106a to the receiving device. The sending device also checks for an asserted Write Transfer Ack signal 322 to confirm the successful delivery of the first 8-bytes of the first payload W1(1) over the write channel 106b to the receiving device. [0040] On the second clock cycle 302, the sending device transmits the address A2 for the second write operation on the 32-bit Address medium 308 before the first write operation completes. The sending device asserts the AValid signal 312 to indicate that valid information is being transmitted on the address channel 106a. The sending device also asserts the Address/Data signal 313 to indicate that the information is an address A2. The Transfer Attribute signal 314 is deasserted to indicate that the payload for the address A2 will be transmitted on the address channel 106a. The sending device also deasserts the Read/Write signal 316 to request a write operation. The Payload Size 318 signal may be used to indicate the size of the payload, which in this case is 8-bytes.
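The address-channel signal bundle of Table 1 can be summarized with the following Python sketch; the field names are assumptions, but the two kinds of beats mirror clock cycles 301 and 303 of FIG. 3 (an address beat whose payload follows on the write channel, and a payload-only beat on which Transfer Attribute, Read/Write and Payload Size are ignored):

```python
# A sketch (assumed field names) of the address-channel signal bundle from
# Table 1, and the two kinds of beats used in FIG. 3.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AddressChannelBeat:
    address_or_data: bytes        # 32-bit Address medium
    avalid: bool                  # valid information this cycle
    is_address: bool              # Address/Data: True = address, False = payload
    payload_on_write_channel: Optional[bool] = None  # Transfer Attribute (address beats only)
    read: Optional[bool] = None   # Read/Write (address beats only)
    payload_size: Optional[int] = None

# Clock cycle 301 of FIG. 3: address A1, 32-byte payload to follow on the write channel.
a1_beat = AddressChannelBeat(address_or_data=b"\x00\x00\x10\x00", avalid=True,
                             is_address=True, payload_on_write_channel=True,
                             read=False, payload_size=32)

# Clock cycle 303 of FIG. 3: first 4 bytes of payload W2 carried on the
# address channel itself; Transfer Attribute, Read/Write and Payload Size
# are "don't care" for payload beats.
w2_beat = AddressChannelBeat(address_or_data=b"\xde\xad\xbe\xef", avalid=True,
                             is_address=False)
```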
[0041] During the same second clock cycle 302, the sending device uses the Write medium 320 to send the second 8-bytes of the first payload W1(2). The sending device also asserts the WValid signal 324 to indicate that valid information is being transmitted on the write channel 106b. [0042] At the end of the second clock cycle 302, the sending device checks for an asserted Address Transfer Ack signal 310 to confirm the successful delivery of the address A2 over the address channel 106a to the receiving device. The sending device also checks for an asserted Write Transfer Ack signal 322 to confirm the successful delivery of the second 8-bytes of the first payload W1(2) over the write channel 106b to the receiving device. [0043] On the third clock cycle 303, the sending device transmits the first 4-bytes of the second payload W2(1) on the 32-bit Address medium 308. The sending device asserts the AValid signal 312 to indicate that valid information is being transmitted on the address channel 106a and deasserts the Address/Data signal 313 to indicate that the information is part of a payload. The state of the Transfer Attribute signal 314, Read/Write signal 316, and Payload Size 318 signal can be ignored during this clock cycle. In FIG. 3, the states for these signals remain unchanged, but they could be set to any state. [0044] During the same third clock cycle 303, the sending device uses the Write medium 320 to send the third 8-bytes of the first payload W1(3). The sending device also asserts the WValid signal 324 to indicate that valid information is being transmitted on the write channel 106b. [0045] At the end of the third clock cycle 303, the sending device checks for an asserted Address Transfer Ack signal 310 to confirm the successful delivery of the first 4-bytes of the second payload W2(1) over the address channel 106a to the receiving device. The sending device also checks for an asserted Write Transfer Ack signal 322 to confirm the successful delivery of the third 8-bytes of the first payload W1(3) over the write channel 106b to the receiving device. [0046] On the fourth clock cycle 304, the sending device transmits the final 4-bytes of the second payload W2(2) on the 32-bit Address medium 308. The sending device asserts the AValid signal 312 to indicate that valid information is being transmitted on the address channel 106a and deasserts the Address/Data signal 313 to indicate that the information is part of a payload. The state of the Transfer Attribute signal 314, Read/Write signal 316, and Payload Size 318 signal can be ignored during the payload tenure. [0047] During the same fourth clock cycle 304, the sending device uses the Write medium 320 to send the final 8-bytes of the first payload W1(4). The sending device continues to assert the WValid signal 324 to indicate that valid information is being transmitted on the write channel 106b.
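For reference, the cycle-by-cycle schedule of FIGS. 2 and 3 can be tabulated as in the short sketch below (the representation itself is illustrative only): the 32-byte payload W1 streams over the write channel for four cycles while the 8-byte payload W2 rides the address channel, and W3 follows on the fifth cycle.

```python
# A per-cycle trace (assumed representation) of the three write operations of
# FIGS. 2 and 3.
schedule = [
    # cycle, address channel,        write channel
    (1,      "A1 (W1 on write ch.)", "W1(1)"),
    (2,      "A2 (W2 on addr ch.)",  "W1(2)"),
    (3,      "W2(1)",                "W1(3)"),
    (4,      "W2(2)",                "W1(4)"),
    (5,      "A3 (W3 on write ch.)", "W3"),
]
for cycle, addr_ch, write_ch in schedule:
    print(f"cycle {cycle}: address channel = {addr_ch:24s} write channel = {write_ch}")
```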
[00491 On the fifth clock cycle 305, the sending device transmits the address A3 for the third write operation on the 32-bit Address medium 308. The sending device asserts the AValid signal 312 to indicate that valid information is being transmitted on the address channel 106a. The sending device also asserts the Address/Data signal 313 to indicate that the information being transmitted on the address channel 106a is an address A3. The Transfer Attribute signal 314 is also asserted by the sending device to indicate that the payload for the address A3 will be transmitted on the write channel 106b. The Read/Write signal 316 remains deasserted to request a write operation. The Payload Size 318 signal may be used to indicate the size of the payload, which in this case is 8 bytes. [00501 During the same fifth clock cycle 305, the sending device uses the Write medium 320 to send the payload W3. The sending device also asserts the WValid signal 324 to indicate that valid information is being transmitted on the write channel 106b. [00511 At the end of the fifth clock cycle 305, the sending device checks for an asserted Address Transfer Ack signal 310 to confirm the successful delivery of the address A3 over the address channel 106a to the receiving device. The sending device also checks for an asserted Write Transfer Ack signal 322 to confirm the successful delivery of the third payload W3 over the write channel 106b to the receiving device. [0052] FIG. 4 is a simplified block diagram illustrating a sending device 402 in communication with two receiving devices 404a, 404b through a bus interconnect 416 in a processing system 400. In this example, the sending device 402 can write to both WO 2007/101170 PCT/US2007/062830 12 receiving devices 404a, 404b concurrently using the 32-bit address channel 406a as a medium for transmitting addresses and payloads to the bus interconnect 416. The bus interconnect 416 can then use the 32-bit address channels 406ai, 406a 2 to address the receiving devices 404a, 404b and the 64-bit write channels 406bi, 406b2 to transmit the payloads. In the case where the bus interconnect 416 needs to perform multiple write operations to one or both receiving devices 404a, 404b, the address channels 406ai, 406a2 may also be used as media to transmit both addresses and payloads. [0053] An example will now be described with reference to FIG. 5. FIG. 5 is an illustration showing the information flowing on the address and write channels. In this exampic, the bus interconnect 416 will provide point-to-point connections that allow each transmission from the sending device 402 to reach one of the receiving devices 404a, 404b in the same clock cycle. In practice, however, the bus interconnect 416 may be a clocked device with buffering (see FIG. 4). [0054] Referring to FIG. 5, the sending device initiates a 32-byte write operation followed by an 8-byte write operation. On the first clock cycle 502, the sending device initiates the 32-byte write operation by sending an address Al to the bus interconnect on the address channel 406a with the appropriate control signals. During the same clock cycle 502, the sending device also sends the first 8-bytes of the first payload Wl(l) to the bus interconnect on the write channel 406b. 
The bus interconnect transmits the address A1 to the first receiving device 404a on the first receiving device's address channel 406a1, and transmits the first 8-bytes of the first payload W1(1) to the first receiving device 404a on the first receiving device's write channel 406b1. [0055] On the second clock cycle 504, the sending device initiates the next write operation by sending an address A2 to the bus interconnect on the address channel 406a with the appropriate control signals. During the same clock cycle 504, the sending device also sends the second 8-bytes of the first payload W1(2) to the bus interconnect on the write channel 406b. The bus interconnect 416 transmits the address A2 to the second receiving device 404b on the second receiving device's address channel 406a2, and transmits the second 8-bytes of the first payload W1(2) to the first receiving device 404a on the first receiving device's write channel 406b1. [0056] On the third and fourth clock cycles 506, 508, the sending device sends the remainder of the first payload W1(3), W1(4) through the bus interconnect to the first receiving device 404a on the write channels 406b, 406b1. During the same third and fourth clock cycles 506, 508, the sending device transmits the second payload W2(1), W2(2) to the bus interconnect on the address channel 406a. The second payload W2(1), W2(2), being only 8-bytes, may be transmitted in the third and fourth clock cycles 506, 508 by the bus interconnect to the second receiving device over half the byte lanes on the second receiving device's write channel 406b2. Alternatively, the bus interconnect can transmit the entire payload during the fourth clock cycle 508 on the 64-bit write channel 406b2 for the second receiving device, as shown. [0057] FIG. 6 is a simplified block diagram illustrating an example of two devices in a processing system 600 communicating over a 4-channel bus. A separate and independent address channel is provided for each of the read and write channels. In this example, each channel is 32-bits wide, but may be any width in practice depending upon the particular application and overall design constraints. A write operation over the 4-channel bus may be performed in the same way described earlier in connection with the 3-channel bus. That is, the sending device 602 transmits addresses on the write address channel 606a and payloads on both the write address channel 606a and the write channel 606b. The difference between the two bus architectures is the manner in which the read operation is performed. A read operation over the 4-channel bus is performed by sending to the receiving device 604 the address on a read address channel 606d. In response, the receiving device 604 sends the payload to the sending device 602 on the read channel 606c. [0058] An example will now be described with reference to FIG. 7. The bus protocol for the address and write channels 606a, 606b, 606d is listed below in Table 2. This bus protocol is being used to illustrate the inventive aspects of a processing system, with the understanding that such inventive aspects may be used with other bus protocols. Those skilled in the art will readily be able to vary and/or add signals to this protocol in the actual implementation of the bus architectures described herein.
TABLE 2
Write Address Channel
Write Address (driven by the sending device): 32-bit medium to transmit write addresses and payloads.
Write Adres Channel ia Definitinriven By payloads. Write Address/Data Indicates whether the Sending Device information being transmitted on the write address channel is a write address or a payload. Transfer Attribute Indicates whether the Sending Device payload for the current address will be transmitted on the write address channel, read address channel or the write channel. Write AValid Indicates whether valid Sending Device information is being transmitted on the write address channel. Write Payload Size Indicates the size of the Sending Device payload for the current write address. Write Address Transfer Indicates whether the Receiving Device Ack receiving device has successfully received information transmitted on the write address channel. Red Address Channel Sigl Definition Driven By Read Address 32-bit medium to transmit Sending Device read addresses and payloads. Read Address/Data Indicates whether the Sending Device information being transmitted on the read address channel is a read address or a payload. WO 2007/101170 PCT/US2007/062830 .. ead Addres Channel Signal Definition, Driven By Read AValid Indicates whether valid information is being transmitted on the read address channel. Read Payload Size Indicates the size of the Sending Device payload for the current read address. Rcad Address Transfer Indicates whether the Receiving Devicc Ack receiving device has successfully received information transmitted on the read address channel. Write Channel Signal Definition Driven By Write 32-bit medium to transmit Sending Device payloads. WValid Indicates whether valid Sending Device information is being transmitted on the write channel. Write Transfer Ack Indicates whether the Receiving Device receiving device has successfully received information transmitted on the write channel. [00591 The protocol for the Transfer Ack signal on the write address channel is shown below in Table 3. TABLE 3 _Transfer Attribute: _ _ _Definition 000 Payload for the current address will be transmitted on the write channel. 001 Payload for the current address will be transmitted on the write address channel. WO 2007/101170 PCT/US2007/062830 16 010 Payload for the current address will be transmitted on the read address channel. 011 Reserved [0060] FIG. 7 is a timing diagram showing the control signaling for a 16-byte write operation followed by a 12-byte write operation and then a 4-byte write operation. A System Clock 706 may be used to synchronize communications between the sending and receiving devices. The System Clock 706 is shown with four clock cycles, with each clock cycle numbered sequentially. [0061] A write operation may be initiated on the address channel 606a by the sending device during the first clock cycle 701. This operation may be achieved by transmitting the address Al for the first write operation on the 32-bit Write Address medium 708. During the same clock cycle 701, the sending device asserts the Write AValid signal 712 to indicate that valid information is being transmitted on the write address channel 606a. The sending device also asserts the write Address/Data signal 713 to indicate that the information is an address Al. The sending device also sets the Transfer Attribute signal 714 to "000" to indicate that the payload for the address Al will be transmitted on the write channel 606b. The Payload Size 718 signal may be used to indicate the size of the payload, which in this case is 16-bytes. 
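To make the Table 2 and Table 3 signal set easier to follow, the sketch below models the write address channel signals for the first clock cycle of FIG. 7 (paragraph [0061]) in C. It is only an illustrative software model, not the bus hardware: the struct layout, field names, and helper function are invented here, and only the signal names, the Transfer Attribute encodings, and the cycle-701 values come from the text above.

```c
#include <stdint.h>
#include <stdbool.h>

/* Transfer Attribute encodings from TABLE 3 (write address channel). */
enum transfer_attribute {
    XFER_ON_WRITE_CHANNEL      = 0x0, /* 000: payload follows on the write channel         */
    XFER_ON_WRITE_ADDR_CHANNEL = 0x1, /* 001: payload follows on the write address channel */
    XFER_ON_READ_ADDR_CHANNEL  = 0x2, /* 010: payload follows on the read address channel  */
    XFER_RESERVED              = 0x3  /* 011: reserved                                     */
};

/* One clock cycle's worth of write-address-channel signals (sending-device view). */
struct write_addr_channel {
    uint32_t write_address;       /* 32-bit medium: carries an address or payload data   */
    bool     write_addr_is_data;  /* Write Address/Data: false = address, true = payload */
    enum transfer_attribute attr; /* where the payload for this address will travel      */
    bool     write_avalid;        /* Write AValid: medium carries valid information      */
    uint8_t  payload_size;        /* Write Payload Size, in bytes                        */
};

/* Signal state for clock cycle 701 of FIG. 7: address A1, 16-byte payload routed
 * to the write channel (Transfer Attribute "000"). */
static struct write_addr_channel cycle_701(uint32_t addr_a1)
{
    struct write_addr_channel c = {
        .write_address      = addr_a1,
        .write_addr_is_data = false,              /* an address, not payload */
        .attr               = XFER_ON_WRITE_CHANNEL,
        .write_avalid       = true,
        .payload_size       = 16,
    };
    return c;
}
```

The 12-byte write of cycle 702, whose payload is routed to the read address channel, would be represented the same way with attr set to XFER_ON_READ_ADDR_CHANNEL and payload_size set to 12.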
[0062] During the same first clock cycle 701, the sending device uses the Write medium 720 to transmit the first 4-bytes of the first payload W1(1). The sending device also asserts the WValid signal 724 to indicate that valid information is being transmitted on the write channel 606b.

[0063] At the end of the first clock cycle 701, the sending device checks for an asserted Write Address Transfer Ack signal 710 to confirm the successful delivery of the address A1 over the address channel 606a to the receiving device. The sending device also checks for an asserted Write Transfer Ack signal 722 to confirm the successful delivery of the first 4-bytes of the first payload W1(1) over the write channel 606b to the receiving device.

[0064] On the second clock cycle 702, the sending device transmits the address A2 for the second write operation on the 32-bit Address medium 708 before the first write operation completes. The sending device asserts the Write AValid signal 712 to indicate that valid information is being transmitted on the write address channel 606a. The sending device also asserts the Address/Data signal 713 to indicate that the information is an address A2. The sending device sets the Transfer Attribute signal 714 to "010" to indicate that the payload for the address A2 will be transmitted on the read address channel 606d. The Payload Size 718 signal may be used to indicate the size of the payload, which in this case is 12-bytes.

[0065] During the same second clock cycle 702, the sending device uses the Write medium 720 to transmit the second 4-bytes of the first payload W1(2), and asserts the WValid signal 724 to indicate that valid information is being transmitted on the write channel 606b. The sending device uses the Read Address medium 730 to send the first 4-bytes of the second payload W2(1), and asserts the Read AValid signal 728 to indicate that valid information is being transmitted on the read address channel 606d. The sending device deasserts the Read Address/Data signal 729 to indicate that the information being transmitted on the read address channel 606d is payload data.

[0066] At the end of the second clock cycle 702, the sending device checks for an asserted Write Address Transfer Ack signal 710 to confirm the successful delivery of the address A2 over the address channel 606a to the receiving device. The sending device also checks for asserted Write Transfer Ack and Read Address Transfer Ack signals 722, 726 to confirm the successful delivery of the payload data over the write and read address channels 606b, 606d.

[0067] On the third clock cycle 703, the sending device asserts the Write AValid signal 712 to indicate that valid information is being transmitted on the write address channel 606a. The sending device also asserts the Address/Data signal 713 to indicate that the information is an address A3. The sending device sets the Transfer Attribute signal 714 to "001" to indicate that the payload for the address A3 will be transmitted on the write address channel 606a. The Payload Size 718 signal may be used to indicate the size of the payload, which in this case is 4-bytes.

[0068] During the same third clock cycle 703, the sending device uses the Write medium 720 to transmit the third 4-bytes of the first payload W1(3), and asserts the WValid signal 724 to indicate that valid information is being transmitted on the write channel 606b. 
The sending device uses the Read Address medium 730 to send the second 4-bytes of the second payload W2(2), and asserts the Read AValid signal 728 to indicate that valid information is being transmitted on the read address channel 606d. The sending device deasserts the Read Address/Data signal 729 to indicate that the information being transmitted on the read address channel 606d is payload data.

[0069] At the end of the third clock cycle 703, the sending device checks for an asserted Write Address Transfer Ack signal 710 to confirm the successful delivery of the address A3 over the address channel 606a to the receiving device. The sending device also checks for asserted Write Transfer Ack and Read Address Transfer Ack signals 722, 726 to confirm the successful delivery of the payload data over the write and read address channels 606b, 606d.

[0070] On the fourth clock cycle 704, the sending device uses the Write medium 720 to send the final 4-bytes of the first payload W1(4), and the Read Address medium 730 to send the final 4-bytes of the second payload W2(3). The sending device asserts the WValid and Read AValid signals 724, 728 to indicate that valid information is being transmitted on the write and read address channels 606b, 606d. The sending device deasserts the Read Address/Data signal 729 to indicate that the information being transmitted on the read address channel 606d is payload data.

[0071] The sending device uses the Write Address medium 708 to send the third payload W3, and asserts the Write AValid signal 712 to indicate that valid information is being sent on the write address channel 606a. The sending device deasserts the Address/Data signal 713 to indicate that the information transmitted on the write address channel 606a is payload data. The state of the Transfer Attribute signal 714 and Payload Size 718 signal may be ignored.

[0072] FIG. 8 is a simplified block diagram illustrating a sending device 802 in communication with three receiving devices 804a-804c through a bus interconnect 816 in a processing system 800. In this example, the sending device 802 can write to all three receiving devices 804a-804c concurrently using the read and write address channels 806d, 806a as media for transmitting addresses and payloads. The bus interconnect 816 can then use the write address channels 806a1, 806a2, 806a3 to address the receiving devices 804a, 804b, 804c and the write channels 806b1, 806b2, 806b3 to transmit the payloads. In the case where the bus interconnect 816 needs to perform multiple write operations to one or more receiving devices 804a, 804b, 804c, the read and write address channels 806d1, 806d2, 806d3, 806a1, 806a2, 806a3 may also be used as generic media to transmit both addresses and payloads.

[0073] An example will now be described with reference to FIG. 9. FIG. 9 is an illustration showing the information flowing on the address and write channels. In this example, the bus interconnect 816 will provide point-to-point connections that allow each transmission from the sending device 802 to reach one of the receiving devices 804a, 804b, 804c in the same clock cycle. In practice, however, the bus interconnect 816 may be a clocked device with buffering (see FIG. 8).

[0074] Referring to FIG. 9, on the first clock cycle 902, the sending device initiates the 16-byte write operation by sending an address A1 to the bus interconnect on the address channel 806a with the appropriate control signals. 
During the same clock cycle 902, the sending device also sends the first 4-bytes of the first payload W1(1) to the bus interconnect on the write channel 806b. The bus interconnect transmits the address A1 to the first receiving device 804a on the first receiving device's address channel 806a1, and transmits the first 4-bytes of the first payload W1(1) to the first receiving device 804a on the first receiving device's write channel 806b1.

[0075] On the second clock cycle 904, the sending device initiates the next write operation by sending an address A2 to the bus interconnect on the address channel 806a with the appropriate control signals. During the same clock cycle 904, the sending device also sends the second 4-bytes of the first payload W1(2) to the bus interconnect on the write channel 806b and the first 4-bytes of the second payload W2(1) to the bus interconnect on the read address channel 806d. The bus interconnect 816 transmits the address A2 to the second receiving device 804b on the second receiving device's address channel 806a2, transmits the second 4-bytes of the first payload W1(2) to the first receiving device 804a on the first receiving device's write channel 806b1, and transmits the first 4-bytes of the second payload W2(1) to the second receiving device 804b on the second receiving device's write channel 806b2.

[0076] On the third clock cycle 906, the sending device initiates the next write operation by sending an address A3 to the bus interconnect on the address channel 806a with the appropriate control signals. At the same time, the sending device also sends the third 4-bytes of the first payload W1(3) to the bus interconnect on the write channel 806b, and the second 4-bytes of the second payload W2(2) to the bus interconnect on the read address channel 806d. The bus interconnect 816 transmits the address A3 to the third receiving device 804c on the third receiving device's address channel 806a3, transmits the third 4-bytes of the first payload W1(3) to the first receiving device 804a on the first receiving device's write channel 806b1, and transmits the second 4-bytes of the second payload W2(2) to the second receiving device 804b on the second receiving device's write channel 806b2.

[0077] On the fourth clock cycle 908, the sending device sends the final 4-bytes of the first payload W1(4) to the bus interconnect on the write channel 806b, the final 4-bytes of the second payload W2(3) to the bus interconnect on the read address channel 806d, and the third payload W3 to the bus interconnect on the write address channel 806a. The bus interconnect 816 transmits the final 4-bytes of the first payload W1(4) to the first receiving device 804a on the first receiving device's write channel 806b1, transmits the final 4-bytes of the second payload W2(3) to the second receiving device 804b on the second receiving device's write channel 806b2, and transmits the third payload W3 to the third receiving device 804c on the third receiving device's write channel 806b3.

[0078] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. 
A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing components, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0079] The methods or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in the sending and/or receiving component, or elsewhere. In the alternative, the processor and the storage medium may reside as discrete components in the sending and/or receiving component, or elsewhere.

[0080] The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

[0081] It will be understood that the term "comprise" and any of its derivatives (e.g., comprises, comprising) as used in this specification is to be taken to be inclusive of features to which it refers, and is not meant to exclude the presence of any additional features unless otherwise stated or implied.

[0082] The reference to any prior art in this specification is not, and should not be taken as, an acknowledgement or any form of suggestion that such prior art forms part of the common general knowledge. |
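As a compact summary of the FIG. 9 example described in paragraphs [0074]-[0077] above, the following C sketch replays which address or payload chunk the sending device drives on which channel in each of the four clock cycles, and which receiving device the bus interconnect forwards it to. Only the cycle numbers, chunk names, and channel/device reference numerals come from the text; the data structures and printed output are invented here purely for illustration.

```c
#include <stdio.h>

/* Channels the sending device can drive toward the bus interconnect. */
enum channel { WRITE_ADDR = 0, READ_ADDR = 1, WRITE = 2 };
static const char *channel_name[] = { "write address 806a", "read address 806d", "write 806b" };

/* One transfer as seen at the sending device in a given clock cycle. */
struct transfer {
    int          cycle;   /* 902, 904, 906 or 908                 */
    enum channel ch;      /* channel used toward the interconnect */
    const char  *item;    /* address or payload chunk             */
    char         dest;    /* receiving device: 'a', 'b' or 'c'    */
};

/* Replay of FIG. 9 / paragraphs [0074]-[0077]. */
static const struct transfer schedule[] = {
    { 902, WRITE_ADDR, "A1",    'a' }, { 902, WRITE, "W1(1)", 'a' },
    { 904, WRITE_ADDR, "A2",    'b' }, { 904, WRITE, "W1(2)", 'a' }, { 904, READ_ADDR, "W2(1)", 'b' },
    { 906, WRITE_ADDR, "A3",    'c' }, { 906, WRITE, "W1(3)", 'a' }, { 906, READ_ADDR, "W2(2)", 'b' },
    { 908, WRITE_ADDR, "W3",    'c' }, { 908, WRITE, "W1(4)", 'a' }, { 908, READ_ADDR, "W2(3)", 'b' },
};

int main(void)
{
    for (unsigned i = 0; i < sizeof schedule / sizeof schedule[0]; i++) {
        const struct transfer *t = &schedule[i];
        printf("cycle %d: %-6s on %-18s -> receiving device 804%c\n",
               t->cycle, t->item, channel_name[t->ch], t->dest);
    }
    return 0;
}
```

Note that W3 in cycle 908 travels as payload on the write address channel, which is why the Address/Data signal is deasserted for that transfer in the text above.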
Provided are a method, system, and program. An initial configuration is maintained assigning multiple local interfaces to one initial local address. For each local interface, a remote address of a remote interface on at least one remote device to which the local interface connects is received. The initial local address is used to identify the local interfaces assigned to the initial local address in response to receiving a same remote address for each remote interface connected to the local interfaces assigned the initial local address. |
1.A method for address allocation of an adapter interface includes:Maintaining the initial configuration by assigning multiple local interfaces to an initial local port address of the port to which the local interface is assigned as part of the initial configuration;For each local interface, receive the remote address of the remote interface on at least one remote device connected to the local interface;In response to receiving the same remote address for each remote interface connected to the local interface to which the initial local port address is assigned, use the initial local port address to identify the local interface assigned to the initial local port address;In response to receiving multiple remote addresses from a remote interface connected to the local interface, generating at least one identifier, wherein the generating at least one identifier includes generating a different identifier for each different remote address received; andIn response to generating the at least one identifier, a different identifier is assigned to the local interface that was previously assigned the initial local port address.2.The method of claim 1, wherein each generated identifier includes an additional port address, the method further comprising:For each additional port address generated, configure an additional port in the device; andA local interface is assigned to the port, and the port includes the additional port and the port having the initial local port address.3.The method of claim 2, wherein the local interface assigned to one port is connected to a remote interface having the same remote address.4.The method of claim 1, wherein the received remote address is received as part of an identification sequence, the method further comprising:Sending the initial local port address to a remote interface connected to the local interface.5.The method of claim 4, wherein the identifier assigned to the local interface including at least one generated identifier includes a local port address, and the method further includes:In response to generating the at least one local port address, initiating an additional identification sequence; andIn response to the additional identification sequence, a local port address identifying the local interface is sent to the connected remote interface.6.The method of claim 1, wherein the at least one remote device and a local device including the local interface implement a SAS architecture, wherein the local port address and the remote address include a SAS address, and wherein the local And remote interface including PHY.7.The method of claim 1, wherein the remote interfaces with different remote addresses are located on different remote devices.8.The method of claim 1, wherein the combination of the identifier and the initial local port address is used to identify the assigned local interface.9.The method of claim 8, wherein the at least one identifier includes a domain, wherein the local interface remains allocated to the port having the initial local port address.10.The method of claim 8, wherein the remote interfaces with different remote addresses are located on different remote devices, wherein the combination of each of the plurality of identifiers and the default local port address identifies the local A local interface within the device, and wherein the initial local port address identifies the local interface within the remote device.11.The method of claim 8, wherein the plurality of identifiers include domains, and the method further includes:For each 
received remote address, a different domain is generated in the local device that contains the local interface connected to the remote interface with the remote address.12.The method of claim 11, wherein the generated domain includes a domain in the initial configuration.13.A device for connecting a local interface in a local device to a connected remote interface in at least one remote device through an interface, the device includes:A device for maintaining the initial configuration of allocating a plurality of local interfaces to an initial local port address of a port to which the local interface is allocated as part of the initial configuration;A device for receiving a remote address of a remote interface connected to the local interface for each local interface;For responding to receiving the same remote address for each remote interface connected to the local interface assigned the initial local port address, using the initial local port address to identify the local interface assigned to the initial local port address DeviceMeans for generating at least one identifier in response to receiving a plurality of remote addresses from a remote interface connected to the local interface, wherein the generating at least one identifier includes generating a different identification for each different remote address received Operator; andMeans for assigning different identifiers to the local interface to which the initial local port address was previously assigned in response to generating the at least one identifier.14.The apparatus of claim 13, wherein each generated identifier includes an additional local port address, further including:Means for configuring an additional port in the device for each additional local port address generated; andA device for assigning a local interface to the port, the port including the additional port and the port having the initial local port address.15.The device of claim 13, wherein the local interface assigned to one port is connected to a remote interface having the same remote address.16.The device of claim 13, wherein at least one of the received remote addresses is received as part of an identification sequence, further comprising:A device for sending the initial local port address to a remote interface connected to the local interface.17.The device of claim 16, wherein the identifier assigned to the local interface including at least one generated identifier includes a local port address, and further includes:Means for initiating an additional identification sequence in response to generating at least one said local port address; andMeans for sending a local port address identifying the local interface to the connected remote interface in response to the additional identification sequence.18.The device according to claim 13, wherein the at least one remote device and the local device including the local interface implement a SAS architecture, wherein the local port address and the remote address include a SAS address, and wherein, Local and remote interfaces include PHY.19.The device of claim 13, wherein the remote interfaces with different remote addresses are located on different remote devices.20.The apparatus of claim 13, wherein the combination of the identifier and the initial local port address is used to identify the assigned local interface.21.The apparatus of claim 20, wherein the at least one identifier includes a domain, wherein the local interface still remains assigned to the port having the initial local port address.22.The device 
of claim 20, wherein the remote interfaces having different remote addresses are located on different remote devices, wherein the combination of each of the plurality of identifiers and the initial local port address is identified A local interface within the local device, and wherein the initial local port address identifies the local interface within the remote device.23.The device of claim 20, wherein the at least one identifier includes a domain and further includes:Means for generating different domains for each received remote address in a local device that includes a local interface connected to the remote interface with the remote address.24.The apparatus of claim 23, wherein the generated domain includes a domain in the initial configuration. |
Adapter interface address assignmentbackground1.fieldThis embodiment involves address allocation to the adapter interface.2.Description of related technologiesAn adapter or multi-channel protocol controller enables devices coupled to the adapter to communicate with one or more connected end devices via physical cables or wires according to a storage interconnect architecture, also known as a hardware interface, where the storage interconnect architecture defines communication And standard methods for identifying such communications, such as Serial Attached Small Computer System Interface (SCSI) (SAS), Serial Advanced Technology Attachment (SATA), and so on. These storage interconnect architectures allow the device to maintain one or more connections, such as a direct point-to-point connection to the final device or a connection through one or more expanders. Devices can also be interconnected via switches, expanders, Fibre Channel arbitrated loops, optical fibers, and so on. In the SAS / SATA architecture, the SAS port is composed of one or more SAS PHYs, where each SAS PHY interface is to the physical layer (that is, the physical interface or connection) and the SAS link layer containing multiple protocol link layers. Communication from the SAS PHY in a port is handled by the transport layer used for that port. There is a transport layer for each SAS port for interfacing with each type of application layer supported by the port. "PHY" as defined in the SAS protocol is a device object used to interface to other devices and physical interfaces. Further details about the SAS architecture of devices and expanders are in the reference numbers issued by ANSI: ISO / IEC 14776-150: 200x and ANSI INCITS. ***: 200x PHY layer (SO / IEC 14776-150: 200x and ANSI INCITS. * **: 200xPHY layer) (July 9, 2003) is described in the technical specification "Information-Serial Attached SCSI (SAS) (Information Technology-Serial Attached SCSI (SAS))"; details about the Fibre Channel architecture It is described in the technical specification "Fibre Channel and Signaling Interface" of the document number ISO / IEC AWI 14165-25; details about the SATA architecture are in the technical specification "Serial: ATA: High Speed Serialized AT AT Attachment (Serial ATA: High-speed serialized AT connection) "is described in version 1.0A (January 2003).Within the adapter, the PHY layer may include a parallel-to-serial converter for performing serial-to-parallel conversion of data so that parallel data is sent to the layers above the PHY layer, and serial data is sent from the PHY to the receiver via the physical interface The PHY layer of the device. In the SAS specification, there is a set of link layers for each SAS PHY layer, so that each link layer protocol engine is efficiently coupled to the parallel-to-serial converter in the PHY layer. The physical interfaces of the PHYs on different devices can be connected via cables or via etching paths on the circuit board that communicate the circuit board paths.As mentioned above, the port contains one or more PHYs. The ports in the device are associated with the physical PHY based on the configuration that occurs during the identification sequence. 
For those PHYs in the device that are configured to use the same SAS address in the SAS domain during the identification sequence, assign one or more PHYs in the device to the port, where the PHYs with the same SAS address in one port on the device are connected to The PHY of the same SAS address in the SAS domain is also used on the remote device. Wide ports have multiple interfaces or PHYs, while narrow ports have only one PHY. A wide link includes a set of physical links connecting the PHY of a wide port to a corresponding PHY in a corresponding remote wide port, and a narrow link is a physical link that attaches a narrow port to a corresponding remote narrow port. For further details on the SAS architecture, refer to the ISO / IEC 14776-150: 200x and ANSI INCITS. ***: 200x PHY layers (SO / IEC 14776-150: 200x and ANSI INCITS. ***: 200x PHY layers issued by ANSI ) (July 9, 2003) is described in the technical specification "Information-Serial Attached SCSI (SAS) (Information Technology-Serial Attached SCSI (SAS))".Brief description of the drawingsReferring now to the drawings, the same reference numerals in the drawings indicate corresponding parts in all drawings:1 and 2 show a system and an adapter according to various embodiments;Figures 3, 5a, 5b and 7 show how devices can be connected according to various embodiments; and4 and 6 illustrate operations to perform an identification sequence between connected devices according to various embodiments.Detailed DescriptionIn the following description, reference is made to the drawings that form part of the invention and show several embodiments. It is understood that other embodiments may be utilized and structural or operational changes may be made.Figure 1 illustrates a computing environment in which various embodiments may be implemented. The host system 2 includes one or more central processing units (CPUs) 4 (only one shown), volatile memory 6, non-volatile storage 8, operating system 10, and adapters 12a, 12b. The adapter includes and includes final Physical interfaces for remote devices such as devices, switches, expanders, storage devices, and servers. The application 16 is also executed in the memory 6, which can send and receive transmissions via one of the adapters 12a, 12b. The host 2 may include any computing device known in the art, such as a mainframe, server, personal computer, workstation, laptop computer, handheld computer, telephone device, network device, virtualization device, storage controller, and the like. Various CPUs 4 and operating systems 10 known in the art may be used. The programs and data in the memory 6 can be swapped into the storage 8 as part of the memory management operation.The operating system 10 may load device drivers 20a and 20b for each storage interface supported in the adapter 12, to allow communication with devices that use the same supported storage interface to communicate, and also load such as peripheral component interconnect (PCI) A bus interface 24 such as an interface allows communication with the bus 26. Further details of the PCI interface are described in the publication "PCL Local Bus, Rev. 2.3 (PCL Local Bus, Version 2.3)" published by PCI-SIG. The operating system 10 may load the device drivers 20a, 20b supported by the adapters 12a, 12b after detecting the presence of the adapters 12a, 12b, which may occur during initialization or dynamically. In the embodiment of FIG. 1, the operating system 10 loads three device drivers 20a and 20b. 
For example, the device drivers 20a and 20b may support SAS and SATA storage interfaces, that is, interconnect architectures. More or fewer device drivers may be loaded based on the number of storage interfaces supported by the adapters 12a and 12b.FIG. 2 shows an embodiment of the adapter 12, which may include adapters 12a, 12b. Each adapter includes one or more ports 30, where each port 30 includes a port layer 32 that interfaces with one or more SAS PHYs 34. Each PHY includes a SAS link layer 36 containing one or more protocol link layers. Figure 2 shows three protocol link layers, including a serial SCSI protocol (SSP) link layer 38a, a serial tunneling protocol (STP) layer 38b, and a serial management protocol (SMP) layer 38c that process SSP frames. Furthermore, they interface with their respective transport layers, namely, SSP transport layer 40a, STP transport layer 40b, and SMP transport layer 40c via port layer 32. These layers may be implemented as program components executed from memory and / or implemented in hardware.Each PHY 34 of the port 30 also includes a SAS PHY layer 42 and a physical layer 44. The physical layer 44 includes a physical interface, which includes transmitter and receiver circuits, paths, and connectors. As shown, the physical layer 44 is coupled to the PHY layer 42, where the PHY layer 42 specifies an encoding scheme and timing mechanism such as 8b10b to convert bits. The PHY layers 32a, 32b ... 32n may include a serial-parallel converter that performs serial-parallel conversion and a phase-locked loop (PLL) that tracks incoming data, and provides the serial clock with the data clock of the incoming data To be used when performing conversions. The data is received in a serial format at the adapter 12 and converted into a parallel format at the SAS PHY layers 32a, 32b ... 32n for transmission within the adapter 12. The SAS PHY layer 42 also specifies error detection, bit shifting and amplitude reduction, and out-of-band (OOB) signaling to establish an operational link with another SAS PHY in another device and send data to the outside of the adapter 12 Speed negotiation of PHY in the device, etc.In the embodiment of FIG. 2, there is one protocol transport layer 40a, 40b, and 40c that interfaces with each type of application layer 48a, 48b, 48c in the application layer 50. The application layer 50 may be supported in the adapter 12 or the host system 2 and provide network services to end users. For example, the SSP transport layer 46a interfaces with the SCSI application layer 48a, the STP transport layer 46c interfaces with the Advanced Technology Attachment (ATA) application layer 48b, and the SMP transport layer 46d interfaces with the management application layer 48c. Details about the physical layer, PHY layer, link layer, port layer, transport layer and application layer, and the components that implement these layers described here can be found in the technical specification "Information Technology-Serial Attached SCSI (SAS) (Information Technology-Serial Attached to SCSI (SAS)). Further details about ATA technology are in the publication "Information-Technology-ATAttachment with Packet Interface-6 (ATA / ATAPI-6) (Information Technology-AT Attachment-6 with Packet Interface (ATA / ATAPI-6))", It is described in the reference number ANSI INCITS 261-2002 (September 2002).Each port 30 has a unique SAS address on the adapter 12, and each PHY 34 in the port has a unique identifier in the adapter 12 for management functions and routing. 
The adapter 12 may also have one or more unique domain addresses, where different ports in the adapter 12 may be organized into different domains or devices. The PHY address of the PHY may include the SAS address of the port to which the PHY is assigned, and the port SAS address is used to identify and address the PHY to an external device in the SAS domain.FIG. 3 shows an example of how devices 100 and 102 can interface, where device 100 has 8 PHYs 104a, 104b ... 104j linked to 8 PHYs 106a, 106b ... 160j at device 104, respectively. The devices 100 and 102 may include a host, an expander, a storage device, a server, etc., where these devices may implement the architecture described with reference to FIG. 2. These devices 100 and 102 may have an initial address configuration for their PHY, where the PHY may share the same port address, and may be located in the same domain. The initial address configuration of the PHY in the device is based on the user's configuration selection.FIG. 4 illustrates operations implemented in devices implementing the architecture of FIG. 2 such as adapter 12 devices 100 and 102 to perform the identification sequence and configure the PHY within the port. During the identification sequence, the device is informed of the address of a remote interface (eg, remote PHY) connected to the device's local interface (eg, local PHY). The identification sequence operation in FIG. 4 may be programmed in the adapter 12, the port layer 32 of the devices 100, 102, or performed by the device drivers 20a and 20b of the adapter 12. When the identification sequence begins (at block 150) after a reset or start sequence at a device such as 100, a loop is performed at blocks 152 to 170 for each port j provided in the initial or default configuration maintained at the device such as 100. For each initial port j, a loop is performed at blocks 154 to 160 for each PHYi assigned to port j in the initial configuration. At block 156, the device such as 100 sends identifying address information containing the SAS address of the PHYi that is the SAS address of port j to the attached PHY such as 106a, 106b ... 106h in the remote device 102. PHYi also receives (at block 158) identification address information from the PHY attached to PHYi. The device 100 may receive the identification information from the remote device 102 before sending the identification information, and vice versa. When the PHY sends and receives the identification information, the identification of the PHY is completed. In addition, if the device 100 does not receive identification information about the attached device PHY, then a timeout will occur, where the entire link initialization process will be restarted. Control then returns to block 154 to send and receive the identification address information of the next PHY.After all the PHYs, such as 104a, 104b ... 104h, have received the identification address information from the attached PHYs, such as 106a, 106b ... 106h, (at block 162) a decision is made as to whether all PHYs, For example, 104a, 104b ... 104h all received the same SAS address judgment from the PHY to which they are connected. If so, a wide port is formed for all PHYs that originally allocated to port j, such as 104a, 104b ... 104h, so that all PHYs are configured to use the SAS address of the initial port j. The common SAS addresses of all remote PHYs, such as 106a, 106b ... 106h, are then associated with the SAS addresses of the local PHY, such as the common port j of 104a, 104b ... 
104h, for use during operation. If (at block 162) the SAS addresses of the remote PHYs 106a, 106b ... 106h are not the same, then for each unique remote SAS address k received, connect to the local PHY of the remote SAS address k, eg 104a, 104b ... 104h is assigned (at block 168) to the newly configured port with the new unique port SAS address. If the connected remote PHY is located in a different remote device, the new unique SAS address of the local PHY may not be the same. In some embodiments, the new unique port SAS address may be different from the initial SAS address configured for the port, or a port SAS address may be the same as the initial SAS address, and other additional new ones connected to different remote devices The SAS address can be unique. From block 166 or 168, control (at block 170) returns to block 152 to consider any other ports in the initial configuration. After considering all the ports in the initial configuration, if (at block 172) a new port and SAS address are configured, control returns to block 150 to perform a second instance of the initialization process using the new allocation of PHY to port address .Local and remote PHYs include local and remote interfaces at local and remote devices, respectively. An interface is a physical or logical component connected to another interface on the same or different devices. The term interface may include interfaces other than PHY interfaces. A wide port includes a port assigned with multiple interfaces, and one or more interfaces can be assigned to a port. A local address such as a local SAS address includes an address or identifier assigned to one or more interfaces, and a remote address such as a remote SAS address includes one of remote devices assigned to another interface such as one of the local interfaces Or addresses or identifiers of multiple interfaces.Using the operation of FIG. 4, the port is configured to include the maximum number of PHYs in each new port, where the PHY in each new port will be connected to the PHY with the same SAS address in the connected adapter. In addition, if the PHY in the initial port configuration is not connected to a PHY with the same PHY address, configure the new port with a new SAS address to provide a new port so that the PHY assigned to the new port is connected to PHY with the same SAS address. In addition, after the reconfiguration of the port, the identification sequence is executed again to perform the configuration using the new port configuration.FIG. 5a shows an embodiment in which the PHY in device 180 is configured to have a SAS address "x", which is connected to the PHY in three different devices 182, 184, and 186, each of which has a different SAS addresses "A", "B" and "C". Performing the operation of FIG. 4 within a device having the configuration of FIG. 5a results in the configuration shown in FIG. 5b, where the adapter 180 is configured to use three SAS addresses XA, XB, and XC to communicate with PHY communication. Each of the SAS addresses XA, XB, and XC may include addresses of different ports.Figure 6 shows an alternative embodiment of the operation of performing the identification sequence and establishing the port configuration. Figure 6 includes many of the same operations in Figure 4 with the following exceptions. When it is determined (at block 212) that the connected PHY does not return the same address to port j, instead of configuring a new port using a different SAS address as shown in FIG. 
4, at block 218, for each unique target received The SAS address k forms a different domain with a unique domain identifier in the device 180. Each PHY is then internally identified using both the SAS address and the newly configured domain identifier. When the domain designation is completed, the device such as 100 (Figure 3) does not perform the identification sequence again, but instead uses the domain identifier and SAS address to distinguish the PHYs with the same address connected to different devices. However, external devices 182, 184, 186 may use the same SAS address to address the local PHY.7 shows an embodiment obtained by performing the operation of FIG. 6 in a device having the configuration shown in FIG. 5a, where a device such as 100 is configured to use the same SAS address "X", but PHYs connected to different addresses are configured in different domains A, B, and C. Therefore, the device 250 uses a combination of the domain identifier and SAS address to distinguish its local PHY. Using the embodiment of FIG. 6, because there is no replacement for the default port configuration, unlike the second identification sequence performed at 172 in FIG. 4, the second identification sequence will not be performed. Use the same address "X" instead. Therefore, remote devices 182, 184, and 186 (FIG. 7) use the same SAS address to address different PHYs in device 180, and device 180 uses domain addresses A, B, and C in combination with port SAS address "X" to distinguish the local PHY device.The described embodiments provide techniques for assigning a PHY or interface to a port when the interface receives different SAS addresses from the attached PHY. The embodiment of FIG. 6 minimizes communication and coordination between the local and remote PHYs because the initial address configuration is used for interfaces that receive different addresses from the attached device, and the device assigns interfaces to different domains. Interfaces that connect to different addresses are internally distinguished.In some embodiments, configuration is performed to form a port with the largest possible bandwidth, that is, the maximum number of PHYs / connections. Maximizing the number of PHYs in a port maximizes the throughput of the port. In addition, maximizing the PHY maximizes the possibility of load balancing. Furthermore, maximizing the number of PHYs and connections at the port increases the number of alternative paths to the port, which minimizes I / O latency. Furthermore, maximizing the number of PHYs at a port provides redundant connections to allow continued operation in the event of one or more PHY failures.Other embodiment detailsThe described embodiments may be implemented as methods, devices, or articles using programming and / or engineering techniques to produce software, firmware, hardware, or any combination thereof. The terms "article" and "circuit" as used herein refer to state machines, codes or logic implemented in hardware logic (eg, integrated circuit chips, programmable gate arrays (PGA), application specific integrated circuits (ASIC), etc.), Or computer-readable media, such as magnetic storage media (eg, hard drives, floppy disks, magnetic tapes, etc.), optical storage (CD-ROM, optical disks, etc.), volatile and non-volatile memory devices (eg, EEPROM, ROM, PROM, RAM, DRAM, SRAM, firmware, programmable logic, etc.). The code in the computer-readable medium is accessed and executed by the processor. 
When code or logic is executed by a processor, the circuit may include a medium containing the code or logic and a processor that executes the code loaded from the medium. The code in which the preferred embodiment is implemented can also be accessed from a file server via a transmission medium or via a network. In such a case, the article in which the code is implemented may include a transmission medium, such as a network transmission line, a wireless transmission medium, signal propagation through space, radio waves, infrared signals, and the like. Therefore, "article of manufacture" may include the medium in which the code is embodied. In addition, "article of manufacture" may include a combination of hardware and software in which code is embodied, processed, and executed. Of course, those skilled in the art can recognize that various modifications can be made to the configuration, and the article of manufacture can include any information-carrying medium known in the art. In addition, the device, adapter, etc. can be implemented with one or more integrated circuits on the adapter or motherboard.In the described embodiment, the physical interface is represented by the PHY, thereby providing an interface between the physical connection and other layers within the adapter. In other embodiments, the interface representing the physical connection may be implemented using a structure other than PHY.The described embodiment uses a SAS architecture. In alternative embodiments, the techniques used to assign physical connections to ports can be applied to other storage interfaces.In the described embodiment, certain operations are described with reference to layers within the device / adapter architecture. In alternative embodiments, functions described as being performed by one layer may be performed in another layer.In the described embodiment, the transmission is received at the device from the remote device via a connection. In alternative embodiments, the transmitted and received information processed by the transport protocol layer or device driver may be received from a separate process executing in the same computer in which the device driver and the transport protocol driver execute.In some embodiments, device driver and network adapter embodiments may be included in a computer system that includes a storage controller, such as a SCSI, Redundant Array of Independent Disks (RAID) controller, which manages Access to non-volatile storage devices such as drives, tape media, optical disks, etc. In alternative implementations, network adapter embodiments may be included in systems that do not include storage controllers, such as certain hubs and switches.In the described embodiment, the storage interfaces supported by the adapter include SATA and SAS. In other embodiments, other storage interfaces may be supported. In addition, the adapter is described as supporting certain transport protocols, such as SSP, STP, and SMP. In other implementations, the adapter may support other transport protocols for sending using supported storage interfaces. Supported storage interfaces can send data at the same link speed or different non-overlapping link speeds. In addition, when different supported storage interconnect architectures use different physical configurations, the physical interfaces may have different physical configurations, that is, the arrangement and number of pins and other physical interconnects.The operations shown in FIGS. 4 and 6 show certain events that occur in a certain order. 
In alternative embodiments, certain operations can be performed, modified, or removed in a different order. Moreover, operations can be added to the above operations while still following the described embodiments. In addition, the operations described herein may occur sequentially, or some operations may be processed in parallel. Furthermore, operations may be performed by a single processing unit or by distributed processing units.The adapters 12a, 12b can be implemented as a network card, such as a peripheral component interconnect (PCI) card or some other I / O card, or on an integrated circuit component mounted on the system motherboard or backplane.The foregoing description of the various embodiments has been presented for purposes of illustration and description. In view of the above teachings, various modifications and changes are possible. |
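The port-formation logic of FIG. 4 and the domain-identifier variant of FIG. 6 described above can be summarized in a small illustrative model. The C sketch below is a hypothetical software rendering, not the patent's implementation: it assumes each local PHY already knows the SAS address reported by its attached remote PHY, groups PHYs that received the same remote address into one wide port (FIG. 4), and also records a per-remote-address domain identifier of the kind that could be combined with the initial port address (FIG. 6). All type and function names are invented, and the 3/2/3 split of PHYs across remote devices is an assumed example.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_PHYS 8

/* Simplified view of one local PHY after the identification sequence: the
 * 64-bit SAS address reported by the remote PHY it is attached to. */
struct local_phy {
    uint64_t remote_sas_addr;
    int      port_id;    /* filled in by the grouping below (FIG. 4 style) */
    int      domain_id;  /* FIG. 6 variant: internal domain identifier     */
};

/* FIG. 4 style grouping: PHYs whose attached remote PHYs report the same SAS
 * address are gathered into the same (wide) port; each distinct remote address
 * gets its own port.  Returns the number of ports formed. */
static int group_phys_into_ports(struct local_phy phy[], int nphys)
{
    uint64_t seen[MAX_PHYS];
    int nports = 0;

    for (int i = 0; i < nphys; i++) {
        int p = -1;
        for (int j = 0; j < nports; j++)
            if (seen[j] == phy[i].remote_sas_addr) { p = j; break; }
        if (p < 0) { seen[nports] = phy[i].remote_sas_addr; p = nports++; }
        phy[i].port_id = p;
        /* FIG. 6 variant: keep the initial port address externally and record a
         * per-remote-address domain identifier used only inside the device. */
        phy[i].domain_id = p;
    }
    return nports; /* if > 1, the FIG. 4 flow re-runs the identification sequence */
}

int main(void)
{
    /* Assumed FIG. 5a-like arrangement: 8 PHYs attached to remote devices A, B, C. */
    struct local_phy phy[MAX_PHYS] = {
        { 0xA }, { 0xA }, { 0xA }, { 0xB }, { 0xB }, { 0xC }, { 0xC }, { 0xC }
    };
    int nports = group_phys_into_ports(phy, MAX_PHYS);
    printf("formed %d port(s)\n", nports);
    for (int i = 0; i < MAX_PHYS; i++)
        printf("PHY %d -> port %d (domain %d)\n", i, phy[i].port_id, phy[i].domain_id);
    return 0;
}
```

Run on this assumed arrangement, the sketch forms three ports, which parallels the three SAS addresses XA, XB, and XC shown in FIG. 5b.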
Instructions and logic provide SIMD vector population count functionality. Some embodiments store in each data field of a portion of n data fields of a vector register or memory vector, a plurality of bits of data. In a processor, a SIMD instruction for a vector population count is executed, such that for that portion of the n data fields in the vector register or memory vector, the occurrences of binary values equal to each of a first one or more predetermined binary values, are counted and the counted occurrences are stored, in a portion of a destination register corresponding to the portion of the n data fields in the vector register or memory vector, as a first one or more counts corresponding to the first one or more predetermined binary values. |
THE CLAIMSWhat is claimed is:1. A processor comprising:a storage to store a first source vector portion comprising a first plurality of packed data fields, wherein each of the first plurality of packed data fields in the first portion of the source vector is to store a second plurality of bits comprising four or more bits;a destination register portion, corresponding to the first source vector portion, to store one or more counts of occurrences, in the corresponding portion of the source vector, of a corresponding one or more predetermined binary values;a decode stage to decode a first instruction specifying a vector population count operation and a packed data field size; andone or more execution units, responsive to the decoded first instruction, to:read the second plurality of bits of each of the packed data fields in the first portion of the source vector;for the first plurality of data fields in the first portion of the source vector, count the occurrences of binary values equal to said one or more predetermined binary values, and store the counted occurrences, in the portion of the destination register corresponding to the first source vector portion, as said one or more counts corresponding to the one or more predetermined binary values.2. The processor of claim 1, wherein the first portion of the source vector is 32-bits.3. The processor of claim 1, wherein the first portion of the source vector is 64-bits. 4. The processor of claim 1, wherein said storage to store the first portion of the source vector is a 32-bit register.5. The processor of claim 1, wherein said storage to store the first portion of the source vector is a cached memory location.6. The processor of claim 1, wherein said storage to store the first portion of the source vector is a 32-bit element of a vector register.7. The processor of claim 1, wherein said destination register portion is a 32-bit register.8. The processor of claim 1, wherein said destination register portion is a 32-bit portion of a 64 bit register.9. The processor of claim 1, wherein said destination register portion is a 32-bit element of a 128-bit vector register.10. The processor of claim 1, wherein said destination register portion is a 64-bit register.11. The processor of claim 1, wherein the second plurality of bits is 4-bits.12. The processor of claim 1, wherein the second plurality of bits is 8-bits.13. The processor of claim 1, wherein the packed data field size is 8-bits.14. The processor of claim 1, wherein said one or more predetermined binary values arespecified by the first instruction as an immediate operand.15. The processor of claim 1, wherein said one or more predetermined binary values arespecified by the first instruction as one or more elements in a register operand.16. The processor of claim 1, further comprising:one or more execution units, responsive to the decoded first instruction, to:read the second plurality of bits of each of the packed data fields in a second portion of the source vector;for a same first plurality of data fields in the second portion of the source vector, count the occurrences of binary values equal to a second one or more predetermined binary values, andstore the counted occurrences, in a portion of the destination registercorresponding to the second source vector portion, as a second one or more counts corresponding to the second one or more predetermined binary values.17. 
The processor of claim 16, wherein said storage to store the first portion of the source vector also stores the second portion of the source vector as 32-bit elements of a vector register.18. The processor of claim 16, wherein said portion of the destination register corresponding to the second source vector portion is a 32-bit element of a vector register.19. The processor of claim 16, wherein said second one or more predetermined binary values are specified by the first instruction as one or more elements in a portion of a register operand corresponding to the second source vector portion.20. The processor of claim 16, wherein said second one or more predetermined binary values are specified by the first instruction as a 32-bit element of a vector register operandcorresponding to the second source vector portion.21. The processor of claim 16, wherein said second one or more predetermined binary values are specified by the first instruction as an immediate operand.22. A method comprising:storing in each of a first portion of a plurality of n data fields of a first vector register, a second plurality of bits comprising four or more bits;executing, in a processor, a SIMD instruction for a vector population count; and for the first portion of the plurality of n data fields in the first vector register, counting the occurrences of binary values equal to each of a first one or more predetermined binary values, andstoring the counted occurrences, in a portion of a destination register corresponding to the first portion of the plurality of n data fields in the first vector register, as a first one or more counts corresponding to the first one or more predetermined binary values.23. The method of claim 22, wherein said second one or more predetermined binary values are specified by the first instruction as an immediate operand.24. The method of claim 22, further comprising:storing in each of a second portion of a plurality of n data fields of the first vector register, the second plurality of bits; and for the second portion of the plurality of n data fields in the first vector register, counting the occurrences of binary values equal to each of a second one or more predetermined binary values, andstoring the counted occurrences, in a portion of the destination register corresponding to the second portion of the plurality of n data fields in the first vector register, as a second one or more counts corresponding to the second one or more predetermined binary values.25. The method of claim 24, wherein said portion of the destination register corresponding to the second portion of a plurality of n data fields of the first vector register is a 32-bit element of the destination register.26. 
A processing system comprising:a memory; anda plurality of processors each processor comprising:a storage to store a first source vector portion comprising a first plurality of packed data fields, wherein each of the first plurality of packed data fields in the first portion of the source vector is to store a second plurality of bits comprising four or more bits;a destination register portion, corresponding to the first source vector portion, to store one or more counts of occurrences, in the corresponding portion of the source vector, of a corresponding one or more predetermined binary values;a decode stage to decode a first instruction specifying a vector population count operation and a packed data field size; andone or more execution units, responsive to the decoded first instruction, to:read the second plurality of bits of each of the packed data fields in the first portion of the source vector;for the first plurality of data fields in the first portion of the source vector, count the occurrences of binary values equal to said one or more predetermined binary values, and store the counted occurrences, in the portion of the destination register corresponding to the first source vector portion, as said one or more counts corresponding to the one or more predetermined binary values.27. The processing system of claim 26, wherein the second plurality of bits is 4-bits.28. The processing system of claim 26, wherein the second plurality of bits is 8-bits.29. The processing system of claim 26, each processor further comprising:one or more execution units, responsive to the decoded first instruction, to:read the second plurality of bits of each of the packed data fields in a second portion of the source vector;for a same first plurality of data fields in the second portion of the source vector, count the occurrences of binary values equal to a second one or more predetermined binary values, andstore the counted occurrences, in a portion of the destination registercorresponding to the second source vector portion, as a second one or more counts corresponding to the second one or more predetermined binary values.30. The processing system of claim 29, wherein said storage to store the first portion of the source vector also stores the second portion of the source vector as 32-bit elements of a vector register.31. The processing system of claim 29, wherein said portion of the destination registercorresponding to the second source vector portion is a 32-bit element of a vector register.32. The processing system of claim 29, wherein said second one or more predetermined binary values are specified by the first instruction as one or more elements in a portion of a register operand corresponding to the second source vector portion.33. The processing system of claim 29, wherein said second one or more predetermined binary values are specified by the first instruction as an immediate operand. |
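One way to read the counting behaviour recited in claims 22 and 26 above is as a per-portion histogram of packed sub-fields. The C function below is only a scalar reference sketch under assumptions of convenience: 2-bit sub-fields (as in the packedDna motivation), a 32-bit source-vector portion, and byte-wide counts packed into a 32-bit destination portion. The actual instruction's field sizes, predetermined values, and destination layout are specified by the instruction encoding and its operands, so this is not a definitive model of the instruction.

```c
#include <stdint.h>

/* Scalar reference sketch (not the actual instruction): count, within one
 * 32-bit source-vector portion packed with 2-bit data fields, how many fields
 * equal each of the four possible 2-bit values, and return the counts packed
 * into a 32-bit destination portion as four 8-bit counts. */
static uint32_t vpopcnt2_portion(uint32_t src)
{
    uint8_t count[4] = { 0, 0, 0, 0 };

    for (int field = 0; field < 16; field++) {          /* 16 two-bit fields */
        unsigned value = (src >> (2 * field)) & 0x3;
        count[value]++;
    }
    /* Pack the counts for values 0..3 into one byte each of the destination. */
    return (uint32_t)count[0]
         | ((uint32_t)count[1] << 8)
         | ((uint32_t)count[2] << 16)
         | ((uint32_t)count[3] << 24);
}

int main(void)
{
    /* 0x1B1B1B1B packs "TCAG" four times: four occurrences of each 2-bit value. */
    return vpopcnt2_portion(0x1B1B1B1Bu) == 0x04040404u ? 0 : 1;
}
```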
METHODS, APPARATUS, INSTRUCTIONS AND LOGIC TO PROVIDE VECTOR POPULATION COUNT FUNCTIONALITY
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application claims the benefit of U.S. Non-Provisional Application No. 13/960,769 (Attorney Docket No. P59592) filed Aug. 6, 2013. Said Application No. 13/960,769 is hereby incorporated herein in its entirety.
FIELD OF THE DISCLOSURE
The present disclosure pertains to the field of processing logic, microprocessors, and associated instruction set architecture that, when executed by the processor or other processing logic, perform logical, mathematical, or other functional operations. In particular, the disclosure relates to instructions and logic to provide population count functionality.
BACKGROUND OF THE DISCLOSURE
The human genome represents a significant amount of information, and storing such large quantities of information usually involves representing the four base nucleotides, thymine, cytosine, adenine and guanine (T, C, A, G) as bit pairs. There are about 3 billion base pairs in the human genome, and at two bits per base (four choices), the human genome has about 6 billion bits or about 750 MB of information (storing one copy of each chromosome). In practice, however, it is more common to represent each base nucleotide of the base pair with two bits, which requires about 1.4 GB of information. One format for storing sequences is known as "packedDna." The DNA, or deoxyribonucleic acid, packed as two bits per base, is represented as binary 2-bit values: T = 00, C = 01, A = 10, G = 11. The first base is in the most significant 2 bits of a byte; the last base is in the least significant 2 bits. For example, the sequence TCAG is represented as 00011011 in binary (hexadecimal 0x1B). Similar compression schemes are also employed in some other databases, data mining applications, and search applications.
A common operation in genome alignment is to count the occurrences of nucleotides within a string in order to match or partially match base-pair strings. With a packed data format (such as packedDna) the techniques may involve the use of look-up tables, together with shift and mask operations, and/or bitwise population counts together with logical operations in order to count the different nucleotide occurrences within a string.
Modern processors often include instructions to provide operations that are computationally intensive, but offer a high level of data parallelism that can be exploited through an efficient implementation using various data storage devices, such as, for example, single-instruction multiple-data (SIMD) vector registers. In SIMD execution, a single instruction operates on multiple data elements concurrently or simultaneously. This is typically implemented by extending the width of various resources such as registers and arithmetic logic units (ALUs), allowing them to hold or operate on multiple data elements, respectively.
The central processing unit (CPU) may provide such parallel hardware to support the SIMD processing of vectors. A vector is a data structure that holds a number of consecutive data elements. A vector register of size L may contain N vector elements of size M, where N=L/M.
For instance, a 64-byte vector register may be partitioned into (a) 64 vector elements, with each element holding a data item that occupies 1 byte, (b) 32 vector elements to hold data items that occupy 2 bytes (or one "word") each, (c) 16 vector elements to hold data items that occupy 4 bytes (or one "doubleword") each, or (d) 8 vector elements to hold data items that occupy 8 bytes (or one "quadword") each. On the other hand, some applications may store and operate on packed sub-byte data elements where a register or portion of a register of size k bits may contain n vector elements of size m, where n=k/m. For instance, a 64-bit register or portion of a register may be partitioned into (e) 64 packed elements, with each element holding a data item that occupies 1 bit, (f) 32 packed elements to hold data items that occupy 2 bits each, or (g) 16 packed elements to hold data items that occupy 4 bits (or one "nibble") each. A 32-bit register or portion of a register may be partitioned into (h) 32 packed elements, with each element holding a data item that occupies 1 bit, (i) 16 packed elements to hold data items that occupy 2 bits each, or (j) 8 packed elements to hold data items that occupy 4 bits each.
A number of applications have large amounts of data-level parallelism and may be able to benefit from SIMD support. However, some applications spend a significant amount of time in operations such as reformatting the data to take advantage of the SIMD parallelism. Some applications (e.g., genome sequencing and alignment, databases, data mining, and search applications) may have data elements that are smaller than 8 bits. To maintain SIMD efficiency, these sub-byte elements may need to be decompressed to each occupy one byte before being processed in parallel. As a result, such applications may see somewhat limited performance benefits from SIMD operations.
To date, potential solutions to such performance concerns and related processing difficulties have not been adequately explored.
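For illustration only, the packedDna layout and the sub-byte decompression step described above may be modeled by the following C sketch; the helper names (base_code, pack_dna, unpack_dna) are invented for this example and are not part of this disclosure or of any existing library.

    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    /* 2-bit codes used by the packedDna format: T = 00, C = 01, A = 10, G = 11. */
    static uint8_t base_code(char b) {
        switch (b) {
            case 'T': return 0x0;
            case 'C': return 0x1;
            case 'A': return 0x2;
            default:  return 0x3;   /* 'G' */
        }
    }

    /* Pack four bases per byte, first base in the most significant 2 bits. */
    static void pack_dna(const char *seq, size_t n_bases, uint8_t *out) {
        for (size_t i = 0; i < n_bases; i++) {
            unsigned shift = 6 - 2 * (i % 4);
            if (i % 4 == 0) {
                out[i / 4] = 0;
            }
            out[i / 4] |= (uint8_t)(base_code(seq[i]) << shift);
        }
    }

    /* The decompression step described above: expand each packed 2-bit base
     * so it occupies a full byte before byte-granular SIMD processing. */
    static void unpack_dna(const uint8_t *packed, size_t n_bases, uint8_t *out) {
        for (size_t i = 0; i < n_bases; i++) {
            unsigned shift = 6 - 2 * (i % 4);
            out[i] = (packed[i / 4] >> shift) & 0x3;
        }
    }

    int main(void) {
        uint8_t packed[1];
        uint8_t expanded[4];
        pack_dna("TCAG", 4, packed);
        unpack_dna(packed, 4, expanded);
        printf("TCAG -> 0x%02X (expect 0x1B)\n", (unsigned)packed[0]);
        printf("expanded: %u %u %u %u\n", (unsigned)expanded[0], (unsigned)expanded[1],
               (unsigned)expanded[2], (unsigned)expanded[3]);
        return 0;
    }

Note that expanding each 2-bit base to occupy a full byte quadruples the data volume, which is the reformatting overhead referred to above.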
BRIEF DESCRIPTION OF THE DRAWINGSThe present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings.Figure 1A is a block diagram of one embodiment of a system that executes instructions to provide SIMD vector population count functionality.Figure IB is a block diagram of another embodiment of a system that executes instructions to provide SIMD vector population count functionality.Figure 1C is a block diagram of another embodiment of a system that executesinstructions to provide SIMD vector population count functionality.Figure 2 is a block diagram of one embodiment of a processor that executes instructions to provide SIMD vector population count functionality.Figure 3A illustrates packed data types according to one embodiment.Figure 3B illustrates packed data types according to one embodiment.Figure 3C illustrates packed data types according to one embodiment.Figure 3D illustrates an instruction encoding to provide SIMD vector population count functionality according to one embodiment.Figure 3E illustrates an instruction encoding to provide SIMD vector population count functionality according to another embodiment.Figure 3F illustrates an instruction encoding to provide SIMD vector population count functionality according to another embodiment.Figure 3G illustrates an instruction encoding to provide SIMD vector population count functionality according to another embodiment.Figure 3H illustrates an instruction encoding to provide SIMD vector population count functionality according to another embodiment.Figure 4A illustrates elements of one embodiment of a processor micro-architecture to execute instructions that provide SIMD vector population count functionality.Figure 4B illustrates elements of another embodiment of a processor micro-architecture to execute instructions that provide SIMD vector population count functionality.Figure 5 is a block diagram of one embodiment of a processor to execute instructions that provide SIMD vector population count functionality.Figure 6 is a block diagram of one embodiment of a computer system to execute instructions that provide SIMD vector population count functionality.Figure 7 is a block diagram of another embodiment of a computer system to execute instructions that provide SIMD vector population count functionality. 
Figure 8 is a block diagram of another embodiment of a computer system to execute instructions that provide SIMD vector population count functionality.Figure 9 is a block diagram of one embodiment of a system-on-a-chip to execute instructions that provide SIMD vector population count functionality.Figure 10 is a block diagram of an embodiment of a processor to execute instructions that provide SIMD vector population count functionality.Figure 11 is a block diagram of one embodiment of an IP core development system that provides SIMD vector population count functionality.Figure 12 illustrates one embodiment of an architecture emulation system that provides SIMD vector population count functionality.Figure 13 illustrates one embodiment of a system to translate instructions that provide SIMD vector population count functionality.Figure 14 illustrates a diagram for one embodiment of an example of genome sequencing and alignment processing which can make use of an instruction to provide SIMD vector population count functionality.Figure 15A illustrates a flow diagram for one embodiment of an example of vector sub- byte decompression in preparation to use of an instruction to provide SIMD vector population count functionality.Figure 15B illustrates a flow diagram for an alternative embodiment of an example of vector sub-byte decompression in preparation to use of an instruction to provide SIMD vector population count functionality.Figure 16A illustrates an embodiment of an apparatus for executing an instruction to provide SIMD vector population count functionality.Figure 16B illustrates an alternative embodiment of an apparatus for executing an instruction to provide SIMD vector population count functionality.Figure 16C illustrates another alternative embodiment of an apparatus for executing an instruction to provide SIMD vector population count functionality.Figure 16D illustrates another alternative embodiment of an apparatus for executing an instruction to provide SIMD vector population count functionality.Figure 16E illustrates another alternative embodiment of an apparatus for executing an instruction to provide SIMD vector population count functionality.Figure 17A illustrates a flow diagram for one embodiment of an example process for executing an instruction to provide SIMD vector population count functionality. 
Figure 17B illustrates a flow diagram for an alternative embodiment of an example process for executing an instruction to provide SIMD vector population count functionality. Figure 17C illustrates a flow diagram for another alternative embodiment of an example process for executing an instruction to provide SIMD vector population count functionality. Figure 17D illustrates a flow diagram for another alternative embodiment of an example process for executing an instruction to provide SIMD vector population count functionality. Figure 18A illustrates a flow diagram for one embodiment of an example process for executing an instruction to provide SIMD vector population count functionality. Figure 18B illustrates a flow diagram for an alternative embodiment of an example process for executing an instruction to provide SIMD vector population count functionality. Figure 18C illustrates a flow diagram for another alternative embodiment of an example process for executing an instruction to provide SIMD vector population count functionality. Figure 18D illustrates a flow diagram for another alternative embodiment of an example process for executing an instruction to provide SIMD vector population count functionality.
DETAILED DESCRIPTION
The following description discloses instructions and processing logic to provide SIMD vector population count functionality within or in association with a processor, computer system, or other processing apparatus. Some embodiments include processors with a register or other storage media to store a source vector portion comprising a plurality of packed data fields, wherein each of the plurality of packed data fields in the portion of the source vector is to store at least four bits of data, and with a destination register portion, corresponding to the source vector portion, to store one or more counts of occurrences, in the corresponding portion of the source vector, of a corresponding one or more predetermined binary values. A processor decode stage decodes an instruction specifying the vector population count operation and the packed data field size. One or more processor execution units, responsive to the decoded instruction, read the bits of each of the packed data fields in the portion of the source vector. For the plurality of data fields in that portion of the source vector, a count of the occurrences of binary values equal to each of the one or more predetermined binary values is generated and the counted occurrences are stored, in the portion of the destination register corresponding to the source vector portion, as one or more counts for each of the corresponding one or more predetermined binary values.
Some embodiments store in each data field of a portion of n data fields of a vector register or memory vector, at least four bits of data. In a processor, a SIMD instruction for a vector population count is executed, such that for that portion of the n data fields in the vector register or memory vector, the occurrences of binary values equal to each of a first one or more predetermined binary values are counted and the counted occurrences are stored, in a portion of a destination register corresponding to the portion of the n data fields in the vector register or memory vector, as a first one or more counts corresponding to the first one or more predetermined binary values.
It will be appreciated that SIMD population count instructions may be used for genome sequencing and alignment processing.
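For illustration only, the operation just described, and its genome-alignment use, may be modeled by the following scalar C sketch. It is not the claimed hardware, an existing intrinsic, or a specific encoding; the function names, the 32-bit lane granularity, and the 512-bit chunk size in the usage function are assumptions made for this example.

    #include <stdint.h>
    #include <stddef.h>

    /* Scalar reference model of a vector population count over packed sub-byte
     * fields: for each 32-bit lane of the source vector, count the packed
     * fields (field_bits = 2 or 4, for example) whose value equals `value`,
     * and store that count in the matching 32-bit lane of the destination.
     * A SIMD implementation would produce all lane counts with one instruction. */
    static void vpopcnt_fields_model(const uint32_t *src, uint32_t *dst,
                                     size_t lanes, unsigned field_bits,
                                     uint32_t value) {
        uint32_t mask = (1u << field_bits) - 1u;
        for (size_t lane = 0; lane < lanes; lane++) {
            uint32_t count = 0;
            for (unsigned shift = 0; shift < 32; shift += field_bits) {
                if (((src[lane] >> shift) & mask) == value) {
                    count++;
                }
            }
            dst[lane] = count;
        }
    }

    /* Example use for genome alignment: count occurrences of 'A'
     * (packedDna code 10b) in one 512-bit chunk of packed bases, modeled as
     * sixteen 32-bit lanes of 2-bit fields, then sum the per-lane counts. */
    static uint32_t count_base_A(const uint32_t chunk[16]) {
        uint32_t lane_counts[16];
        uint32_t total = 0;
        vpopcnt_fields_model(chunk, lane_counts, 16, 2, 0x2);
        for (int i = 0; i < 16; i++) {
            total += lane_counts[i];
        }
        return total;
    }

Counting occurrences of several predetermined values at once, or taking those values from an immediate or register operand, would simply repeat the inner comparison once per value.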
Similar compression schemes are also employed more generally in other databases, data mining applications, and search applications, such that these applications may also use SIMD population count instructions.A common operation in genome alignment is to count the occurrences of nucleotides within a string in order to match or partially match base-pair strings. With a packed data format (such as packedDna) the techniques that might otherwise involve the use of look-up tables, together with shift and mask operations, and/or bitwise population counts together with logical operations in order to count the different nucleotide occurrences within a string, may use SIMD population count instructions instead. By using the SIMD population count instructions, many of the operations formerly required to count the different nucleotide occurrences within a string may be eliminated. Thus the performance of applications such as genome sequencing and alignment processing, and more generally for database applications, such as data mining, and search applications may be significantly improved.In the following description, numerous specific details such as processing logic, processor types, micro-architectural conditions, events, enablement mechanisms, and the like are set forth in order to provide a more thorough understanding of embodiments of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. Additionally, some well known structures, circuits, and the like have not been shown in detail to avoid unnecessarily obscuring embodiments of the present invention.Although the following embodiments are described with reference to a processor, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments of the present invention can be applied to other types of circuits or semiconductor devices that can benefit from higher pipeline throughput and improved performance. The teachings of embodiments of the present invention are applicable to any processor or machine that performs data manipulations. However, the present invention is not limited to processors or machines that perform 512 bit, 256 bit, 128 bit, 64 bit, 32 bit, or 16 bit data operations and can be applied to any processor and machine in which manipulation or management of data is performed. In addition, the following description provides examples, and the accompanying drawings show various examples for the purposes of illustration. However, these examples should not be construed in a limiting sense as they are merely intended to provide examples of embodiments of the present invention rather than to provide an exhaustive list of all possible implementations of embodiments of the present invention.Although the below examples describe instruction handling and distribution in the context of execution units and logic circuits, other embodiments of the present invention can be accomplished by way of data and/or instructions stored on a machine -readable, tangible medium, which when performed by a machine cause the machine to perform functions consistent with at least one embodiment of the invention. In one embodiment, functions associated with embodiments of the present invention are embodied in machine-executable instructions. 
The instructions can be used to cause a general-purpose or special-purpose processor that is programmed with the instructions to perform the steps of the present invention. Embodiments of the present invention may be provided as a computer program product or software which may include a machine or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform one or more operations according to embodiments of the present invention. Alternatively, steps of embodiments of the present invention might be performed by specific hardware components that contain fixed- function logic for performing the steps, or by any combination of programmed computer components and fixed-function hardware components.Instructions used to program logic to perform embodiments of the invention can be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage.Furthermore, the instructions can be distributed via a network or by way of other computer readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), but is not limited to, floppy diskettes, optical disks, Compact Disc, Read-Only Memory (CD-ROMs), and magneto- optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), ErasableProgrammable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine -readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer). A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage such as a disc may be the machine readable medium to store information transmitted via optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re -transmission of the electrical signal is performed, a new copy is made. 
Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present invention.
In modern processors, a number of different execution units are used to process and execute a variety of code and instructions. Not all instructions are created equal as some are quicker to complete while others can take a number of clock cycles to complete. The faster the throughput of instructions, the better the overall performance of the processor. Thus it would be advantageous to have as many instructions execute as fast as possible. However, there are certain instructions that have greater complexity and require more in terms of execution time and processor resources. For example, there are floating point instructions, load/store operations, data moves, etc.
As more computer systems are used in internet, text, and multimedia applications, additional processor support has been introduced over time. In one embodiment, an instruction set may be associated with one or more computer architectures, including data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O).
In one embodiment, the instruction set architecture (ISA) may be implemented by one or more micro-architectures, which include processor logic and circuits used to implement one or more instruction sets. Accordingly, processors with different micro-architectures can share at least a portion of a common instruction set. For example, Intel® Pentium 4 processors, Intel® Core™ processors, and processors from Advanced Micro Devices, Inc. of Sunnyvale, CA implement nearly identical versions of the x86 instruction set (with some extensions that have been added with newer versions), but have different internal designs. Similarly, processors designed by other processor development companies, such as ARM Holdings, Ltd., MIPS, or their licensees or adopters, may share at least a portion of a common instruction set, but may include different processor designs. For example, the same register architecture of the ISA may be implemented in different ways in different micro-architectures using new or well-known techniques, including dedicated physical registers, one or more dynamically allocated physical registers using a register renaming mechanism (e.g., the use of a Register Alias Table (RAT), a Reorder Buffer (ROB) and a retirement register file). In one embodiment, registers may include one or more registers, register architectures, register files, or other register sets that may or may not be addressable by a software programmer.
In one embodiment, an instruction may include one or more instruction formats. In one embodiment, an instruction format may indicate various fields (number of bits, location of bits, etc.) to specify, among other things, the operation to be performed and the operand(s) on which that operation is to be performed. Some instruction formats may be further broken down and defined by instruction templates (or sub-formats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields and/or defined to have a given field interpreted differently.
In one embodiment, an instruction is expressed using an instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and specifies or indicates the operation and the operands upon which the operation will operate.
Scientific, financial, auto-vectorized general purpose, RMS (recognition, mining, and synthesis), and visual and multimedia applications (e.g., 2D/3D graphics, image processing, video compression/decompression, voice recognition algorithms and audio manipulation) may require the same operation to be performed on a large number of data items. In one embodiment, Single Instruction Multiple Data (SIMD) refers to a type of instruction that causes a processor to perform an operation on multiple data elements. SIMD technology may be used in processors that can logically divide the bits in a register into a number of fixed-sized or variable-sized data elements, each of which represents a separate value. For example, in one embodiment, the bits in a 64-bit register may be organized as a source operand containing four separate 16-bit data elements, each of which represents a separate 16-bit value. This type of data may be referred to as a 'packed' data type or 'vector' data type, and operands of this data type are referred to as packed data operands or vector operands. In one embodiment, a packed data item or vector may be a sequence of packed data elements stored within a single register, and a packed data operand or a vector operand may be a source or destination operand of a SIMD instruction (or 'packed data instruction' or a 'vector instruction'). In one embodiment, a SIMD instruction specifies a single vector operation to be performed on two source vector operands to generate a destination vector operand (also referred to as a result vector operand) of the same or different size, with the same or different number of data elements, and in the same or different data element order.
SIMD technology, such as that employed by the Intel® Core™ processors having an instruction set including x86, MMX™, Streaming SIMD Extensions (SSE), SSE2, SSE3, SSE4.1, and SSE4.2 instructions, ARM processors, such as the ARM Cortex® family of processors having an instruction set including the Vector Floating Point (VFP) and/or NEON instructions, and MIPS processors, such as the Loongson family of processors developed by the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences, has enabled a significant improvement in application performance (Core™ and MMX™ are registered trademarks or trademarks of Intel Corporation of Santa Clara, Calif.).
In one embodiment, destination and source registers/data are generic terms to represent the source and destination of the corresponding data or operation. In some embodiments, they may be implemented by registers, memory, or other storage areas having other names or functions than those depicted. For example, in one embodiment, "DEST1" may be a temporary storage register or other storage area, whereas "SRC1" and "SRC2" may be a first and second source storage register or other storage area, and so forth. In other embodiments, two or more of the SRC and DEST storage areas may correspond to different data storage elements within the same storage area (e.g., a SIMD register).
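For illustration only, the packed data types and two-source SIMD operations just described may be modeled by the following scalar C sketch; the union and function names are invented for this example, and the wrap-around 16-bit add merely stands in for whatever element-wise operation a given packed instruction specifies.

    #include <stdint.h>

    /* One 64-bit storage location viewed either as a scalar or as a packed
     * vector of four separate 16-bit data elements. */
    typedef union {
        uint64_t u64;
        uint16_t u16[4];
    } packed64;

    /* Scalar model of a single SIMD instruction: the same element-wise
     * operation (here a wrap-around 16-bit add) is applied to every pair of
     * elements from two source operands, with the first source also serving
     * as the destination. */
    static void packed_add_words(packed64 *dst_src1, const packed64 *src2) {
        for (int i = 0; i < 4; i++) {
            dst_src1->u16[i] = (uint16_t)(dst_src1->u16[i] + src2->u16[i]);
        }
    }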
In one embodiment, one of the source registers may also act as a destination register by, for example, writing back the result of an operation performed on the first and second source data to one of the two source registers serving as a destination registers.Figure 1A is a block diagram of an exemplary computer system formed with a processor that includes execution units to execute an instruction in accordance with one embodiment of the present invention. System 100 includes a component, such as a processor 102 to employ execution units including logic to perform algorithms for process data, in accordance with the present invention, such as in the embodiment described herein. System 100 is representative of processing systems based on the PENTIUM®III, PENTIUM®4, Xeon™, Itanium®, XScale™ and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara,California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In one embodiment, sample system 100 may execute a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces, may also be used. Thus, embodiments of the present invention are not limited to any specific combination of hardware circuitry and software.Embodiments are not limited to computer systems. Alternative embodiments of the present invention can be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications can include a micro controller, a digital signal processor (DSP), system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform one or more instructions in accordance with at least one embodiment.Figure 1A is a block diagram of a computer system 100 formed with a processor 102 that includes one or more execution units 108 to perform an algorithm to perform at least one instruction in accordance with one embodiment of the present invention. One embodiment may be described in the context of a single processor desktop or server system, but alternative embodiments can be included in a multiprocessor system. System 100 is an example of a 'hub' system architecture. The computer system 100 includes a processor 102 to process data signals. The processor 102 can be a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. The processor 102 is coupled to a processor bus 110 that can transmit data signals between the processor 102 and other components in the system 100. The elements of system 100 perform their conventional functions that are well known to those familiar with the art.In one embodiment, the processor 102 includes a Level 1 (LI) internal cache memory 104. Depending on the architecture, the processor 102 can have a single internal cache or multiple levels of internal cache. Alternatively, in another embodiment, the cache memory can reside external to the processor 102. 
Other embodiments can also include a combination of both internal and external caches depending on the particular implementation and needs. Register file 106 can store different types of data in various registers including integer registers, floating point registers, status registers, and instruction pointer register. Execution unit 108, including logic to perform integer and floating point operations, also resides in the processor 102. The processor 102 also includes a microcode (ucode) ROM that stores microcode for certain macroinstructions. For one embodiment, execution unit 108 includes logic to handle a packed instruction set 109. By including the packed instruction set 109 in the instruction set of a general-purpose processor 102, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in a general-purpose processor 102. Thus, many multimedia applications can be accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This can eliminate the need to transfer smaller units of data across the processor's data bus to perform one or more operations one data element at a time.Alternate embodiments of an execution unit 108 can also be used in micro controllers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 100 includes a memory 120. Memory 120 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, or other memory device. Memory 120 can store instructions and/or data represented by data signals that can be executed by the processor 102.A system logic chip 116 is coupled to the processor bus 110 and memory 120. The system logic chip 116 in the illustrated embodiment is a memory controller hub (MCH). The processor 102 can communicate to the MCH 116 via a processor bus 110. The MCH 116 provides a high bandwidth memory path 118 to memory 120 for instruction and data storage and for storage of graphics commands, data and textures. The MCH 116 is to direct data signals between the processor 102, memory 120, and other components in the system 100 and to bridge the data signals between processor bus 110, memory 120, and system I/O 122. In some embodiments, the system logic chip 116 can provide a graphics port for coupling to a graphics controller 112. The MCH 116 is coupled to memory 120 through a memory interface 118. The graphics card 112 is coupled to the MCH 116 through an Accelerated Graphics Port (AGP) interconnect 114.System 100 uses a proprietary hub interface bus 122 to couple the MCH 116 to the I/O controller hub (ICH) 130. The ICH 130 provides direct connections to some I/O devices via a local I/O bus. The local I/O bus is a high-speed I/O bus for connecting peripherals to the memory 120, chipset, and processor 102. Some examples are the audio controller, firmware hub (flash BIOS) 128, wireless transceiver 126, data storage 124, legacy I/O controller containing user input and keyboard interfaces, a serial expansion port such as Universal Serial Bus (USB), and a network controller 134. The data storage device 124 can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.For another embodiment of a system, an instruction in accordance with one embodiment can be used with a system on a chip. 
One embodiment of a system on a chip comprises a processor and a memory. The memory for one such system is a flash memory. The flash memory can be located on the same die as the processor and other system components. Additionally, other logic blocks such as a memory controller or graphics controller can also be located on a system on a chip.
Figure 1B illustrates a data processing system 140 which implements the principles of one embodiment of the present invention. It will be readily appreciated by one of skill in the art that the embodiments described herein can be used with alternative processing systems without departure from the scope of embodiments of the invention.
Computer system 140 comprises a processing core 159 capable of performing at least one instruction in accordance with one embodiment. For one embodiment, processing core 159 represents a processing unit of any type of architecture, including but not limited to a CISC, a RISC or a VLIW type architecture. Processing core 159 may also be suitable for manufacture in one or more process technologies and, by being represented on a machine readable medium in sufficient detail, may be suitable to facilitate said manufacture.
Processing core 159 comprises an execution unit 142, a set of register file(s) 145, and a decoder 144. Processing core 159 also includes additional circuitry (not shown) which is not necessary to the understanding of embodiments of the present invention. Execution unit 142 is used for executing instructions received by processing core 159. In addition to performing typical processor instructions, execution unit 142 can perform instructions in packed instruction set 143 for performing operations on packed data formats. Packed instruction set 143 includes instructions for performing embodiments of the invention and other packed instructions.
Execution unit 142 is coupled to register file 145 by an internal bus. Register file 145 represents a storage area on processing core 159 for storing information, including data. As previously mentioned, it is understood that the storage area used for storing the packed data is not critical. Execution unit 142 is coupled to decoder 144. Decoder 144 is used for decoding instructions received by processing core 159 into control signals and/or microcode entry points. In response to these control signals and/or microcode entry points, execution unit 142 performs the appropriate operations. In one embodiment, the decoder is used to interpret the opcode of the instruction, which will indicate what operation should be performed on the corresponding data indicated within the instruction. Processing core 159 is coupled with bus 141 for communicating with various other system devices, which may include but are not limited to, for example, synchronous dynamic random access memory (SDRAM) control 146, static random access memory (SRAM) control 147, burst flash memory interface 148, personal computer memory card international association (PCMCIA)/compact flash (CF) card control 149, liquid crystal display (LCD) control 150, direct memory access (DMA) controller 151, and alternative bus master interface 152. In one embodiment, data processing system 140 may also comprise an I/O bridge 154 for communicating with various I/O devices via an I/O bus 153.
Such I/O devices may include but are not limited to, for example, universal asynchronous receiver/transmitter (UART) 155, universal serial bus (USB) 156, Bluetooth wireless UART 157 and I/O expansion interface 158.
One embodiment of data processing system 140 provides for mobile, network and/or wireless communications and a processing core 159 capable of performing SIMD operations including a text string comparison operation. Processing core 159 may be programmed with various audio, video, imaging and communications algorithms including discrete transformations such as a Walsh-Hadamard transform, a fast Fourier transform (FFT), a discrete cosine transform (DCT), and their respective inverse transforms; compression/decompression techniques such as color space transformation, video encode motion estimation or video decode motion compensation; and modulation/demodulation (MODEM) functions such as pulse coded modulation (PCM).
Figure 1C illustrates another alternative embodiment of a data processing system capable of executing instructions to provide SIMD vector population count functionality. In accordance with one alternative embodiment, data processing system 160 may include a main processor 166, a SIMD coprocessor 161, a cache memory 167, and an input/output system 168. The input/output system 168 may optionally be coupled to a wireless interface 169. SIMD coprocessor 161 is capable of performing operations including instructions in accordance with one embodiment. Processing core 170 may be suitable for manufacture in one or more process technologies and, by being represented on a machine readable medium in sufficient detail, may be suitable to facilitate the manufacture of all or part of data processing system 160 including processing core 170.
For one embodiment, SIMD coprocessor 161 comprises an execution unit 162 and a set of register file(s) 164. One embodiment of main processor 166 comprises a decoder 165 to recognize instructions of instruction set 163 including instructions in accordance with one embodiment for execution by execution unit 162. For alternative embodiments, SIMD coprocessor 161 also comprises at least part of decoder 165B to decode instructions of instruction set 163. Processing core 170 also includes additional circuitry (not shown) which is not necessary to the understanding of embodiments of the present invention.
In operation, the main processor 166 executes a stream of data processing instructions that control data processing operations of a general type including interactions with the cache memory 167, and the input/output system 168. Embedded within the stream of data processing instructions are SIMD coprocessor instructions. The decoder 165 of main processor 166 recognizes these SIMD coprocessor instructions as being of a type that should be executed by an attached SIMD coprocessor 161. Accordingly, the main processor 166 issues these SIMD coprocessor instructions (or control signals representing SIMD coprocessor instructions) on the coprocessor bus 171, from where they are received by any attached SIMD coprocessors. In this case, the SIMD coprocessor 161 will accept and execute any received SIMD coprocessor instructions intended for it.
Data may be received via wireless interface 169 for processing by the SIMD coprocessor instructions. For one example, voice communication may be received in the form of a digital signal, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples representative of the voice communications.
For another example, compressed audio and/or video may be received in the form of a digital bit stream, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples and/or motion video frames. For one embodiment of processing core 170, main processor 166 and a SIMD coprocessor 161 are integrated into a single processing core 170 comprising an execution unit 162, a set of register file(s) 164, and a decoder 165 to recognize instructions of instruction set 163 including instructions in accordance with one embodiment.
Figure 2 is a block diagram of the micro-architecture for a processor 200 that includes logic circuits to perform instructions in accordance with one embodiment of the present invention. In some embodiments, an instruction in accordance with one embodiment can be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as datatypes, such as single and double precision integer and floating point datatypes. In one embodiment, the in-order front end 201 is the part of the processor 200 that fetches instructions to be executed and prepares them to be used later in the processor pipeline. The front end 201 may include several units. In one embodiment, the instruction prefetcher 226 fetches instructions from memory and feeds them to an instruction decoder 228 which in turn decodes or interprets them. For example, in one embodiment, the decoder decodes a received instruction into one or more operations called "micro-instructions" or "micro-operations" (also called micro-ops or uops) that the machine can execute. In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that are used by the micro-architecture to perform operations in accordance with one embodiment. In one embodiment, the trace cache 230 takes decoded uops and assembles them into program ordered sequences or traces in the uop queue 234 for execution. When the trace cache 230 encounters a complex instruction, the microcode ROM 232 provides the uops needed to complete the operation.
Some instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete an instruction, the decoder 228 accesses the microcode ROM 232 to do the instruction. For one embodiment, an instruction can be decoded into a small number of micro-ops for processing at the instruction decoder 228. In another embodiment, an instruction can be stored within the microcode ROM 232 should a number of micro-ops be needed to accomplish the operation. The trace cache 230 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the micro-code sequences to complete one or more instructions in accordance with one embodiment from the micro-code ROM 232. After the microcode ROM 232 finishes sequencing micro-ops for an instruction, the front end 201 of the machine resumes fetching micro-ops from the trace cache 230.
The out-of-order execution engine 203 is where the instructions are prepared for execution. The out-of-order execution logic has a number of buffers to smooth out and re-order the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute.
The register renaming logic renames logic registers onto entries in a register file. The allocator also allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: memory scheduler, fast scheduler 202, slow/general floating point scheduler 204, and simple floating point scheduler 206. The uop schedulers 202, 204, 206, determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. The fast scheduler 202 of one embodiment can schedule on each half of the main clock cycle while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.Register files 208, 210, sit between the schedulers 202, 204, 206, and the execution units 212, 214, 216, 218, 220, 222, 224 in the execution block 211. There is a separate register file 208, 210, for integer and floating point operations, respectively. Each register file 208, 210, of one embodiment also includes a bypass network that can bypass or forward just completed results that have not yet been written into the register file to new dependent uops. The integer register file 208 and the floating point register file 210 are also capable of communicating data with the other. For one embodiment, the integer register file 208 is split into two separate register files, one register file for the low order 32 bits of data and a second register file for the high order 32 bits of data. The floating point register file 210 of one embodiment has 128 bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.The execution block 211 contains the execution units 212, 214, 216, 218, 220, 222, 224, where the instructions are actually executed. This section includes the register files 208, 210, that store the integer and floating point data operand values that the micro-instructions need to execute. The processor 200 of one embodiment is comprised of a number of execution units: address generation unit (AGU) 212, AGU 214, fast ALU 216, fast ALU 218, slow ALU 220, floating point ALU 222, floating point move unit 224. For one embodiment, the floating point execution blocks 222, 224, execute floating point, MMX, SIMD, and SSE, or other operations. The floating point ALU 222 of one embodiment includes a 64 bit by 64 bit floating point divider to execute divide, square root, and remainder micro-ops. For embodiments of the present invention, instructions involving a floating point value may be handled with the floating point hardware. In one embodiment, the ALU operations go to the high-speed ALU execution units 216, 218. The fast ALUs 216, 218, of one embodiment can execute fast operations with an effective latency of half a clock cycle. For one embodiment, most complex integer operations go to the slow ALU 220 as the slow ALU 220 includes integer execution hardware for long latency type of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 212, 214. For one embodiment, the integer ALUs 216, 218, 220, are described in the context of performing integer operations on 64 bit data operands. 
In alternative embodiments, the ALUs 216, 218, 220, can be implemented to support a variety of data bits including 16, 32, 128, 256, etc. Similarly, the floating point units 222, 224, can be implemented to support a range of operands having bits of various widths. For one embodiment, the floating point units 222, 224, can operate on 128 bits wide packed data operands in conjunction with SIMD and multimedia instructions.In one embodiment, the uops schedulers 202, 204, 206, dispatch dependent operations before the parent load has finished executing. As uops are speculatively scheduled and executed in processor 200, the processor 200 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations need to be replayed and the independent ones are allowed to complete. The schedulers and replay mechanism of one embodiment of a processor are also designed to catch instructions that provide SIMD vector population count functionality.The term "registers" may refer to the on-board processor storage locations that are used as part of instructions to identify operands. In other words, registers may be those that are usable from the outside of the processor (from a programmer's perspective). However, the registers of an embodiment should not be limited in meaning to a particular type of circuit. Rather, a register of an embodiment is capable of storing and providing data, and performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store thirty- two bit integer data. A register file of one embodiment also contains eight multimedia SIMD registers for packed data. For the discussions below, the registers are understood to be data registers designed to hold packed data, such as 64 bits wide MMX™ registers (also referred to as 'mm' registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128 bits wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology can also be used to hold such packed data operands. In one embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one embodiment, integer and floating point are either contained in the same register file or different register files. Furthermore, in one embodiment, floating point and integer data may be stored in different registers or the same registers.In the examples of the following figures, a number of data operands are described. Figure 3A illustrates various packed data type representations in multimedia registers according to one embodiment of the present invention. Fig. 3A illustrates data types for a packed byte 310, a packed word 320, and a packed doubleword (dword) 330 for 128 bits wide operands. 
The packed byte format 310 of this example is 128 bits long and contains sixteen packed byte data elements. A byte is defined here as 8 bits of data. Information for each byte data element is stored in bit 7 through bit 0 for byte 0, bit 15 through bit 8 for byte 1, bit 23 through bit 16 for byte 2, and finally bit 120 through bit 127 for byte 15. Thus, all available bits are used in the register. This storage arrangement increases the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation can now be performed on sixteen data elements in parallel.Generally, a data element is an individual piece of data that is stored in a single register or memory location with other data elements of the same length. In packed data sequences relating to SSEx technology, the number of data elements stored in a XMM register is 128 bits divided by the length in bits of an individual data element. Similarly, in packed data sequences relating to MMX and SSE technology, the number of data elements stored in an MMX register is 64 bits divided by the length in bits of an individual data element. Although the data types illustrated in Fig. 3 A are 128 bit long, embodiments of the present invention can also operate with 64 bit wide, 256 bit wide, 512 bit wide, or other sized operands. The packed word format 320 of this example is 128 bits long and contains eight packed word data elements. Each packed word contains sixteen bits of information. The packed doubleword format 330 of Fig. 3A is 128 bits long and contains four packed doubleword data elements. Each packed doubleword data element contains thirty two bits of information. A packed quadword is 128 bits long and contains two packed quad-word data elements.Figure 3B illustrates alternative in-register data storage formats. Each packed data can include more than one independent data element. Three packed data formats are illustrated; packed half 341, packed single 342, and packed double 343. One embodiment of packed half 341, packed single 342, and packed double 343 contain fixed-point data elements. For an alternative embodiment one or more of packed half 341, packed single 342, and packed double 343 may contain floating-point data elements. One alternative embodiment of packed half 341 is one hundred twenty-eight bits long containing eight 16-bit data elements. One embodiment of packed single 342 is one hundred twenty-eight bits long and contains four 32-bit data elements. One embodiment of packed double 343 is one hundred twenty-eight bits long and contains two 64-bit data elements. It will be appreciated that such packed data formats may be further extended to other register lengths, for example, to 96-bits, 160-bits, 192-bits, 224-bits, 256-bits, 512-bits or more.Figure 3C illustrates various signed and unsigned packed data type representations in multimedia registers according to one embodiment of the present invention. Unsigned packed byte representation 344 illustrates the storage of an unsigned packed byte in a SIMD register. Information for each byte data element is stored in bit seven through bit zero for byte zero, bit fifteen through bit eight for byte one, bit twenty-three through bit sixteen for byte two, etc., and finally bit one hundred twenty through bit one hundred twenty-seven for byte fifteen. Thus, all available bits are used in the register. This storage arrangement can increase the storage efficiency of the processor. 
As well, with sixteen data elements accessed, one operation can now be performed on sixteen data elements in a parallel fashion. Signed packed byte representation 345 illustrates the storage of a signed packed byte. Note that the eighth bit of every byte data element is the sign indicator. Unsigned packed word representation 346 illustrates how word seven through word zero are stored in a SIMD register. Signed packed word representation 347 is similar to the unsigned packed word in-register representation 346. Note that the sixteenth bit of each word data element is the sign indicator. Unsigned packed doubleword representation 348 shows how doubleword data elements are stored. Signed packed doubleword representation 349 is similar to unsigned packed doubleword in-register representation 348. Note that the necessary sign bit is the thirty-second bit of each doubleword data element.
Figure 3D is a depiction of one embodiment of an operation encoding (opcode) format 360, having thirty-two or more bits, and register/memory operand addressing modes corresponding with a type of opcode format described in the "Intel® 64 and IA-32 Intel Architecture Software Developer's Manual Combined Volumes 2A and 2B: Instruction Set Reference A-Z," which is available from Intel Corporation, Santa Clara, CA on the world-wide-web (www) at intel.com/products/processor/manuals/. In one embodiment, an instruction may be encoded by one or more of fields 361 and 362. Up to two operand locations per instruction may be identified, including up to two source operand identifiers 364 and 365. For one embodiment, destination operand identifier 366 is the same as source operand identifier 364, whereas in other embodiments they are different. For an alternative embodiment, destination operand identifier 366 is the same as source operand identifier 365, whereas in other embodiments they are different. In one embodiment, one of the source operands identified by source operand identifiers 364 and 365 is overwritten by the results of the instruction, whereas in other embodiments identifier 364 corresponds to a source register element and identifier 365 corresponds to a destination register element. For one embodiment, operand identifiers 364 and 365 may be used to identify 32-bit or 64-bit source and destination operands.
Figure 3E is a depiction of another alternative operation encoding (opcode) format 370, having forty or more bits. Opcode format 370 corresponds with opcode format 360 and comprises an optional prefix byte 378. An instruction according to one embodiment may be encoded by one or more of fields 378, 371, and 372. Up to two operand locations per instruction may be identified by source operand identifiers 374 and 375 and by prefix byte 378. For one embodiment, prefix byte 378 may be used to identify 32-bit or 64-bit source and destination operands. For one embodiment, destination operand identifier 376 is the same as source operand identifier 374, whereas in other embodiments they are different. For an alternative embodiment, destination operand identifier 376 is the same as source operand identifier 375, whereas in other embodiments they are different.
In one embodiment, an instruction operates on one or more of the operands identified by operand identifiers 374 and 375, and one or more operands identified by the operand identifiers 374 and 375 are overwritten by the results of the instruction, whereas in other embodiments, operands identified by identifiers 374 and 375 are written to another data element in another register. Opcode formats 360 and 370 allow register to register, memory to register, register by memory, register by register, register by immediate, and register to memory addressing specified in part by MOD fields 363 and 373 and by optional scale-index-base and displacement bytes.

Turning next to Figure 3F, in some alternative embodiments, 64-bit (or 128-bit, or 256-bit, or 512-bit or more) single instruction multiple data (SIMD) arithmetic operations may be performed through a coprocessor data processing (CDP) instruction. Operation encoding (opcode) format 380 depicts one such CDP instruction having CDP opcode fields 382 and 389. For alternative embodiments, the type of CDP instruction and its operations may be encoded by one or more of fields 383, 384, 387, and 388. Up to three operand locations per instruction may be identified, including up to two source operand identifiers 385 and 390 and one destination operand identifier 386. One embodiment of the coprocessor can operate on 8, 16, 32, and 64 bit values. For one embodiment, an instruction is performed on integer data elements. In some embodiments, an instruction may be executed conditionally, using condition field 381. For some embodiments, source data sizes may be encoded by field 383. In some embodiments, zero (Z), negative (N), carry (C), and overflow (V) detection can be done on SIMD fields. For some instructions, the type of saturation may be encoded by field 384.

Figure 3G is a depiction of another alternative operation encoding (opcode) format 397, to provide SIMD vector population count functionality according to another embodiment, corresponding with a type of opcode format described in the "Intel® Advanced Vector Extensions Programming Reference," which is available from Intel Corp., Santa Clara, CA on the world-wide-web (www) at intel.com/products/processor/manuals/.

The original x86 instruction set provided for a 1-byte opcode with various formats of address syllable and immediate operand contained in additional bytes whose presence was known from the first "opcode" byte. Additionally, there were certain byte values that were reserved as modifiers to the opcode (called prefixes, as they had to be placed before the instruction). When the original palette of 256 opcode bytes (including these special prefix values) was exhausted, a single byte was dedicated as an escape to a new set of 256 opcodes. As vector instructions (e.g., SIMD) were added, a need for more opcodes was generated, and the "two byte" opcode map also was insufficient, even when expanded through the use of prefixes. To this end, new instructions were added in additional maps which use 2 bytes plus an optional prefix as an identifier.

Additionally, in order to facilitate additional registers in 64-bit mode, an additional prefix (called "REX") may be used in between the prefixes and the opcode (and any escape bytes necessary to determine the opcode). In one embodiment, the REX may have 4 "payload" bits to indicate use of additional registers in 64-bit mode. In other embodiments it may have fewer or more than 4 bits.
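Purely as an illustration (this sketch is not part of the original disclosure, and the function name is hypothetical), the following C fragment decodes the four REX payload bits. In 64-bit mode the REX prefix occupies the byte values 40 hex through 4F hex, with the low four bits carrying the W, R, X and B fields used to reach the additional registers.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: returns true if the byte is a REX prefix
 * (0x40-0x4F in 64-bit mode) and extracts its four payload bits. */
static bool decode_rex(uint8_t byte, int *w, int *r, int *x, int *b) {
    if ((byte & 0xF0) != 0x40)
        return false;           /* not a REX prefix               */
    *w = (byte >> 3) & 1;       /* W: 64-bit operand size         */
    *r = (byte >> 2) & 1;       /* R: extension of ModRM.reg      */
    *x = (byte >> 1) & 1;       /* X: extension of SIB.index      */
    *b = byte & 1;              /* B: extension of ModRM.rm/base  */
    return true;
}

int main(void) {
    int w, r, x, b;
    if (decode_rex(0x48, &w, &r, &x, &b))   /* REX.W, common for 64-bit operands */
        printf("REX payload: W=%d R=%d X=%d B=%d\n", w, r, x, b);
    return 0;
}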
The general format of at least one instruction set (which corresponds generally with format 360 and/or format 370) is illustrated generically by the following:

[prefixes] [rex] escape [escape2] opcode modrm (etc.)

Opcode format 397 corresponds with opcode format 370 and comprises optional VEX prefix bytes 391 (beginning with C4 hex in one embodiment) to replace most other commonly used legacy instruction prefix bytes and escape codes. For example, the following illustrates an embodiment using two fields to encode an instruction, which may be used when a second escape code is present in the original instruction, or when extra bits (e.g., the XB and W fields) in the REX field need to be used. In the embodiment illustrated below, legacy escape is represented by a new escape value, legacy prefixes are fully compressed as part of the "payload" bytes, legacy prefixes are reclaimed and available for future expansion, the second escape code is compressed in a "map" field, with future map or feature space available, and new features are added (e.g., increased vector length and an additional source register specifier):

vex RXBmmmmm WvvvLpp opcode modrm [sib] [disp] [imm]
vex RXBmmmmm WvvvLpp opcode modrm [sib] [disp] [imm] (new features)

An instruction according to one embodiment may be encoded by one or more of fields 391 and 392. Up to four operand locations per instruction may be identified by field 391 in combination with source operand identifiers 374 and 375 and in combination with an optional scale-index-base (SIB) identifier 393, an optional displacement identifier 394, and an optional immediate byte 395. For one embodiment, VEX prefix bytes 391 may be used to identify 32-bit or 64-bit source and destination operands and/or 128-bit or 256-bit SIMD register or memory operands. For one embodiment, the functionality provided by opcode format 397 may be redundant with opcode format 370, whereas in other embodiments they are different. Opcode formats 370 and 397 allow register to register, memory to register, register by memory, register by register, register by immediate, and register to memory addressing specified in part by MOD field 373 and by optional (SIB) identifier 393, an optional displacement identifier 394, and an optional immediate byte 395.

Figure 3H is a depiction of another alternative operation encoding (opcode) format 398, to provide SIMD vector population count functionality according to another embodiment. Opcode format 398 corresponds with opcode formats 370 and 397 and comprises optional EVEX prefix bytes 396 (beginning with 62 hex in one embodiment) to replace most other commonly used legacy instruction prefix bytes and escape codes and provide additional functionality. An instruction according to one embodiment may be encoded by one or more of fields 396 and 392. Up to four operand locations per instruction and a mask may be identified by field 396 in combination with source operand identifiers 374 and 375 and in combination with an optional scale-index-base (SIB) identifier 393, an optional displacement identifier 394, and an optional immediate byte 395. For one embodiment, EVEX prefix bytes 396 may be used to identify 32-bit or 64-bit source and destination operands and/or 128-bit, 256-bit or 512-bit SIMD register or memory operands. For one embodiment, the functionality provided by opcode format 398 may be redundant with opcode formats 370 or 397, whereas in other embodiments they are different. Opcode format 398 allows register to register, memory to register, register by memory, register by register, register by immediate, and register to memory addressing, with masks, specified in part by MOD field 373 and by optional (SIB) identifier 393, an optional displacement identifier 394, and an optional immediate byte 395.
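As an illustrative sketch only (not part of the original disclosure, with hypothetical structure and function names), the following C fragment unpacks the payload fields named in the generic VEX format above: the R, X, B and m-mmmm bits of the second byte and the W, vvvv, L and pp bits of the third byte of a three-byte VEX prefix beginning with C4 hex.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical container for the payload fields of a three-byte
 * VEX prefix: C4 hex, then R X B m-mmmm, then W vvvv L pp.        */
struct vex_fields {
    int r, x, b;     /* inverted register-extension bits   */
    int map;         /* m-mmmm: opcode map selector        */
    int w;           /* W: operand-size / opcode extension */
    int vvvv;        /* inverted extra source register     */
    int l;           /* L: vector length selector          */
    int pp;          /* implied legacy prefix (66/F3/F2)   */
};

static int decode_vex3(const uint8_t p[3], struct vex_fields *f) {
    if (p[0] != 0xC4)
        return 0;                 /* not a three-byte VEX prefix */
    f->r    = (p[1] >> 7) & 1;
    f->x    = (p[1] >> 6) & 1;
    f->b    = (p[1] >> 5) & 1;
    f->map  =  p[1] & 0x1F;
    f->w    = (p[2] >> 7) & 1;
    f->vvvv = (p[2] >> 3) & 0xF;
    f->l    = (p[2] >> 2) & 1;
    f->pp   =  p[2] & 0x3;
    return 1;
}

int main(void) {
    const uint8_t prefix[3] = { 0xC4, 0xE2, 0x7D };  /* arbitrary example bytes */
    struct vex_fields f;
    if (decode_vex3(prefix, &f))
        printf("map=%d W=%d vvvv=%d L=%d pp=%d\n", f.map, f.w, f.vvvv, f.l, f.pp);
    return 0;
}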
The general format of at least one instruction set (which corresponds generally with format 360 and/or format 370) is illustrated generically by the following:

evex1 RXBmmmmm WvvvLpp evex4 opcode modrm [sib] [disp] [imm]

For one embodiment, an instruction encoded according to the EVEX format 398 may have additional "payload" bits that may be used to provide SIMD vector population count functionality with additional new features such as, for example, a user configurable mask register, or an additional operand, or selections from among 128-bit, 256-bit or 512-bit vector registers, or more registers from which to select, etc.

For example, where VEX format 397 may be used to provide SIMD vector population count functionality without a mask, the EVEX format 398 may be used to provide SIMD vector population count functionality with an explicit user configurable mask. Additionally, where VEX format 397 may be used to provide SIMD vector population count functionality on 128-bit or 256-bit vector registers, EVEX format 398 may be used to provide SIMD vector population count functionality on 128-bit, 256-bit, 512-bit or larger (or smaller) vector registers.

Example instructions to provide SIMD vector population count functionality for genome sequencing and alignment are illustrated by the following examples, each listed as instruction, destination, source1, optional source2, and description:

POPCNT2 Reg1, Reg2/Mem1: For each 2-bit element in the register Reg2 or in a vector at memory location Mem1, count the number of occurrences of the binary values 00, 01, 10 and 11, and store the counts in the corresponding bytes (or words) 0, 1, 2 and 3 of the 32-bit (or 64-bit) register Reg1.

POPCNT2 Reg1, Reg2/Mem1, Mask: For each unmasked 2-bit element in the register Reg2 or at memory location Mem1, count the number of occurrences of the values 00, 01, 10 and 11, and store the counts in the corresponding bytes (or words) of the 32-bit (or 64-bit) register Reg1.

POPCNT2 Reg1, Reg2/Mem1, Imm8: For each 2-bit element in the register Reg2 or in a vector at memory location Mem1, count the number of occurrences of the binary values 00, 01, 10 and 11 which are equal to Imm8, and store the count in register Reg1.

VPOPCNT2 Vmm1, Vmm2/Mem1: For each 32-bit element in the register Vmm2 or in a vector at memory location Mem1, count the number of occurrences of the binary values 00, 01, 10 and 11, and store the counts in the corresponding bytes 0, 1, 2 and 3 of the 32-bit element in the register Vmm1 corresponding to the register Vmm2 or memory location Mem1.

VPOPCNT2 Vmm1, Vmm2/Mem1, Imm8: For each 32-bit element in the register Vmm2 or in a vector at memory location Mem1, count the number of occurrences of the binary values 00, 01, 10 and 11 which are equal to Imm8, and store the count in the 32-bit element in the register Vmm1 corresponding to the register Vmm2 or memory location Mem1.

VPOPCNTB Vmm1, Vmm2/Mem1, Vmm3: For each 32-bit (or 64-bit) element in the register Vmm2 or in a vector at memory location Mem1, count the number of bytes equal to the corresponding element in the register Vmm3 and store the counts into the corresponding 32-bit (or 64-bit) elements in the register Vmm1.

VPOPCNTB Vmm1, Vmm2/Mem1, Vmm3: For each 32-bit (or 64-bit) element in the register Vmm2 or in a vector at memory location Mem1, count the number of bytes equal to any of the four (or eight) bytes in a corresponding element in the register Vmm3 and store the count(s) into the corresponding bytes (or words) of the 32-bit (or 64-bit) elements in the register Vmm1.

It will be appreciated that SIMD population count instructions, as in the examples above, may be used for genome sequencing and alignment processing. Similar compression schemes are also employed in some other databases, data mining applications, and search applications, such that these applications may also use SIMD population count instructions, as shown in the examples above.

A common operation in genome alignment is to count the occurrences of nucleotides within a string in order to match or partially match base-pair strings. With a packed data format (such as packedDna), the techniques might otherwise involve the use of look-up tables, together with shift and mask operations, and/or bitwise population counts together with logical operations, in order to count the different nucleotide occurrences within a string. By using the SIMD population count instructions, as in the examples above, many of the operations formerly required to count the different nucleotide occurrences within a string may be eliminated. Thus the performance of applications such as genome sequencing and alignment processing, and, more generally, of database, data mining, and search applications, may be significantly improved.
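To make the intended semantics concrete, the following C sketch models the first POPCNT2 form listed above as a scalar reference: it walks the 2-bit elements of a 128-bit source and accumulates one count per possible 2-bit value into four byte-wide counters of a 32-bit result. This is an illustrative model only, not the instruction's implementation, and the helper name is hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical scalar model of POPCNT2 Reg1, Reg2/Mem1: count the
 * occurrences of the 2-bit values 00, 01, 10 and 11 in a 128-bit
 * source (given as 16 bytes) and pack the four counts into the
 * corresponding bytes 0..3 of a 32-bit result. */
static uint32_t popcnt2_model(const uint8_t src[16]) {
    uint8_t counts[4] = { 0, 0, 0, 0 };
    for (int byte = 0; byte < 16; byte++)
        for (int shift = 0; shift < 8; shift += 2)
            counts[(src[byte] >> shift) & 0x3]++;   /* one 2-bit element */
    return (uint32_t)counts[0]
         | ((uint32_t)counts[1] << 8)
         | ((uint32_t)counts[2] << 16)
         | ((uint32_t)counts[3] << 24);
}

int main(void) {
    /* 0x1B encodes the nucleotide string TCAG, as described in the
     * packedDna discussion below: one of each 2-bit value per byte. */
    uint8_t src[16];
    for (int i = 0; i < 16; i++) src[i] = 0x1B;
    uint32_t reg1 = popcnt2_model(src);
    printf("counts of 00,01,10,11 = %u,%u,%u,%u\n",
           reg1 & 0xFF, (reg1 >> 8) & 0xFF,
           (reg1 >> 16) & 0xFF, (reg1 >> 24) & 0xFF);
    return 0;
}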
Figure 4A is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline according to at least one embodiment of the invention. Figure 4B is a block diagram illustrating an in-order architecture core and a register renaming logic, out-of-order issue/execution logic to be included in a processor according to at least one embodiment of the invention. The solid lined boxes in Figure 4A illustrate the in-order pipeline, while the dashed lined boxes illustrate the register renaming, out-of-order issue/execution pipeline. Similarly, the solid lined boxes in Figure 4B illustrate the in-order architecture logic, while the dashed lined boxes illustrate the register renaming logic and out-of-order issue/execution logic.

In Figure 4A, a processor pipeline 400 includes a fetch stage 402, a length decode stage 404, a decode stage 406, an allocation stage 408, a renaming stage 410, a scheduling (also known as a dispatch or issue) stage 412, a register read/memory read stage 414, an execute stage 416, a write back/memory write stage 418, an exception handling stage 422, and a commit stage 424.

In Figure 4B, arrows denote a coupling between two or more units and the direction of the arrow indicates a direction of data flow between those units. Figure 4B shows processor core 490 including a front end unit 430 coupled to an execution engine unit 450, and both are coupled to a memory unit 470.

The core 490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type.
As yet another option, the core 490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, graphics core, or the like.

The front end unit 430 includes a branch prediction unit 432 coupled to an instruction cache unit 434, which is coupled to an instruction translation lookaside buffer (TLB) 436, which is coupled to an instruction fetch unit 438, which is coupled to a decode unit 440. The decode unit or decoder may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decoder may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. The instruction cache unit 434 is further coupled to a level 2 (L2) cache unit 476 in the memory unit 470. The decode unit 440 is coupled to a rename/allocator unit 452 in the execution engine unit 450.

The execution engine unit 450 includes the rename/allocator unit 452 coupled to a retirement unit 454 and a set of one or more scheduler unit(s) 456. The scheduler unit(s) 456 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 456 is coupled to the physical register file(s) unit(s) 458. Each of the physical register file(s) units 458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, etc., status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. The physical register file(s) unit(s) 458 is overlapped by the retirement unit 454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using a register map and a pool of registers; etc.). Generally, the architectural registers are visible from the outside of the processor or from a programmer's perspective. The registers are not limited to any known particular type of circuit. Various different types of registers are suitable as long as they are capable of storing and providing data as described herein. Examples of suitable registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. The retirement unit 454 and the physical register file(s) unit(s) 458 are coupled to the execution cluster(s) 460. The execution cluster(s) 460 includes a set of one or more execution units 462 and a set of one or more memory access units 464. The execution units 462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).
While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 456, physical register file(s) unit(s) 458, and execution cluster(s) 460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster, and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 464 is coupled to the memory unit 470, which includes a data TLB unit 472 coupled to a data cache unit 474 coupled to a level 2 (L2) cache unit 476. In one exemplary embodiment, the memory access units 464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 472 in the memory unit 470. The L2 cache unit 476 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 400 as follows: 1) the instruction fetch unit 438 performs the fetch and length decoding stages 402 and 404; 2) the decode unit 440 performs the decode stage 406; 3) the rename/allocator unit 452 performs the allocation stage 408 and renaming stage 410; 4) the scheduler unit(s) 456 performs the schedule stage 412; 5) the physical register file(s) unit(s) 458 and the memory unit 470 perform the register read/memory read stage 414, and the execution cluster 460 performs the execute stage 416; 6) the memory unit 470 and the physical register file(s) unit(s) 458 perform the write back/memory write stage 418; 7) various units may be involved in the exception handling stage 422; and 8) the retirement unit 454 and the physical register file(s) unit(s) 458 perform the commit stage 424.

The core 490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA).

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture.
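Purely to illustrate the register renaming concept discussed above (this sketch is not from the original disclosure, and the names and sizes are hypothetical), the following C fragment maintains a map from architectural to physical registers and draws a fresh physical register from a free list each time an architectural destination is written, which is what removes false dependences between back-to-back writes to the same architectural register.

#include <stdio.h>

#define ARCH_REGS 16
#define PHYS_REGS 64

static int rename_map[ARCH_REGS];          /* architectural -> physical  */
static int free_list[PHYS_REGS];
static int free_count;

static void rename_init(void) {
    for (int a = 0; a < ARCH_REGS; a++) rename_map[a] = a;
    free_count = 0;
    for (int p = PHYS_REGS - 1; p >= ARCH_REGS; p--) free_list[free_count++] = p;
}

/* Allocate a new physical register for an architectural destination,
 * as a rename/allocator stage might; returns -1 if none are free.
 * Reclamation of physical registers at retirement is omitted here.  */
static int rename_dest(int arch_reg) {
    if (free_count == 0) return -1;
    int phys = free_list[--free_count];
    rename_map[arch_reg] = phys;
    return phys;
}

int main(void) {
    rename_init();
    /* Two consecutive writes to architectural register 3 receive
     * distinct physical registers. */
    printf("r3 -> p%d\n", rename_dest(3));
    printf("r3 -> p%d\n", rename_dest(3));
    return 0;
}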
While the illustrated embodiment of the processor also includes separate instruction and data cache units 434/474 and a shared L2 cache unit 476, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Figure 5 is a block diagram of a single core processor and a multicore processor 500 with integrated memory controller and graphics according to embodiments of the invention. The solid lined boxes in Figure 5 illustrate a processor 500 with a single core 502A, a system agent 510, and a set of one or more bus controller units 516, while the optional addition of the dashed lined boxes illustrates an alternative processor 500 with multiple cores 502A-N, a set of one or more integrated memory controller unit(s) 514 in the system agent unit 510, and an integrated graphics logic 508.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 506, and external memory (not shown) coupled to the set of integrated memory controller units 514. The set of shared cache units 506 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 512 interconnects the integrated graphics logic 508, the set of shared cache units 506, and the system agent unit 510, alternative embodiments may use any number of well-known techniques for interconnecting such units.

In some embodiments, one or more of the cores 502A-N are capable of multi-threading. The system agent 510 includes those components coordinating and operating cores 502A-N. The system agent unit 510 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 502A-N and the integrated graphics logic 508. The display unit is for driving one or more externally connected displays.

The cores 502A-N may be homogenous or heterogeneous in terms of architecture and/or instruction set. For example, some of the cores 502A-N may be in order while others are out-of-order. As another example, two or more of the cores 502A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

The processor may be a general-purpose processor, such as a Core™ i3, i5, i7, 2 Duo and Quad, Xeon™, Itanium™, XScale™ or StrongARM™ processor, which are available from Intel Corporation of Santa Clara, Calif. Alternatively, the processor may be from another company, such as ARM Holdings, Ltd., MIPS, etc. The processor may be a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, co-processor, embedded processor, or the like. The processor may be implemented on one or more chips.
The processor 500 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

Figures 6-8 are exemplary systems suitable for including the processor 500, while Figure 9 is an exemplary system on a chip (SoC) that may include one or more of the cores 502. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are suitable.

Referring now to Figure 6, shown is a block diagram of a system 600 in accordance with one embodiment of the present invention. The system 600 may include one or more processors 610, 615, which are coupled to graphics memory controller hub (GMCH) 620. The optional nature of additional processors 615 is denoted in Figure 6 with broken lines.

Each processor 610, 615 may be some version of the processor 500. However, it should be noted that it is unlikely that integrated graphics logic and integrated memory control units would exist in the processors 610, 615. Figure 6 illustrates that the GMCH 620 may be coupled to a memory 640 that may be, for example, a dynamic random access memory (DRAM). The DRAM may, for at least one embodiment, be associated with a non-volatile cache.

The GMCH 620 may be a chipset, or a portion of a chipset. The GMCH 620 may communicate with the processor(s) 610, 615 and control interaction between the processor(s) 610, 615 and memory 640. The GMCH 620 may also act as an accelerated bus interface between the processor(s) 610, 615 and other elements of the system 600. For at least one embodiment, the GMCH 620 communicates with the processor(s) 610, 615 via a multi-drop bus, such as a frontside bus (FSB) 695.

Furthermore, GMCH 620 is coupled to a display 645 (such as a flat panel display). GMCH 620 may include an integrated graphics accelerator. GMCH 620 is further coupled to an input/output (I/O) controller hub (ICH) 650, which may be used to couple various peripheral devices to system 600. Shown for example in the embodiment of Figure 6 is an external graphics device 660, which may be a discrete graphics device coupled to ICH 650, along with another peripheral device 670.

Alternatively, additional or different processors may also be present in the system 600. For example, additional processor(s) 615 may include additional processor(s) that are the same as processor 610, additional processor(s) that are heterogeneous or asymmetric to processor 610, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor. There can be a variety of differences between the physical resources 610, 615 in terms of a spectrum of metrics of merit including architectural, micro-architectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processors 610, 615.
For at least one embodiment, the various processors 610, 615 may reside in the same die package.

Referring now to Figure 7, shown is a block diagram of a second system 700 in accordance with an embodiment of the present invention. As shown in Figure 7, multiprocessor system 700 is a point-to-point interconnect system, and includes a first processor 770 and a second processor 780 coupled via a point-to-point interconnect 750. Each of processors 770 and 780 may be some version of the processor 500, as may be one or more of the processors 610, 615.

While shown with only two processors 770, 780, it is to be understood that the scope of the present invention is not so limited. In other embodiments, one or more additional processors may be present in a given processor.

Processors 770 and 780 are shown including integrated memory controller units 772 and 782, respectively. Processor 770 also includes as part of its bus controller units point-to-point (P-P) interfaces 776 and 778; similarly, second processor 780 includes P-P interfaces 786 and 788. Processors 770, 780 may exchange information via a point-to-point (P-P) interface 750 using P-P interface circuits 778, 788. As shown in Figure 7, IMCs 772 and 782 couple the processors to respective memories, namely a memory 732 and a memory 734, which may be portions of main memory locally attached to the respective processors.

Processors 770, 780 may each exchange information with a chipset 790 via individual P-P interfaces 752, 754 using point-to-point interface circuits 776, 794, 786, 798. Chipset 790 may also exchange information with a high-performance graphics circuit 738 via a high-performance graphics interface 739.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 790 may be coupled to a first bus 716 via an interface 796. In one embodiment, first bus 716 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in Figure 7, various I/O devices 714 may be coupled to first bus 716, along with a bus bridge 718 which couples first bus 716 to a second bus 720. In one embodiment, second bus 720 may be a low pin count (LPC) bus. Various devices may be coupled to second bus 720 including, for example, a keyboard and/or mouse 722, communication devices 727 and a storage unit 728 such as a disk drive or other mass storage device which may include instructions/code and data 730, in one embodiment. Further, an audio I/O 724 may be coupled to second bus 720. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 7, a system may implement a multi-drop bus or other such architecture.

Referring now to Figure 8, shown is a block diagram of a third system 800 in accordance with an embodiment of the present invention. Like elements in Figure 7 and Figure 8 bear like reference numerals, and certain aspects of Figure 7 have been omitted from Figure 8 in order to avoid obscuring other aspects of Figure 8.

Figure 8 illustrates that the processors 870, 880 may include integrated memory and I/O control logic ("CL") 872 and 882, respectively.
For at least one embodiment, the CL 872, 882 may include integrated memory controller units such as that described above in connection with Figures 5 and 7. In addition, CL 872, 882 may also include I/O control logic. Figure 8 illustrates that not only are the memories 832, 834 coupled to the CL 872, 882, but also that I/O devices 814 are also coupled to the control logic 872, 882. Legacy I/O devices 815 are coupled to the chipset 890.

Referring now to Figure 9, shown is a block diagram of a SoC 900 in accordance with an embodiment of the present invention. Similar elements in Figure 5 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 9, an interconnect unit(s) 902 is coupled to: an application processor 910 which includes a set of one or more cores 502A-N and shared cache unit(s) 506; a system agent unit 510; a bus controller unit(s) 516; an integrated memory controller unit(s) 514; a set of one or more media processors 920 which may include integrated graphics logic 508, an image processor 924 for providing still and/or video camera functionality, an audio processor 926 for providing hardware audio acceleration, and a video processor 928 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 930; a direct memory access (DMA) unit 932; and a display unit 940 for coupling to one or more external displays.

Figure 10 illustrates a processor containing a central processing unit (CPU) and a graphics processing unit (GPU), which may perform at least one instruction according to one embodiment. In one embodiment, an instruction to perform operations according to at least one embodiment could be performed by the CPU. In another embodiment, the instruction could be performed by the GPU. In still another embodiment, the instruction may be performed through a combination of operations performed by the GPU and the CPU. For example, in one embodiment, an instruction in accordance with one embodiment may be received and decoded for execution on the GPU. However, one or more operations within the decoded instruction may be performed by a CPU and the result returned to the GPU for final retirement of the instruction. Conversely, in some embodiments, the CPU may act as the primary processor and the GPU as the co-processor.

In some embodiments, instructions that benefit from highly parallel, throughput processors may be performed by the GPU, while instructions that benefit from deeply pipelined architectures may be performed by the CPU. For example, graphics, scientific applications, financial applications and other parallel workloads may benefit from the performance of the GPU and be executed accordingly, whereas more sequential applications, such as operating system kernel or application code, may be better suited for the CPU.

In Figure 10, processor 1000 includes a CPU 1005, GPU 1010, image processor 1015, video processor 1020, USB controller 1025, UART controller 1030, SPI/SDIO controller 1035, display device 1040, High-Definition Multimedia Interface (HDMI) controller 1045, MIPI controller 1050, flash memory controller 1055, dual data rate (DDR) controller 1060, security engine 1065, and I2S/I2C (Integrated Interchip Sound/Inter-Integrated Circuit) interface 1070.
Other logic and circuits may be included in the processor of Figure 10, including more CPUs or GPUs and other peripheral interface controllers.

One or more aspects of at least one embodiment may be implemented by representative data stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium ("tape") and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. For example, IP cores, such as the Cortex™ family of processors developed by ARM Holdings, Ltd. and Loongson IP cores developed by the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences, may be licensed or sold to various customers or licensees, such as Texas Instruments, Qualcomm, Apple, or Samsung, and implemented in processors produced by these customers or licensees.

Figure 11 shows a block diagram illustrating the development of IP cores according to one embodiment. Storage 1130 includes simulation software 1120 and/or hardware or software model 1110. In one embodiment, the data representing the IP core design can be provided to the storage 1130 via memory 1140 (e.g., hard disk), wired connection (e.g., internet) 1150 or wireless connection 1160. The IP core information generated by the simulation tool and model can then be transmitted to a fabrication facility where it can be fabricated by a third party to perform at least one instruction in accordance with at least one embodiment.

In some embodiments, one or more instructions may correspond to a first type or architecture (e.g., x86) and be translated or emulated on a processor of a different type or architecture (e.g., ARM). An instruction, according to one embodiment, may therefore be performed on any processor or processor type, including ARM, x86, MIPS, a GPU, or other processor type or architecture.

Figure 12 illustrates how an instruction of a first type is emulated by a processor of a different type, according to one embodiment. In Figure 12, program 1205 contains some instructions that may perform the same or substantially the same function as an instruction according to one embodiment. However, the instructions of program 1205 may be of a type and/or format that is different from or incompatible with processor 1215, meaning the instructions of the type in program 1205 may not be able to be executed natively by the processor 1215. However, with the help of emulation logic 1210, the instructions of program 1205 are translated into instructions that are natively capable of being executed by the processor 1215. In one embodiment, the emulation logic is embodied in hardware. In another embodiment, the emulation logic is embodied in a tangible, machine-readable medium containing software to translate instructions of the type in the program 1205 into the type natively executable by the processor 1215. In other embodiments, emulation logic is a combination of fixed-function or programmable hardware and a program stored on a tangible, machine-readable medium. In one embodiment, the processor contains the emulation logic, whereas in other embodiments, the emulation logic exists outside of the processor and is provided by a third party.
In one embodiment, the processor is capable of loading the emulation logic embodied in a tangible, machine-readable medium containing software by executing microcode or firmware contained in or associated with the processor.

Figure 13 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 13 shows that a program in a high level language 1302 may be compiled using an x86 compiler 1304 to generate x86 binary code 1306 that may be natively executed by a processor with at least one x86 instruction set core 1316. The processor with at least one x86 instruction set core 1316 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1304 represents a compiler that is operable to generate x86 binary code 1306 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1316. Similarly, Figure 13 shows that the program in the high level language 1302 may be compiled using an alternative instruction set compiler 1308 to generate alternative instruction set binary code 1310 that may be natively executed by a processor without at least one x86 instruction set core 1314 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1312 is used to convert the x86 binary code 1306 into code that may be natively executed by the processor without an x86 instruction set core 1314. This converted code is not likely to be the same as the alternative instruction set binary code 1310 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1312 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1306.

Figure 14 illustrates a diagram for one embodiment of an example of genome sequencing and alignment processing which can make use of an instruction to provide SIMD vector population count functionality. The double helix 1401 comprises two antiparallel oriented strands of sugar phosphate backbone connected to each other, through base pairs of the four base nucleotides thymine, cytosine, adenine and guanine, by hydrogen bonds.
Base pairs (e.g. 1410 and 1420) are made up of nucleotides such as 1403 organized in sequences along the sugar phosphate backbone as shown in 1402. For example, base pair 1410 is made up of a guanine nucleotide 1412 and a cytosine nucleotide 1414; base pair 1420 is made up of a thymine nucleotide 1422 and an adenine nucleotide 1424. The sequences of nucleotides are encoded, stored and processed by computer application software 1404 (e.g. as strings of the characters T, C, A and G, 1442 and 1444; and/or as sequences of 2-bit or sometimes 4-bit compressed encodings of base nucleotides, 1452 and 1454). The human genome represents a significant amount of information, and storing such large quantities of information usually involves representing the four base nucleotides, thymine, cytosine, adenine and guanine (T, C, A, G), as bit pairs. There are about 3 billion base pairs in the human genome, and at two bits per base (four choices), the human genome has about 6 billion bits or about 750 MB (storing one copy of each chromosome). A more common practice may be to represent each base nucleotide of the base pair with two bits of data, at least in an intermediate format, requiring about 1.4 GB of information. One format for storing sequences is known as "packedDna." The DNA, or deoxyribonucleic acid, packed as two bits per base, is represented as binary 2-bit values: T = 00, C = 01, A = 10, G = 11. The first base is in the most significant 2 bits of a byte; the last base is in the least significant 2 bits. For example, the sequence TCAG is represented as 00011011 in binary (hexadecimal 0x1B). DNA sequencing technologies require fast and accurate alignment programs, one of which, based on backward searching with a Burrows-Wheeler Transform, builds huge arrays of base nucleotide occurrence counts for various sequence lengths, often on the fly. Thus, quickly counting the occurrences of nucleotides can significantly impact performance and memory storage requirements.
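The 2-bit packedDna encoding described above can be illustrated with a short C sketch (for illustration only; the function names are hypothetical and nothing here is part of the original disclosure). It packs four bases per byte, most significant 2 bits first, so that the string TCAG becomes the single byte 0x1B:

#include <stdint.h>
#include <stdio.h>

/* packedDna-style 2-bit codes: T=00, C=01, A=10, G=11 */
static uint8_t base_code(char base) {
    switch (base) {
    case 'T': return 0x0;
    case 'C': return 0x1;
    case 'A': return 0x2;
    default:  return 0x3;   /* 'G' */
    }
}

/* Pack four bases into one byte, first base in the most significant
 * 2 bits and last base in the least significant 2 bits. */
static uint8_t pack4(const char bases[4]) {
    uint8_t packed = 0;
    for (int i = 0; i < 4; i++)
        packed = (uint8_t)((packed << 2) | base_code(bases[i]));
    return packed;
}

int main(void) {
    printf("TCAG -> 0x%02X\n", pack4("TCAG"));   /* prints 0x1B */
    return 0;
}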
Figure 15A illustrates a flow diagram for one embodiment of an example of vector sub-byte decompression in preparation for use of an instruction to provide SIMD vector population count functionality. Process 1501 and other processes herein disclosed are performed by processing blocks that may comprise dedicated hardware or software or firmware operation codes executable by general purpose machines or by special purpose machines or by a combination of both.

The example illustrated is an example of vector decompression from a packed 2-bit per element format (e.g. such as packedDna) to an 8-bits per byte element format. Since two divides eight evenly, each byte of the packed 2-bit per element format contains four elements, one of each possible initial bit alignment.

In shuffle processing block 1509 of process 1501, a first byte, zero (0), and a second byte, one (1), of source 1512 containing at least a first two sub-byte elements, a and e, are shuffled or copied into a least significant portion of a first vector element (e.g. a 32-bit vector element) of vector 1515. A third byte, two (2), and a fourth byte, three (3), containing at least a second two sub-byte elements, i and m, are shuffled or copied into a most significant portion of the first vector element of vector 1515. Also shown in shuffle processing block 1509, a fifth byte, zero (0), and a sixth byte, one (1), of source 1512 containing at least a third two sub-byte elements, b and f, are shuffled or copied into a least significant portion of a second vector element of vector 1515, and a seventh byte, two (2), and an eighth byte, three (3), containing at least a fourth two sub-byte elements, j and n, are shuffled or copied into a most significant portion of the second vector element of vector 1515 in preparation for shifting. It will be appreciated that all of the first two and second two sub-byte elements may have the same initial bit alignments, and that all of the third two and fourth two sub-byte elements may also have the same initial bit alignments. Also shown in shuffle processing block 1509, a ninth byte, zero (0), and a tenth byte, one (1), of source 1512 containing at least a fifth two sub-byte elements, c and g, are shuffled or copied into a least significant portion of a third vector element of vector 1515, and an eleventh byte, two (2), and a twelfth byte, three (3), containing at least a sixth two sub-byte elements, k and o, are shuffled or copied into a most significant portion of the third vector element of vector 1515. A thirteenth byte, zero (0), and a fourteenth byte, one (1), of source 1512 containing at least a seventh two sub-byte elements, d and h, are shuffled or copied into a least significant portion of a fourth vector element of vector 1515, and a fifteenth byte, two (2), and a sixteenth byte, three (3), containing at least an eighth two sub-byte elements, l and p, are shuffled or copied into a most significant portion of the fourth vector element of vector 1515 in preparation for shifting.

In shift processing block 1517, the first vector element of vector 1515, holding the first two and second two sub-byte elements (i.e. a, e, i and m), is shifted by a first shift count in vector 1522, zero (0); the second vector element, holding the third two and fourth two sub-byte elements (i.e. b, f, j and n), is shifted by a second shift count, two (2); the third vector element, holding the fifth two and sixth two sub-byte elements (i.e. c, g, k and o), is shifted by a third shift count, four (4); and the fourth vector element, holding the seventh two and eighth two sub-byte elements (i.e. d, h, l and p), is shifted by a fourth shift count, six (6), to align the sub-byte elements to a least significant bit of their respective byte in vector 1525. In one embodiment, these shifts are performed concurrently by SIMD shifters on 32-bit vector elements of vector 1515. In alternative embodiments, smaller or larger shifts may be used instead, and not all of the shifts may be performed concurrently.
In shuffle processing 1528, a byte from each of the shifted first, second, third and fourth vector elements' least significant byte position is shuffled or copied into a first vector element (e.g. a 32-bit vector element) of vector 1530; a byte from each of the shifted first, second, third and fourth vector elements' second least significant byte position is shuffled or copied into a second vector element of vector 1530; a byte from each of the shifted first, second, third and fourth vector elements' second most significant byte position is shuffled or copied into a third vector element of vector 1530; and a byte from each of the shifted first, second, third and fourth vector elements' most significant byte position is shuffled or copied into a fourth vector element of vector 1530 to restore their original sub-byte order. In one embodiment, the shuffling or copying may be performed concurrently by SIMD shufflers according to a single micro-operation or micro-op generated from decoding one or more instructions to provide SIMD vector sub-byte decompression functionality. In alternative embodiments, the shuffling or copying may also be performed by SIMD shufflers or other SIMD execution units according to more than one micro-operation or micro-op.

In AND processing block 1542, a number of most significant bits of each byte of the vector 1530 are corrected or masked (e.g. using vector 1541). In one embodiment, as shown, correcting the number of bits sets six bits to zero in each byte of the 32-bit vector element. In some embodiments, SIMD vector sub-byte decompression of process 1501 may be implemented as a sequence of macro instructions, or of microcode instructions, or as a combination of both.

Figure 15B illustrates a flow diagram for an alternative embodiment of an example process 1502 of vector sub-byte decompression in preparation for use of an instruction to provide SIMD vector population count functionality. The example illustrated is an example of vector decompression from a packed 4-bit per element format to an 8-bits per byte element format. Since four also divides eight evenly, each byte of the packed 4-bit per element format contains two elements, one of each possible initial bit alignment.

In shuffle processing block 1510 of process 1502, a first byte, zero (0), and a second byte, two (2), of source 1514 containing at least a first two sub-byte elements, a and e, are shuffled or copied into a least significant portion of a first vector element (e.g. a 32-bit vector element) of vector 1515. A third byte, four (4), and a fourth byte, six (6), containing at least a second two sub-byte elements, i and m, are shuffled or copied into a most significant portion of the first vector element of vector 1515. Also shown in shuffle processing block 1510, a fifth byte, zero (0), and a sixth byte, two (2), of source 1514 containing at least a third two sub-byte elements, b and f, are shuffled or copied into a least significant portion of a second vector element of vector 1515, and a seventh byte, four (4), and an eighth byte, six (6), containing at least a fourth two sub-byte elements, j and n, are shuffled or copied into a most significant portion of the second vector element of vector 1515 in preparation for shifting. It will be appreciated that all of the first two and second two sub-byte elements may have the same initial bit alignments, and that all of the third two and fourth two sub-byte elements may also have the same initial bit alignments.
Also shown in shuffle processing block 1510, a ninth byte, one (1), and a tenth byte, three (3), of source 1514 containing at least a fifth two sub-byte elements, c and g, are shuffled or copied into a least significant portion of a third vector element of vector 1515, and an eleventh byte, five (5), and a twelfth byte, seven (7), containing at least a sixth two sub-byte elements, k and o, are shuffled or copied into a most significant portion of the third vector element of vector 1515. A thirteenth byte, one (1), and a fourteenth byte, three (3), of source 1514 containing at least a seventh two sub-byte elements, d and h, are shuffled or copied into a least significant portion of a fourth vector element of vector 1515, and a fifteenth byte, five (5), and a sixteenth byte, seven (7), containing at least an eighth two sub-byte elements, l and p, are shuffled or copied into a most significant portion of the fourth vector element of vector 1515 in preparation for shifting.

In shift processing block 1518, the first vector element of vector 1515, holding the first two and second two sub-byte elements (i.e. a, e, i and m), is shifted by a first shift count in vector 1522, zero (0); the second vector element, holding the third two and fourth two sub-byte elements (i.e. b, f, j and n), is shifted by a second shift count, four (4); the third vector element, holding the fifth two and sixth two sub-byte elements (i.e. c, g, k and o), is shifted by a third shift count, zero (0); and the fourth vector element, holding the seventh two and eighth two sub-byte elements (i.e. d, h, l and p), is shifted by a fourth shift count, four (4), to align the sub-byte elements to a least significant bit of their respective byte in vector 1525. In one embodiment, these shifts are performed concurrently by SIMD shifters on 32-bit vector elements of vector 1515. In alternative embodiments, smaller or larger shifts may be used instead, and not all of the shifts may be performed concurrently.

In shuffle processing 1528, a byte from each of the shifted first, second, third and fourth vector elements' least significant byte position is shuffled or copied into a first vector element (e.g. a 32-bit vector element) of vector 1530; a byte from each of the shifted first, second, third and fourth vector elements' second least significant byte position is shuffled or copied into a second vector element of vector 1530; a byte from each of the shifted first, second, third and fourth vector elements' second most significant byte position is shuffled or copied into a third vector element of vector 1530; and a byte from each of the shifted first, second, third and fourth vector elements' most significant byte position is shuffled or copied into a fourth vector element of vector 1530 to restore their original sub-byte order. In one embodiment, the shuffling or copying may be performed concurrently by SIMD shufflers according to a single micro-operation or micro-op generated from decoding one or more instructions to provide SIMD vector sub-byte decompression functionality. In alternative embodiments, the shuffling or copying may also be performed by SIMD shufflers or other SIMD execution units according to more than one micro-operation or micro-op.

In AND processing block 1544, a number of most significant bits of each byte of the vector 1530 are corrected or masked (e.g. using vector 1543). In one embodiment, as shown, correcting the number of bits sets four bits to zero in each byte of the 32-bit vector element. In some embodiments, SIMD vector sub-byte decompression of process 1502 may be implemented as a sequence of macro instructions, or of microcode instructions, or as a combination of both.
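The net effect of processes 1501 and 1502 is to widen packed sub-byte elements into one element per byte. Purely as an illustration of that end result (a scalar sketch under assumed names, not the SIMD shuffle, shift and mask implementation described above), the following C fragment performs the equivalent decompression of a packedDna-style 2-bit format into one byte per element:

#include <stdint.h>
#include <stdio.h>

/* Scalar equivalent of 2-bit to byte decompression: each byte of the
 * packed source holds four 2-bit elements, first element in the most
 * significant 2 bits, and each element is widened to its own byte.  */
static void decompress_2bit(const uint8_t *packed, int packed_len, uint8_t *out) {
    for (int i = 0; i < packed_len; i++)
        for (int j = 0; j < 4; j++)
            out[i * 4 + j] = (uint8_t)((packed[i] >> (6 - 2 * j)) & 0x3);
}

int main(void) {
    const uint8_t packed[2] = { 0x1B, 0x1B };   /* two bytes of TCAG (see packedDna) */
    uint8_t bytes[8];
    decompress_2bit(packed, 2, bytes);
    for (int i = 0; i < 8; i++)
        printf("%u ", bytes[i]);                /* prints 0 1 2 3 0 1 2 3 */
    printf("\n");
    return 0;
}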
It will be appreciated that processes 1501 and 1502 may be especially useful prior to executing an instruction to provide SIMD vector population count functionality for packed byte data. On the other hand, when an instruction to provide SIMD vector population count functionality directly for a packed 2-bit data format, or for a packed 4-bit data format, is supported, processing of processes 1501 and 1502 may become unnecessary.

Figure 16A illustrates an embodiment of an apparatus for executing an instruction to provide SIMD vector population count functionality. Embodiments of apparatus 1601 may be part of a pipeline 400 (e.g. execution stage 416) or part of a core 490 (e.g. execution unit(s) 462) for execution of an instruction to provide SIMD population count functionality. Embodiments of apparatus 1601 may be coupled with vector registers (e.g. physical register files unit(s) 458) each comprising one or more variable plurality of n variable sized data fields to store values of one or more variable plurality of n variable sized data elements. Embodiments of apparatus 1601 may also be coupled with a decode stage (e.g. decode 406) or a decoder (e.g. decode unit 440) to decode an instruction specifying a vector population count operation and a packed data size (e.g. as part of the instruction mnemonic itself, or as an operand, or in a control register). One or more execution units (e.g. execution apparatus 1601), responsive to the decoded instruction, may read a plurality of bits, according to the specified packed data size, of each packed data field in a portion of a source vector 1612 (e.g. either stored in a memory or in a register), wherein each of a first plurality of packed data fields in that portion of the source vector is to store a plurality of bits according to the specified packed data size. In one embodiment shown in the example of apparatus 1601, the plurality of bits stored in each of a first plurality of packed data fields is two. In alternative embodiments, some other plurality of bits may be stored in each of a first plurality of packed data fields.

For example, in apparatus 1601 packed data fields are stored in each of one or more portions of a first plurality of n data fields of source vector 1612, such that each packed data field in a portion of the source vector 1612 is to store a second plurality of two bits. In processing block 1620, responsive to an instruction for a SIMD 2-bit population count operation being executed in a processor, the packed data fields in this portion of n data fields of source vector 1612 are read and the occurrences of values equal to a predetermined value (e.g. 00 binary) are counted by first comparing the values read from the packed data fields in this portion for equality with the predetermined value and then counting the number of equalities in the POP 1630 counter. In one embodiment of an instruction for a SIMD 2-bit population count, the predetermined value (e.g. 00 binary) may be specified by the instruction as an immediate operand. In another embodiment the predetermined value may be one of a predetermined fixed set of values 1642. In another embodiment the predetermined value may be one of a set of values 1642 specified by the instruction as one or more elements in a register operand.
The result of processing block 1620, the counted occurrences equal to the predetermined value (e.g. 00 binary) may be stored in a portion of a destination 1652 corresponding to the portion of the n data fields of the source vector 1612, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1642).Optionally in processing block 1621, further responsive to the instruction for a SIMD 2-bit population count operation being executed, the occurrences of values in the packed data fields in this portion of n data fields of source vector 1612 equal to a second optional predetermined value (e.g. 01 binary) are counted by first comparing the values read from the packed data fields in this portion for equality with the second predetermined value and then counting the number of equalities in the POP 1631 counter. In one embodiment of an instruction for a SIMD 2-bit population count the second optional predetermined value (e.g. 01 binary) may be specified by the instruction as part of an immediate operand. In another embodiment the secondpredetermined value may also be one of a predetermined fixed set of values 1642. In another embodiment the second predetermined value may also be one of a set of values 1642 specified by the instruction as one or more elements in a register operand. The result of processing block 1621, the counted occurrences equal to the second predetermined value (e.g. 01 binary) may also be stored in a portion of a destination 1652 corresponding to the portion of the n data fields of the source vector 1612, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1642).Optionally in processing block 1622, further responsive to the instruction for a SIMD 2-bit population count operation being executed, the occurrences of values in the packed data fields in this portion of n data fields of source vector 1612 equal to a third optional predetermined value (e.g. 10 binary) are counted by first comparing the values read from the packed data fields in this portion for equality with the third predetermined value and then counting the number of equalities in the POP 1632 counter. In one embodiment of an instruction for a SIMD 2-bit population count the third optional predetermined value (e.g. 10 binary) may be specified by the instruction as part of an immediate operand. In another embodiment the third predetermined value may also be one of a predetermined fixed set of values 1642. In another embodiment the third predetermined value may also be one of a set of values 1642 specified by the instruction as one or more elements in a register operand. The result of processing block 1622, the counted occurrences equal to the third predetermined value (e.g. 10 binary) may also be stored in a portion of a destination 1652 corresponding to the portion of the n data fields of the source vector 1612, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1642).Optionally in processing block 1623, further responsive to the instruction for a SIMD 2-bit population count operation being executed, the occurrences of values in the packed data fields in this portion of n data fields of source vector 1612 equal to a fourth optional predetermined value (e.g. 11 binary) are counted by first comparing the values read from the packed data fields in this portion for equality with the fourth predetermined value and then counting the number of equalities in the POP 1633 counter. 
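To make the compare-and-count behavior of processing blocks 1620 through 1623 concrete, the following C sketch counts, for one portion of packed 2-bit fields, how many fields equal each of the four possible predetermined values (00, 01, 10 and 11 binary), accumulating the results as the POP 1630-1633 counters would. The function name, the byte-array interface and the in-byte field ordering are assumptions made for illustration only; they are not features of the described apparatus.

#include <stdint.h>
#include <stddef.h>

/* Scalar model of processing blocks 1620-1623: compare every packed
 * 2-bit field in one source portion against each predetermined value
 * and count the matches. */
static void popcount_2bit_portion(const uint8_t *portion, size_t nbytes,
                                  uint32_t counts[4])
{
    for (int v = 0; v < 4; v++)
        counts[v] = 0;

    for (size_t i = 0; i < nbytes; i++) {
        for (unsigned f = 0; f < 4; f++) {                       /* four 2-bit fields per byte */
            uint8_t field = (uint8_t)((portion[i] >> (2 * f)) & 0x3);
            for (int v = 0; v < 4; v++)                          /* compare for equality       */
                if (field == (uint8_t)v)
                    counts[v]++;                                 /* accumulate the match count */
        }
    }
}

The same compare-and-accumulate structure applies to whichever of the optional predetermined values are handled by processing blocks 1621, 1622 and 1623.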
In one embodiment of an instruction for a SIMD 2-bit population count the fourth optional predetermined value (e.g. 11 binary) may be specified by the instruction as part of an immediate operand. In another embodiment the fourthpredetermined value may also be one of a predetermined fixed set of values 1642. In another embodiment the fourth predetermined value may also be one of a set of values 1642 specified by the instruction as one or more elements in a register operand. The result of processing block1623, the counted occurrences equal to the fourth predetermined value (e.g. 11 binary) may also be stored in a portion of a destination 1652 corresponding to the portion of the n data fields of the source vector 1612, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1642).It will be appreciated that SIMD population count instructions may be used for genome sequencing and alignment processing. Similar compression schemes are also employed more generally in other databases, data mining applications, and search applications, such that these applications may also use SIMD population count instructions.Common operations in genome alignment are counting the occurrences of nucleotides within a string in order to match or partially match base-pair strings. With a packed data format (such as packedDna) the techniques that might otherwise involve the use of look-up tables, together with shift and mask operations in order to count the different nucleotide occurrences within a string, may use SIMD population count instructions instead. By using the SIMD population count instructions, many of the operations formerly required to count the different nucleotide occurrences within a string may be eliminated. Thus the performance of applications such as genome sequencing and alignment processing, and more generally for database applications, such as data mining, and search applications may be significantly improved.Figure 16B illustrates an alternative embodiment of an apparatus 1602 for executing an instruction to provide SIMD vector population count functionality. Embodiments of apparatus 1602 may also be part of a pipeline 400 (e.g. execution stage 416) or part of a core 490 (e.g. execution unit(s) 462) for execution of an instruction to provide SIMD population count functionality. Embodiments of apparatus 1602 may be coupled with vector registers (e.g.physical register files unit(s) 458) each comprising one or more variable plurality of n variable sized data fields to store values of one or more variable plurality of n variable sized data elements. Embodiments of apparatus 1602 may also be coupled with a decode stage (e.g. decode 406) or a decoder (e.g. decode unit 440) to decode an instruction specifying a vector population count operation and a packed data size (e.g. as part of the instruction mnemonic itself, or as an operand, or in a control register). One or more execution units (e.g. execution apparatus 1602) responsive to the decoded instruction, may read a plurality of bits, according to the specified packed data size, of each packed data field in a portion of a source vector 1612 (e.g. either stored in a memory or in a register) wherein each of a first plurality of packed data fields in that portion of the source vector is to store a plurality of bits according to the specified packed data size. In one embodiment shown in the example of apparatus 1602, the plurality of bits stored in each of a first plurality of packed data fields is four bits. 
In alternative embodiments, some other plurality of bits may be stored in each of a first plurality of packed data fields.For example, in apparatus 1602 packed data fields are stored in each of one or more portions of a first plurality of n data fields of source vector 1614, such that each packed data field in a portion of the source vector 1614 is to store a second plurality of four bits. In processing block 1640, responsive to an instruction for a SIMD 4-bit population count operation being executed in a processor, the packed data fields in this portion of n data fields of source vector 1614 are read and the occurrences of values equal to a predetermined value (e.g. T) are counted by first comparing the values read from the packed data fields in this portion for equality with the predetermined value and then counting the number of equalities in the POP 1630 counter. In one embodiment of an instruction for a SIMD 4-bit population count the predetermined value (e.g. T) may be specified by the instruction as an immediate operand. In another embodiment the predetermined value may be one of a predetermined fixed set of values 1644. In another embodiment the predetermined value may be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1640, the counted occurrences equal to the predetermined value (e.g. T) may be stored in a portion of a destination 1654 corresponding to the portion of the n data fields of the source vector 1614, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1644).Optionally in processing block 1641, further responsive to the instruction for a SIMD 4-bit population count operation being executed, the occurrences of values in the packed data fields in this portion of n data fields of source vector 1614 equal to a second optional predetermined value (e.g. C) are counted by first comparing the values read from the packed data fields in this portion for equality with the second predetermined value and then counting the number of equalities in the POP 1631 counter. In one embodiment of an instruction for a SIMD 4-bit population count the second optional predetermined value (e.g. C) may be specified by the instruction as part of an immediate operand. In another embodiment the second predetermined value may also be one of a predetermined fixed set of values 1644. In another embodiment the second predetermined value may also be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1641, the counted occurrences equal to the second predetermined value (e.g. C) may also be stored in a portion of a destination 1654 corresponding to the portion of the n data fields of the source vector 1614, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1644).Optionally in processing block 1642, further responsive to the instruction for a SIMD 4-bit population count operation being executed, the occurrences of values in the packed data fields in this portion of n data fields of source vector 1614 equal to a third optional predetermined value (e.g. A) are counted by first comparing the values read from the packed data fields in this portion for equality with the third predetermined value and then counting the number of equalities in the POP 1632 counter. 
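For the 4-bit case of apparatus 1602, the following C sketch illustrates the same counting pattern applied to packed nibbles, such as nucleotide codes in a packedDna-style format. Because the description does not fix particular nibble encodings for T, C, A and G, the codes are passed in as parameters; the function name and byte-array interface are likewise illustrative assumptions rather than features of the apparatus.

#include <stdint.h>
#include <stddef.h>

/* Scalar model of processing blocks 1640-1643: count, within one portion
 * of the source, how many packed 4-bit fields equal each of up to four
 * predetermined nibble codes. */
static void popcount_4bit_portion(const uint8_t *portion, size_t nbytes,
                                  const uint8_t codes[4], uint32_t counts[4])
{
    for (int v = 0; v < 4; v++)
        counts[v] = 0;

    for (size_t i = 0; i < nbytes; i++) {
        uint8_t lo = (uint8_t)(portion[i] & 0x0F);         /* low nibble  */
        uint8_t hi = (uint8_t)((portion[i] >> 4) & 0x0F);  /* high nibble */
        for (int v = 0; v < 4; v++) {
            if (lo == codes[v]) counts[v]++;               /* compare and accumulate */
            if (hi == codes[v]) counts[v]++;
        }
    }
}

The resulting counts correspond to what would be stored in destination 1654 for the portion; the ways in which each optional predetermined value may be specified are described in the remainder of this discussion.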
In one embodiment of an instruction for a SIMD 4-bit population count the third optional predetermined value (e.g. A) may be specified by the instruction as part of an immediate operand. In another embodiment the third predetermined value may also be one of a predetermined fixed set of values 1644. In another embodiment the third predetermined value may also be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1642, the counted occurrences equal to the third predetermined value (e.g. A) may also be stored in a portion of a destination 1654 corresponding to the portion of the n data fields of the source vector 1614, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1644).Optionally in processing block 1643, further responsive to the instruction for a SIMD 4-bit population count operation being executed, the occurrences of values in the packed data fields in this portion of n data fields of source vector 1614 equal to a fourth optional predetermined value (e.g. G) are counted by first comparing the values read from the packed data fields in this portion for equality with the fourth predetermined value and then counting the number of equalities in the POP 1633 counter. In one embodiment of an instruction for a SIMD 4-bit population count the fourth optional predetermined value (e.g. G) may be specified by the instruction as part of an immediate operand. In another embodiment the fourth predetermined value may also be one of a predetermined fixed set of values 1644. In another embodiment the fourth predetermined value may also be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1643, the counted occurrences equal to the fourth predetermined value (e.g. G) may also be stored in a portion of a destination 1654 corresponding to the portion of the n data fields of the source vector 1614, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1644).Figure 16C illustrates another alternative embodiment of an apparatus 1603 for executing an instruction to provide SIMD vector population count functionality. Embodiments of apparatus 1603 may also be part of a pipeline 400 (e.g. execution stage 416) or part of a core 490 (e.g. execution unit(s) 462) for execution of an instruction to provide SIMD population count functionality. Embodiments of apparatus 1603 may be coupled with vector registers (e.g.physical register files unit(s) 458) each comprising one or more variable plurality of n variable sized data fields to store values of one or more variable plurality of n variable sized data elements. Embodiments of apparatus 1603 may also be coupled with a decode stage (e.g. decode 406) or a decoder (e.g. decode unit 440) to decode an instruction specifying a vector population count operation and a packed data size (e.g. as part of the instruction mnemonic itself, or as an operand, or in a control register). One or more execution units (e.g. execution apparatus 1603) responsive to the decoded instruction, may read a plurality of bits, according to the specified packed data size, of each packed data field in a portion of a source vector 1618 (e.g. either stored in a memory or in a register) wherein each of a first plurality of packed data fields in that portion of the source vector is to store a plurality of bits according to the specified packed data size. 
In one embodiment shown in the example of apparatus 1603, the plurality of bits stored in each of a first plurality of packed data fields is eight bits. In alternative embodiments, some other plurality of bits may be stored in each of a first plurality of packed data fields.For example, in apparatus 1603 packed data fields are stored in each of one or more portions of a first plurality of n data fields of source vector 1618, such that each packed data field in a portion of the source vector 1618 is to store a second plurality of eight bits. In processing block 1680, responsive to an instruction for a SIMD 8-bit population count operation being executed in a processor, the packed data fields in this portion of n data fields of source vector 1618 are read and the occurrences of values equal to a predetermined value (e.g. 0x58) are counted by first comparing the values read from the packed data fields in this portion for equality with the predetermined value and then counting the number of equalities in the POP 1630 counter. In one embodiment of an instruction for a SIMD 8-bit population count thepredetermined value (e.g. 0x58) may be specified by the instruction as an immediate operand. In another embodiment the predetermined value may be one of a predetermined fixed set of values 1644. In another embodiment the predetermined value may be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1680, the counted occurrences equal to the predetermined value (e.g. 0x58) may be stored in a portion of a destination 1654 corresponding to the portion of the n data fields of the source vector 1618, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1644).Optionally in processing block 1681, further responsive to the instruction for a SIMD 8-bit population count operation being executed, the occurrences of values in the packed data fields in this portion of n data fields of source vector 1618 equal to a second optional predetermined value (e.g. 0x43) are counted by first comparing the values read from the packed data fields in this portion for equality with the second predetermined value and then counting the number of equalities in the POP 1631 counter. In one embodiment of an instruction for a SIMD 8-bit population count the second optional predetermined value (e.g. 0x43) may be specified by the instruction as part of an immediate operand. In another embodiment the second predetermined value may also be one of a predetermined fixed set of values 1644. In another embodiment the second predetermined value may also be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1681, the counted occurrences equal to the second predetermined value (e.g. 0x43) may also be stored in a portion of a destination 1654 corresponding to the portion of the n data fields of the source vector 1618, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1644).Optionally in processing block 1682, further responsive to the instruction for a SIMD 8-bit population count operation being executed, the occurrences of values in the packed data fields in this portion of n data fields of source vector 1618 equal to a third optional predetermined value (e.g. 
0x41) are counted by first comparing the values read from the packed data fields in this portion for equality with the third predetermined value and then counting the number of equalities in the POP 1632 counter. In one embodiment of an instruction for a SIMD 8-bit population count the third optional predetermined value (e.g. 0x41) may be specified by the instruction as part of an immediate operand. In another embodiment the third predetermined value may also be one of a predetermined fixed set of values 1644. In another embodiment the third predetermined value may also be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1682, the counted occurrences equal to the third predetermined value (e.g. 0x41) may also be stored in a portion of a destination 1654 corresponding to the portion of the n data fields of the source vector 1618, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1644).Optionally in processing block 1683, further responsive to the instruction for a SIMD 8-bit population count operation being executed, the occurrences of values in the packed data fields in this portion of n data fields of source vector 1618 equal to a fourth optional predetermined value (e.g. 0x47) are counted by first comparing the values read from the packed data fields in this portion for equality with the fourth predetermined value and then counting the number of equalities in the POP 1633 counter. In one embodiment of an instruction for a SIMD 8-bit population count the fourth optional predetermined value (e.g. 0x47) may be specified by the instruction as part of an immediate operand. In another embodiment the fourth predetermined value may also be one of a predetermined fixed set of values 1644. In another embodiment the fourth predetermined value may also be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1683, the counted occurrences equal to the fourth predetermined value (e.g. 0x47) may also be stored in a portion of a destination 1654 corresponding to the portion of the n data fields of the source vector 1618, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1644).Figure 16D illustrates another alternative embodiment of an apparatus 1604 for executing an instruction to provide SIMD vector population count functionality. Embodiments of apparatus 1604 may also be part of a pipeline 400 (e.g. execution stage 416) or part of a core 490 (e.g. execution unit(s) 462) for execution of an instruction to provide SIMD population count functionality. Embodiments of apparatus 1604 may be coupled with vector registers (e.g.physical register files unit(s) 458) each comprising one or more variable plurality of n variable sized data fields to store values of one or more variable plurality of n variable sized data elements. Embodiments of apparatus 1604 may also be coupled with a decode stage (e.g. decode 406) or a decoder (e.g. decode unit 440) to decode an instruction specifying a vector population count operation and a packed data size (e.g. as part of the instruction mnemonic itself, or as an operand, or in a control register). One or more execution units (e.g. 
execution apparatus 1604) responsive to the decoded instruction, may read a plurality of bits, according to the specified packed data size, of each packed data field in a portion of a source vector 1618 (e.g. either stored in a memory or in a register) wherein each of a first plurality of packed data fields in that portion of the source vector is to store a plurality of bits according to the specified packed data size. In one embodiment shown in the example of apparatus 1604, the plurality of bits stored in each of a first plurality of packed data fields is eight bits. In alternative embodiments, some other plurality of bits may be stored in each of a first plurality of packed data fields.For example, in apparatus 1604 packed data fields are stored in each of one or more portions of a plurality of n data fields of source vector 1618, such that each packed data field in a portion of the source vector 1618 is to store a plurality of eight bits. In processing block 1684, responsive to one embodiment of an instruction for a SIMD 8-bit population count operation being executed in a processor, the packed data fields in a portion (e.g. the least significant portion) of n data fields of source vector 1618 are read and the occurrences of values equal to a predetermined value (e.g. 0x58) are counted by first comparing the values read from the packed data fields in this portion for equality with the predetermined value and then counting the number of equalities in the POP 1634 counter. In one embodiment of an instruction for a SIMD 8-bit population count the predetermined value (e.g. 0x58) may be specified by the instruction as an immediate operand. In another embodiment the predetermined value may be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1684, the counted occurrences (e.g. in the least significant portion) equal to the predetermined value (e.g. 0x58) may be stored in a portion of a destination 1654 corresponding to the portion of the n data fields of the source vector 1618, as one or more counts for the corresponding one or more predetermined values (e.g. 0x58).In processing block 1685, further responsive to the instruction for a SIMD 8-bit population count operation being executed, the occurrences of values in the packed data fields in a portion (e.g. the second least significant portion) of n data fields of source vector 1618 equal to a second optional predetermined value (e.g. 0x43) are counted by first comparing the values read from the packed data fields in this portion for equality with the second predetermined value and then counting the number of equalities in the POP 1635 counter. In one embodiment of an instruction for a SIMD 8-bit population count the second optional predetermined value (e.g. 0x43) may be specified by the instruction as part of an immediate operand. In another embodiment the second predetermined value may also be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1685, the counted occurrences (e.g. in the second least significant portion) equal to the second predetermined value (e.g. 0x43) may also be stored in a portion of a destination 1654 corresponding to the portion of the n data fields of the source vector 1618, as one or more counts for the corresponding one or more predetermined values (e.g. 
0x43).In processing block 1686, further responsive to the instruction for a SIMD 8-bit population count operation being executed, the occurrences of values in the packed data fields in a portion (e.g. the third least significant portion) of n data fields of source vector 1618 equal to a third optional predetermined value (e.g. 0x41) are counted by first comparing the values read from the packed data fields in this portion for equality with the third predetermined value and then counting the number of equalities in the POP 1636 counter. In one embodiment of an instruction for a SIMD 8-bit population count the third optional predetermined value (e.g. 0x41) may be specified by the instruction as part of an immediate operand. In another embodiment the third predetermined value may also be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1686, the counted occurrences (e.g. in the third least significant portion) equal to the third predetermined value (e.g. 0x41) may also be stored in a portion of a destination 1654 corresponding to the portion of the n data fields of the source vector 1618, as one or more counts for the corresponding one or more predetermined values (e.g. 0x41).In processing block 1687, further responsive to the instruction for a SIMD 8-bit population count operation being executed, the occurrences of values in the packed data fields in a portion (e.g. the fourth least significant portion) of n data fields of source vector 1618 equal to a fourth optional predetermined value (e.g. 0x47) are counted by first comparing the values read from the packed data fields in this portion for equality with the fourth predetermined value and then counting the number of equalities in the POP 1637 counter. In one embodiment of an instruction for a SIMD 8-bit population count the fourth optional predetermined value (e.g. 0x47) may be specified by the instruction as part of an immediate operand. In another embodiment the fourth predetermined value may also be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1687, the counted occurrences (e.g. in the fourth least significant portion) equal to the fourth predetermined value (e.g. 0x47) may also be stored in a portion of a destination 1654 corresponding to the portion of the n data fields of the source vector 1618, as one or more counts for the corresponding one or more predetermined values (e.g. 0x47).Figure 16E illustrates another alternative embodiment of an apparatus for executing an instruction to provide SIMD vector population count functionality. Embodiments of apparatus 1605 may also be part of a pipeline 400 (e.g. execution stage 416) or part of a core 490 (e.g. execution unit(s) 462) for execution of an instruction to provide SIMD population count functionality. Embodiments of apparatus 1605 may be coupled with vector registers (e.g.physical register files unit(s) 458) each comprising one or more variable plurality of n variable sized data fields to store values of one or more variable plurality of n variable sized data elements. Embodiments of apparatus 1605 may also be coupled with a decode stage (e.g. decode 406) or a decoder (e.g. decode unit 440) to decode an instruction specifying a vector population count operation and a packed data size (e.g. as part of the instruction mnemonic itself, or as an operand, or in a control register). One or more execution units (e.g. 
execution apparatus 1605) responsive to the decoded instruction, may read a plurality of bits, according to the specified packed data size, of each packed data field in a portion of a source vector 1618 (e.g. either stored in a memory or in a register) wherein each of a first plurality of packed data fields in that portion of the source vector is to store a plurality of bits according to the specified packed data size. In one embodiment shown in the example of apparatus 1605, the plurality of bits stored in each of a first plurality of packed data fields is eight bits. In alternative embodiments, some other plurality of bits may be stored in each of a first plurality of packed data fields.For example, in apparatus 1605 packed data fields are stored in each of one or more portions of a plurality of n data fields of source vector 1618, such that each packed data field in a portion of the source vector 1618 is to store a plurality of eight bits. In processing block 1648, responsive to one embodiment of an instruction for a SIMD 8-bit population count operation being executed in a processor, the packed data fields in a portion (e.g. the least significant portion) of n data fields of source vector 1618 are read and the occurrences of values equal to one or more predetermined values (e.g. 1644) are counted by first comparing the values read from the packed data fields in this portion for equality with each of the one or morepredetermined values (e.g. 1644) and then counting the number of equalities in the POP 1643 counters. In one embodiment of an instruction for a SIMD 8-bit population count one or more predetermined values (e.g. 1644) may be specified by the instruction as an immediate operand. In another embodiment the one or more predetermined values may be one of a set of values 1644 specified by the instruction as one or more elements in a register operand. The result of processing block 1648, the counted occurrences (e.g. in the least significant portion) equal to each of the one or more predetermined values (e.g. 1644) may be stored in a portion of a destination 1650 corresponding to the portion of the n data fields of the source vector 1618, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1644).In processing block 1658, further responsive to the instruction for a SIMD 8-bit population count operation being executed, the occurrences of values in the packed data fields in a portion (e.g. the second least significant portion) of n data fields of source vector 1618 equal to each of the one or more predetermined values (e.g. 1644) are counted by first comparing the values read from the packed data fields in this portion for equality with the one or more predetermined values (e.g. 1644) and then counting the number of equalities in the POP 1653 counters. The result of processing block 1658, the counted occurrences (e.g. in the second least significant portion) equal to each of the one or more predetermined values (e.g. 1644) may also be stored in a portion of a destination 1650 corresponding to the portion of the n data fields of the source vector 1618, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1644).In processing block 1668, further responsive to the instruction for a SIMD 8-bit population count operation being executed, the occurrences of values in the packed data fields in a portion (e.g. 
the third least significant portion) of n data fields of source vector 1618 equal to one or more predetermined values (e.g. 1644) are counted by first comparing the values read from the packed data fields in this portion for equality with the one or more predetermined values and then counting the number of equalities in the POP 1663 counters. The result of processing block 1668, the counted occurrences (e.g. in the third least significant portion) equal to each of the one or more predetermined values (e.g. 1644) may also be stored in a portion of a destination 1650 corresponding to the portion of the n data fields of the source vector 1618, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1644).

In processing block 1678, further responsive to the instruction for a SIMD 8-bit population count operation being executed, the occurrences of values in the packed data fields in a portion (e.g. the fourth least significant portion) of n data fields of source vector 1618 equal to one or more predetermined values (e.g. 1644) are counted by first comparing the values read from the packed data fields in this portion for equality with the one or more predetermined values and then counting the number of equalities in the POP 1673 counter. The result of processing block 1678, the counted occurrences (e.g. in the fourth least significant portion) equal to each of the one or more predetermined values (e.g. 1644) may also be stored in a portion of a destination 1650 corresponding to the portion of the n data fields of the source vector 1618, as one or more counts for each of the corresponding one or more predetermined values (e.g. 1644).

It will be appreciated that the apparatus disclosed herein for executing SIMD population count instructions may be used for genome sequencing and alignment processing to improve computational efficiency and reduce power consumption. Similar compression schemes are also employed more generally in other databases, data mining applications, and search applications, such that these applications may also use the apparatus disclosed herein for executing SIMD population count instructions to improve computational efficiency and reduce power consumption.

Figure 17A illustrates a flow diagram for one embodiment of an example process 1701 for executing an instruction to provide SIMD vector population count functionality. In processing block 1710 of process 1701 data fields are stored, in one or more portions of n data fields of a source vector, such that each data field in a portion of the source vector is to store a plurality of bits. In processing block 1720 an instruction for a SIMD population count operation is executed in a processor. Then in processing block 1730, for the data fields in this portion of the n data fields of the source vector, the occurrences of values equal to one or more predetermined values are counted. In processing block 1740, the counted occurrences are stored in a portion of the destination corresponding to the portion of the n data fields of the source vector, as one or more counts for each of the corresponding one or more predetermined values. In processing block 1790, a determination is made whether or not the process 1701 is finished processing all portions of the source vector. If not, processing reiterates beginning in processing block 1730. Otherwise processing ends in processing block 1799.

Figure 17B illustrates a flow diagram for an alternative embodiment of an example process 1702 for executing an instruction to provide SIMD vector population count functionality. In processing block 1712 of process 1702 data fields are stored, in one or more portions of n 2-bit data fields of a source vector, such that each data field in a portion of the source vector is to store a pair of bits. In processing block 1722 an instruction for a SIMD 2-bit population count operation is executed in a processor. Then in processing block 1732, for the 2-bit data fields in this portion of the n 2-bit data fields of the source vector, the occurrences of values equal to one or more predetermined values are counted. In processing block 1742, the counted occurrences are stored in a portion of the destination corresponding to the portion of the n 2-bit data fields of the source vector, as one or more counts for each of the corresponding one or more predetermined values. In processing block 1790, a determination is made whether or not the process 1702 is finished processing all portions of the source vector. If not, processing reiterates beginning in processing block 1732. Otherwise processing ends in processing block 1799.

Figure 17C illustrates a flow diagram for another alternative embodiment of an example process 1704 for executing an instruction to provide SIMD vector population count functionality. In processing block 1714 of process 1704 data fields are stored, in one or more portions of n 4-bit data fields of a source vector, such that each data field in a portion of the source vector is to store a nibble of four bits. In processing block 1724 an instruction for a SIMD 4-bit population count operation is executed in a processor. Then in processing block 1734, for the 4-bit data fields in this portion of the n 4-bit data fields of the source vector, the occurrences of values equal to one or more predetermined values are counted. In processing block 1744, the counted occurrences are stored in a portion of the destination corresponding to the portion of the n 4-bit data fields of the source vector, as one or more counts for each of the corresponding one or more predetermined values. In processing block 1790, a determination is made whether or not the process 1704 is finished processing all portions of the source vector. If not, processing reiterates beginning in processing block 1734. Otherwise processing ends in processing block 1799.

Figure 17D illustrates a flow diagram for another alternative embodiment of an example process 1708 for executing an instruction to provide SIMD vector population count functionality. In processing block 1718 of process 1708 data fields are stored, in one or more portions of n 8-bit data fields of a source vector, such that each data field in a portion of the source vector is to store a byte of eight bits. In processing block 1728 an instruction for a SIMD 8-bit population count operation is executed in a processor. Then in processing block 1738, for the 8-bit data fields in this portion of the n 8-bit data fields of the source vector, the occurrences of values equal to one or more predetermined values are counted. In processing block 1748, the counted occurrences are stored in a portion of the destination corresponding to the portion of the n 8-bit data fields of the source vector, as one or more counts for each of the corresponding one or more predetermined values. In processing block 1790, a determination is made whether or not the process 1708 is finished processing all portions of the source vector. If not, processing reiterates beginning in processing block 1738. Otherwise processing ends in processing block 1799.

Figure 18A illustrates a flow diagram for one embodiment of an example process 1801 for executing an instruction to provide SIMD vector population count functionality. In processing block 1810 of process 1801 packed data fields are stored, in each of one or more portions of a source vector, such that each packed data field in a portion of the source vector is to store a plurality of bits. In processing block 1820 an instruction is decoded, the instruction specifying a SIMD population count operation and a packed data size. Then in processing block 1830, responsive to the decoded instruction, the plurality of bits are read from each of the packed data fields in a portion of the one or more portions of the source vector. In processing block 1840, for the packed data fields in this portion of the source vector, the occurrences of values equal to one or more predetermined values are counted. In processing block 1850, the counted occurrences are stored in a portion of the destination corresponding to the source vector portion, as one or more counts corresponding to the one or more predetermined values. In processing block 1890, a determination is made whether or not the process 1801 is finished processing all portions of the source vector. If not, processing reiterates beginning in processing block 1830. Otherwise processing ends in processing block 1899.

Figure 18B illustrates a flow diagram for an alternative embodiment of an example process 1802 for executing an instruction to provide SIMD vector population count functionality. In processing block 1812 of process 1802 packed data fields are stored, in each of one or more portions of a source vector, such that each packed data field in a portion of the source vector is to store a pair of bits. In processing block 1822 an instruction is decoded, the instruction specifying a SIMD population count operation and a packed data size. Then in processing block 1832, responsive to the decoded instruction, the pair of bits is read from each of the packed data fields in a portion of the one or more portions of the source vector. In processing block 1842, for the packed data fields in this portion of the source vector, the occurrences of values equal to one or more predetermined values are counted. In processing block 1852, the counted occurrences are stored in a portion of the destination corresponding to the source vector portion, as one or more counts corresponding to the one or more predetermined values. In processing block 1890, a determination is made whether or not the process 1802 is finished processing all portions of the source vector. If not, processing reiterates beginning in processing block 1832. Otherwise processing ends in processing block 1899.

Figure 18C illustrates a flow diagram for another alternative embodiment of an example process 1804 for executing an instruction to provide SIMD vector population count functionality. In processing block 1814 of process 1804 packed data fields are stored, in each of one or more portions of a source vector, such that each packed data field in a portion of the source vector is to store a 4-bit nibble of bits. In processing block 1824 an instruction is decoded, the instruction specifying a SIMD population count operation and a packed data size. Then in processing block 1834, responsive to the decoded instruction, the nibble of bits is read from each of the packed data fields in a portion of the one or more portions of the source vector. In processing block 1844, for the packed data fields in this portion of the source vector, the occurrences of values equal to one or more predetermined values are counted. In processing block 1854, the counted occurrences are stored in a portion of the destination corresponding to the source vector portion, as one or more counts corresponding to the one or more predetermined values. In processing block 1890, a determination is made whether or not the process 1804 is finished processing all portions of the source vector. If not, processing reiterates beginning in processing block 1834. Otherwise processing ends in processing block 1899.

Figure 18D illustrates a flow diagram for another alternative embodiment of an example process 1808 for executing an instruction to provide SIMD vector population count functionality. In processing block 1818 of process 1808 packed data fields are stored, in each of one or more portions of a source vector, such that each packed data field in a portion of the source vector is to store a byte of data. In processing block 1828 an instruction is decoded, the instruction specifying a SIMD population count operation and a packed data size. Then in processing block 1838, responsive to the decoded instruction, the byte of data is read from each of the packed data fields in a portion of the one or more portions of the source vector. In processing block 1848, for the packed data fields in this portion of the source vector, the occurrences of values equal to one or more predetermined values are counted. In processing block 1858, the counted occurrences are stored in a portion of the destination corresponding to the source vector portion, as one or more counts corresponding to the one or more predetermined values. In processing block 1890, a determination is made whether or not the process 1808 is finished processing all portions of the source vector. If not, processing reiterates beginning in processing block 1838. Otherwise processing ends in processing block 1899.

It will be appreciated that SIMD population count instructions may be used to improve genome sequencing and alignment processing efficiency. Similar compression schemes are also employed more generally in other databases, data mining applications, and search applications, such that these applications may also use SIMD population count instructions to improve efficiency.

Common operations in genome alignment are counting the occurrences of nucleotides within a string in order to match or partially match base-pair strings. With a packed data format (such as packedDna), the techniques that might otherwise involve the use of look-up tables, together with shift and mask operations, in order to count the different nucleotide occurrences within a string may use SIMD population count instructions instead. By using the SIMD population count instructions, many of the operations formerly required to count the different nucleotide occurrences within a string may be eliminated. Thus the performance of applications such as genome sequencing and alignment processing, and more generally of database, data mining, and search applications, may be significantly improved.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches.
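As one purely software illustration of the flow described for processes 1801 and 1802 (and, with different field widths, for the processes of Figures 18C and 18D), the following C sketch walks a source buffer portion by portion, counting occurrences of each predetermined value within each portion and recording the counts for that portion. The function name, the 16-byte portion size and the flat layout of the per-portion counts are assumptions made for the sketch; they are not features recited for the instruction itself.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

enum { PORTION_BYTES = 16 };  /* e.g. one 128-bit lane treated as one portion */

/* Scalar model of processes 1801/1802: for each portion of the source,
 * read the packed fields according to the packed data size, count
 * occurrences of each predetermined value, and record the counts for
 * the corresponding destination portion. */
static void simd_popcount_model(const uint8_t *src, size_t nbytes,
                                unsigned field_bits,            /* 2, 4 or 8 */
                                const uint8_t *values, size_t nvalues,
                                uint32_t *dst_counts)           /* nvalues counts per portion */
{
    size_t nportions = nbytes / PORTION_BYTES;
    unsigned fields_per_byte = 8u / field_bits;
    uint8_t mask = (uint8_t)((1u << field_bits) - 1u);

    for (size_t p = 0; p < nportions; p++) {                    /* loop until all portions done */
        uint32_t *counts = dst_counts + p * nvalues;            /* destination portion          */
        memset(counts, 0, nvalues * sizeof *counts);

        for (size_t i = 0; i < PORTION_BYTES; i++) {            /* read the packed fields       */
            uint8_t byte = src[p * PORTION_BYTES + i];
            for (unsigned f = 0; f < fields_per_byte; f++) {
                uint8_t field = (uint8_t)((byte >> (f * field_bits)) & mask);
                for (size_t v = 0; v < nvalues; v++)
                    if (field == values[v])
                        counts[v]++;                            /* count the occurrences        */
            }
        }
    }
}

A single decoded instruction would perform the per-portion work in parallel across SIMD lanes; the outer loop here simply stands in for iterating over the portions of the source vector.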
Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

Thus, techniques for performing one or more instructions according to at least one embodiment are disclosed.
While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modifiable in arrangement and detail as facilitated by enabling technological advancements without departing from the principles of the present disclosure or the scope of the accompanying claims. |
The invention comprises processing deposited oxide and grown oxide materials. In one implementation, a substrate is provided to have outwardly exposed grown oxide material and having deposited oxide material. The grown oxide material is etched substantially selective relative to the deposited oxide material. In another considered aspect, a silicon surface is thermally oxidized to form substantially undoped silicon dioxide over a substrate. A substantially undoped silicon dioxide layer is chemical vapor deposited over the substrate, with at least some of the thermally grown silicon dioxide being outwardly exposed. The exposed thermally grown silicon dioxide layer is vapor etched substantially selective relative to the deposited silicon dioxide layer using an etch chemistry comprising substantially anhydrous HF and an organic primer. |
What is claimed is: 1. A semiconductor processing method comprising: depositing a substantially undoped silicon dioxide layer over a substrate; depositing substantially undoped silicon onto the deposited silicon dioxide layer; thermally oxidizing the silicon to form a thermal silicon dioxide layer on the deposited silicon dioxide layer; and etching at least a portion of the thermal silicon dioxide layer selectively relative to the deposited silicon dioxide layer. 2. The method of claim 1 wherein etching chemistry during etching comprises vapor HF, and the etching is conducted at a temperature ranging from about 60 DEG C. to about 150 DEG C. 3. The method of claim 1 wherein etching chemistry during etching comprises vapor HF, and the etching is conducted at a pressure ranging from about 10 Torr to about 300 Torr. 4. The method of claim 1 wherein etching chemistry during etching comprises vapor HF, and the etching is conducted at a temperature ranging from about 60 DEG C. to about 150 DEG C. and at a pressure ranging from about 10 Torr to about 300 Torr. 5. A semiconductor processing method comprising: chemical vapor depositing a substantially undoped silicon dioxide layer over a substrate by decomposition of tetraethylorthosilicate; depositing substantially undoped silicon onto the deposited silicon dioxide layer; thermally oxidizing the silicon to form a thermal silicon dioxide layer on the deposited silicon dioxide layer; and etching at least a portion of the thermal silicon dioxide layer selectively relative to the deposited silicon dioxide layer using an etch chemistry comprising substantially anhydrous HF and an organic primer. 6. The method of claim 5 wherein the substantially anhydrous HF has less than or equal to 0.1% water by volume. 7. The method of claim 1 wherein the etching comprises using an etch chemistry comprising substantially anhydrous HF. 8. The method of claim 1 wherein the etching comprises using an etch chemistry comprising substantially anhydrous HF having less than or equal to 0.1% water by volume. 9. The method of claim 1 wherein the deposited oxide material is not outwardly exposed immediately prior to commencing said etching of the thermal oxide material. 10. The method of claim 1 wherein the deposited oxide material is outwardly exposed immediately prior to commencing said etching of the thermal oxide material. 11. The method of claim 1 wherein the etching comprises using an etch chemistry comprising substantially anhydrous HF and an organic primer. 12. The method of claim 11 wherein the organic primer is selected from the group consisting of alcohols and ketones and mixtures thereof. 13. The method of claim 12 wherein the organic primer comprises an alcohol. 14. The method of claim 12 wherein the organic primer comprises a ketone. 15. The method of claim 1 wherein the etching comprises using an etch chemistry comprising substantially anhydrous HF having less than or equal to 0.1% water by volume, and an organic primer. 16. The method of claim 15 wherein the organic primer is selected from the group consisting of alcohols and ketones and mixtures thereof. 17. The method of claim 16 wherein the organic primer comprises an alcohol. 18. The method of claim 16 wherein the organic primer comprises a ketone. 19. The method of claim 5 wherein the deposited oxide material is not outwardly exposed immediately prior to commencing said etching of the thermal oxide material. 20. 
The method of claim 5 wherein the deposited oxide material is outwardly exposed immediately prior to commencing said etching of the thermal oxide material. 21. The method of claim 5 wherein the organic primer is selected from the group consisting of alcohols and ketones and mixtures thereof. 22. The method of claim 21 wherein the organic primer comprises an alcohol. 23. The method of claim 21 wherein the organic primer comprises a ketone. |
TECHNICAL FIELD This invention relates to semiconductor processing methods, including, for example, methods of preparing a silicon wafer for fabrication of integrated circuitry. BACKGROUND OF THE INVENTION Integrated circuitry is typically fabricated on and within semiconductor substrates, such a bulk monocrystalline silicon wafers. In the context of this document, the term "semiconductive substrate" is defined to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above. Electrical components fabricated on substrates, and particularly bulk semiconductor wafers, are isolated from adjacent devices by insulating materials, such as insulating oxides. One isolation technique uses shallow trench isolation, whereby trenches are cut into a substrate and are subsequently filled with insulating oxide, such as undoped silicon dioxide deposited by plasma-enhanced decomposition of tetraethylorthosilicate (PETEOS). In the context of this document, "substantially undoped" means a layer having a dopant concentration which is less than or equal to 10@18 atoms/cm@3. The insulating material is typically planarized back to define isolated trenches filled with oxide. Subsequently, a previously formed pad oxide layer is removed from over the substrate to expose silicon for processing. Unfortunately, removal of the pad oxide also etches the TEOS deposited oxide and can undesirably form "keyholes" in the shallow trench isolation oxide. Although the invention spawned primarily out of these concerns, the artisan will appreciate applicability of the following invention in other areas of semiconductor processing. SUMMARY OF INVENTION The invention comprises processing deposited oxide and grown oxide materials. In one implementation, a substrate is provided to have outwardly exposed grown oxide material and having deposited oxide material. The grown oxide material is etched substantially selective relative to the deposited oxide material. In another considered aspect, a silicon surface is thermally oxidized to form substantially undoped silicon dioxide over a substrate. A substantially undoped silicon dioxide layer is chemical vapor deposited over the substrate, with at least some of the thermally grown silicon dioxide being outwardly exposed. The exposed thermally grown silicon dioxide layer is vapor etched substantially selective relative to the deposited silicon dioxide layer using an etch chemistry comprising substantially anhydrous HF and an organic primer. BRIEF DESCRIPTION OF THE DRAWINGS Preferred embodiments of the invention are described below with reference to the following accompanying drawings. FIG. 1 is a sectional view of a semiconductor wafer fragment at one processing step in accordance with the invention. FIG. 2 is a view of the FIG. 1 wafer at a processing step subsequent to that shown by FIG. 1. FIG. 3 is a view of the FIG. 1 wafer at a processing step subsequent to that shown by FIG. 2. FIG. 4 is a view of the FIG. 1 wafer at a processing step subsequent to that shown by FIG. 3. FIG. 5 is a view of the FIG. 1 wafer at a processing step subsequent to that shown by FIG. 4. FIG. 6 is a view of the FIG. 
1 wafer at a processing step subsequent to that shown by FIG. 5. FIG. 7 is a sectional view of an alternate embodiment semiconductor wafer fragment at an alternate processing step in accordance with an aspect of the invention. FIG. 8 is a view of the FIG. 7 wafer at a processing step subsequent to that shown by FIG. 7. FIG. 9 is a view of the FIG. 7 wafer at a processing step subsequent to that shown by FIG. 8. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS This disclosure of the invention is submitted in furtherance of the constitutional purposes of the U.S. Patent Laws "to promote the progress of science and useful arts" (Article 1, Section 8). The discussion proceeds initially with reference to FIGS. 1-6 for a first-described embodiment of the invention. FIG. 1 illustrates a semiconductor wafer fragment 10 comprised of a bulk monocrystalline silicon substrate 12. An oxide layer 14, such as silicon dioxide, is formed over bulk silicon wafer 12 to form a pad/protection oxide layer. Such could be formed by any technique, such as thermally oxidizing the outer silicon surface of substrate 12 in a steam ambient at 800° C.-1150° C. for 1-120 minutes to form a substantially undoped silicon dioxide layer 14 to a thickness of 40-200 Angstroms. A silicon nitride layer 16 is formed over thermal silicon dioxide layer 14, for example by chemical vapor deposition. Such will principally serve as an etch or polishing stop layer as will be apparent subsequently. Referring to FIG. 2, a series of circuitry isolation trenches 18 and 20 are formed through silicon nitride layer 16, thermal silicon dioxide layer 14 and within bulk silicon wafer 12. Referring to FIG. 3, a deposited oxide material 22 is formed over wafer 10 to fill circuitry isolation trenches 18 and 20. Layer 22 preferably comprises a substantially undoped silicon dioxide provided by plasma enhanced chemical vapor deposition from decomposition of tetraethylorthosilicate. Thus, ideally both material 22 and layer 14 are substantially undoped. Further in this embodiment, the thermally grown oxide is provided before the deposited oxide, with the thermally grown oxide also being provided before formation of the circuitry isolation trenches. Referring to FIG. 4, deposited silicon dioxide layer 22 is planarized, such as by chemical-mechanical polishing, in a manner which is substantially selective relative to silicon nitride layer 16, with layer 16 thus forming an etch stop layer. This provides but one example of removing deposited oxide from outwardly of trenches 18 and 20, and providing a thermally grown oxide layer over the substrate outwardly of the trenches. Referring to FIG. 5, silicon nitride layer 16 is etched substantially selective relative to thermal silicon dioxide layer 14 and deposited silicon dioxide layer 22, leaving outwardly exposed substantially undoped deposited silicon dioxide and outwardly exposed thermal silicon dioxide. An example chemistry would include a wet H3PO4 etch. Referring to FIG. 6, the exposed thermally grown oxide material 14 is etched substantially selective relative to the exposed deposited oxide 22. Thus, in this embodiment, the deposited oxide material is outwardly exposed at the commencement of the etching of the grown oxide material substantially selective relative to the deposited oxide material. 
The preferred etching is vapor etching, which also etches the thermal oxide substantially selective relative to underlying silicon, using an etch chemistry comprising substantially anhydrous HF and an organic primer. In the context of this document, "substantially anhydrous" means having no greater than 10% water by volume of the HF fraction of the etching chemistry. Most preferably, the substantially anhydrous HF fraction has less than or equal to 0.1% water by volume. Preferred organic primers include alcohols and ketones and mixtures thereof, with methanol being but one example. A preferred temperature and pressure range during the vapor etching is from about 50° C. to about 150° C. and a pressure from about 10 Torr to about 300 Torr. One reduction-to-practice example included anhydrous HF having less than 0.1% water at a flow rate of 180 sccm, an N2 flow rate of 750 sccm, and CH3OH at 175 sccm. The temperature was 120° C. and the pressure was 100 Torr. Selectivity in etch rate of the thermally grown silicon dioxide to the chemical vapor deposited silicon dioxide by PETEOS was approximately 171:1. An alternate embodiment is described with reference to FIGS. 7-9. In the first described embodiment, the deposited oxide material was outwardly exposed along with the thermally grown oxide material at the point of commencement of the substantially selective etching of the grown oxide material. The FIGS. 7-9 embodiment provides but one example of a technique whereby the deposited oxide material is not outwardly exposed at the commencement of the selective etching of the grown oxide material. In this embodiment, like numerals are utilized from the first described embodiment, with differences being indicated with the suffix "a" or with different numerals. Referring to FIG. 7, a substantially undoped silicon dioxide layer 40 is deposited over a substrate 12 of the illustrated wafer fragment 10a. The preferred technique is as described above utilizing PETEOS. A layer 50 of substantially undoped silicon is deposited onto silicon dioxide layer 40. Layer 50 comprises, for example, polysilicon chemical vapor deposited using a silane as a source gas. Referring to FIG. 8, silicon layer 50 is thermally oxidized, preferably in an H2O ambient, to form a thermal silicon dioxide layer 60 on deposited silicon dioxide layer 40. Referring to FIG. 9, a photoresist layer can be deposited and patterned (not shown) to outwardly expose only, or at least, a portion of thermally grown silicon dioxide layer 60. Subsequently, the exposed portion of thermal silicon dioxide layer 60 is etched substantially selective relative to deposited silicon dioxide layer 40 using an etch chemistry as described above, namely substantially anhydrous HF and an organic primer, to produce the illustrated selective etch of FIG. 9. The above-described preferred embodiment facilitates preservation of deposited oxide thickness and minimizes or avoids keyhole formation in shallow trench isolation when stripping thermal oxide from the active device regions. Ultraviolet light is preferably not used in the process. In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. 
The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted in accordance with the doctrine of equivalents. |
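For readers who want the reduction-to-practice vapor etch parameters reported above collected in one place, the following C++ sketch simply records them as named constants. The struct and field names are invented for illustration; only the numeric values come from the text, and nothing here is part of the disclosed process itself.

```cpp
#include <cstdio>

// Illustrative record of the reduction-to-practice vapor etch described above.
// All identifiers are hypothetical; the numbers are the ones reported in the text.
struct VaporEtchExample {
    double hf_flow_sccm;     // anhydrous HF (<0.1% water by volume)
    double n2_flow_sccm;     // nitrogen flow
    double ch3oh_flow_sccm;  // methanol (organic primer) flow
    double temperature_c;    // etch temperature
    double pressure_torr;    // etch pressure
    double selectivity;      // thermal SiO2 : PETEOS SiO2 etch-rate ratio
};

int main() {
    const VaporEtchExample example{180.0, 750.0, 175.0, 120.0, 100.0, 171.0};
    std::printf("HF %.0f sccm, N2 %.0f sccm, CH3OH %.0f sccm at %.0f C / %.0f Torr; "
                "selectivity ~%.0f:1\n",
                example.hf_flow_sccm, example.n2_flow_sccm, example.ch3oh_flow_sccm,
                example.temperature_c, example.pressure_torr, example.selectivity);
    return 0;
}
```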
Multi-tile Memory Management for Detecting Cross Tile Access, Providing Multi-Tile Inference Scaling with multicasting of data via a copy operation, and Providing Page Migration are disclosed herein. In one embodiment, a graphics processor for a multi-tile architecture includes a first graphics processing unit (GPU) having a memory and a memory controller, a second graphics processing unit (GPU) having a memory, and a cross-GPU fabric to communicatively couple the first and second GPUs. The memory controller is configured to determine whether frequent cross tile memory accesses occur from the first GPU to the memory of the second GPU in the multi-GPU configuration and to send a message to initiate a data transfer mechanism when frequent cross tile memory accesses occur from the first GPU to the memory of the second GPU. |
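As a rough illustration of the detection summarized in this abstract (and elaborated in the claims that follow), the sketch below models a per-peer hardware counter with a threshold check in C++. The class, constant, and callback names (CrossTileMonitor, kAccessThreshold, send_message) are hypothetical and are not taken from the disclosure; this is a minimal sketch, not the actual hardware behavior.

```cpp
#include <array>
#include <cstdint>
#include <functional>
#include <utility>

// Hypothetical model of cross-tile access detection: one counter per remote
// tile; crossing a threshold triggers a message that asks the driver to start
// a data transfer (migration/copy) mechanism.
class CrossTileMonitor {
public:
    static constexpr std::uint64_t kAccessThreshold = 4096;  // assumed value

    explicit CrossTileMonitor(std::function<void(int /*remote_tile*/)> send_message)
        : send_message_(std::move(send_message)) {}

    // Called by the memory-controller model for every access that targets
    // memory owned by another tile.
    void record_cross_tile_access(int remote_tile) {
        if (++counters_.at(remote_tile) == kAccessThreshold) {
            send_message_(remote_tile);  // initiate the data transfer mechanism
        }
    }

private:
    std::array<std::uint64_t, 8> counters_{};  // up to 8 tiles, arbitrary choice
    std::function<void(int)> send_message_;
};
```

A driver-side handler registered as the callback would then decide whether to copy or migrate the frequently accessed data, along the lines the claims below describe.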
CLAIMS What is claimed is: 1. A graphics processor having a multi-tile architecture, comprising: a first graphics processing unit (GPU) having a memory and a memory controller; a second graphics processing unit (GPU) having a memory; and a cross-GPU fabric to communicatively couple the first and second GPUs, wherein the memory controller is configured to determine whether frequent cross tile memory accesses occur between the first GPU and the second GPU in the multi-GPU configuration and to cause initiation of a data transfer mechanism when frequent cross tile memory accesses occur between the first GPU and the second GPU.2. The graphics processor of claim 1, further comprising: a hardware counter to count cross tile memory accesses between the first GPU and the second GPU.3. The graphics processor of claim 2, wherein the memory controller is configured to determine whether frequent cross tile memory accesses occur between the first GPU and the second GPU in the multi-GPU configuration using data from the hardware counter.4. The graphics processor of claim 3, wherein the data transfer mechanism to cause data that is being accessed frequently by the second GPU to be transferred or copied to the memory of the second GPU.5. The graphics processor of claim 1, wherein the data transfer mechanism to cause data that is being accessed frequently by the first GPU to be transferred or copied to the memory of the first GPU.6. The graphics processor of claim 1, wherein the memory controller is configured to detect transfer patterns automatically including accesses between the first and second GPUs.7. The graphics processor of claim 1, wherein the memory controller is configured to detect transfer patterns automatically including accesses to page N of the memory of the second GPU and to start transferring pages N+1 and N+2 prior to requests for pages N+1 and N+2.8. A graphics processing unit (GPU) of a multi-GPU architecture, comprising: processing resources to perform graphics operations; a memory; and a memory controller, wherein the memory controller is configured to determine whether frequent cross tile memory accesses occur between the GPU and a remote memory of a remote GPU in the multi-GPU configuration and to cause initiation of a data transfer mechanism when frequent cross tile memory accesses occur between the GPU and the remote memory of the remote GPU.9. The GPU of claim 8, further comprising: a hardware counter to count cross tile memory accesses from the GPU to the remote memory of the remote GPU.10. The GPU of claim 9, wherein the memory controller is configured to determine whether frequent cross tile memory accesses occur between the GPU and the remote memory of the remote GPU in the multi-GPU configuration using data from the hardware counter.11. The GPU of claim 10, wherein the data transfer mechanism to cause data that is being accessed frequently by the remote GPU to be transferred or copied to the remote memory.12. The GPU of claim 8, wherein the data transfer mechanism to cause data that is being accessed frequently by the GPU to be transferred or copied to the memory of the GPU.13. The GPU of claim 8, wherein the memory controller is configured to detect transfer patterns automatically between the GPU and the remote GPU.14. The GPU of claim 8, wherein the memory controller is configured to detect transfer patterns automatically including accesses to page N of the remote memory and to start transferring pages N+1 and N+2 prior to requests for pages N+1 and N+2.15. 
A computer-implemented method to provide a data transfer mechanism for a multiple GPU configuration, the computer-implemented method comprises: monitoring cross tile memory accesses from a local GPU to one or more remote GPUs in the multi-GPU configuration;
determining, with a memory controller, whether frequent cross tile memory accesses occur from a local GPU to one or more remote GPUs in the multi-GPU configuration; and sending a message to initiate the data transfer mechanism when frequent cross tile memory accesses occur from the local GPU to one or more remote GPUs in the multi-GPU configuration.16. The computer-implemented method of claim 15, further comprising: receiving, with a graphics driver, the message from the memory controller and providing the data transfer mechanism in response to receiving the message.17. The computer-implemented method of claim 15, wherein the data transfer mechanism accesses a page table to provide a translation of virtual addresses to physical addresses.18. The computer-implemented method of claim 15, wherein the data transfer mechanism to transfer or copy the data that is being accessed frequently by the local GPU to the local memory of the local GPU and to local memory of at least one other GPU.19. The computer-implemented method of claim 15, wherein the data transfer mechanism to transfer or copy the data that is being accessed frequently by the local GPU to multiple tiles or GPUs to enable split frame rendering with a first GPU handling rendering for a first portion of a display and a second GPU handling rendering for a second different portion of the display.20. The computer-implemented method of claim 15, further comprising: performing a page allocation to local memory of the local GPU when a first access to a page in a remote GPU memory occurs. |
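The method claims above (monitoring, determining, messaging, plus the sequential prefetch of pages N+1 and N+2 recited in claims 7 and 14) can be summarized in a short C++ sketch. Everything here, from the hotness threshold to the migrate and prefetch helpers, is an assumption made for illustration only and does not describe the actual hardware or driver implementation.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>

// Hypothetical driver-side model of the claimed method: count accesses from the
// local GPU to remote pages, migrate a page once it becomes "hot", and prefetch
// the next two pages when a sequential pattern is assumed.
struct PageId {
    int gpu;
    std::uint64_t page;
    bool operator==(const PageId&) const = default;
};

struct PageIdHash {
    std::size_t operator()(const PageId& p) const {
        return std::hash<std::uint64_t>{}(p.page) ^ (std::hash<int>{}(p.gpu) << 1);
    }
};

class PageMigrator {
public:
    static constexpr std::uint32_t kHotThreshold = 64;  // assumed value

    void on_remote_access(PageId pid) {
        if (++access_counts_[pid] == kHotThreshold) {
            migrate_to_local(pid);              // move the hot page locally
            prefetch({pid.gpu, pid.page + 1});  // start pages N+1, N+2 early
            prefetch({pid.gpu, pid.page + 2});
        }
    }

private:
    void migrate_to_local(PageId) { /* copy page and update page tables */ }
    void prefetch(PageId)         { /* begin transfer before it is requested */ }

    std::unordered_map<PageId, std::uint32_t, PageIdHash> access_counts_;
};
```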
MULTI-TILE MEMORY MANAGEMENT FOR DETECTING CROSS TILE ACCESS, PROVIDING MULTI-TILE INFERENCE SCALING, AND PROVIDING OPTIMAL PAGE MIGRATIONCROSS-REFERENCE TO RELATED APPLICATIONS[0001] The present application is related to and, under 35 U.S.C. 119(e), claims the benefit of and priority to U.S. Provisional Applications 62/819,337, entitled GRAPHICS PROCESSING, by Abhishek Appu, et al., filed March 15, 2019 (Attorney Docket No. AC0271-Z), 62/819,435, entitled GRAPHICS DATA PROCESSING, by Lakshminarayanan Striramassarma, et al., filed March 15, 2019 (Attorney Docket No. AC0285-Z), and 62/819,361, entitled SYSTEMS AND METHODS FOR PARTITIONING CACHE TO REDUCE CACHE ACCESS LATENCY, by Subramaniam Maiyuran, et al., filed March 15, 2019 (Attorney Docket No. AC0286-Z), the contents of all of which are incorporated herein by reference.FIELD[0002] This disclosure relates generally to data processing and more particularly to data processing via a general-purpose graphics processing unit.BACKGROUND OF THE DISCLOSURE[0003] Current parallel graphics data processing includes systems and methods developed to perform specific operations on graphics data such as, for example, linear interpolation, tessellation, rasterization, texture mapping, depth testing, etc. Traditionally, graphics processors used fixed function computational units to process graphics data; however, more recently, portions of graphics processors have been made programmable, enabling such processors to support a wider variety of operations for processing vertex and fragment data.[0004] To further increase performance, graphics processors typically implement processing techniques such as pipelining that attempt to process, in parallel, as much graphics data as possible throughout the different parts of the graphics pipeline. Parallel graphics processors with single instruction, multiple thread (SIMT) architectures are designed to maximize the amount of parallel processing in the graphics pipeline. In an SIMT architecture, groups of parallel threads attempt to execute program instructions synchronously together as often as possible to increase processing efficiency. A general overview of software and hardware for SIMT architectures can be found in Shane Cook, CUDA Programming Chapter 3, pages 37-51 (2013).BRIEF DESCRIPTION OF THE DRAWINGS[0005] So that the manner in which the above recited features of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly
summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments and are therefore not to be considered limiting of its scope.[0006] FIG. 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the embodiments described herein;[0007] FIG. 2A-2D illustrate parallel processor components;[0008] FIG. 3A-3C are block diagrams of graphics multiprocessors and multiprocessor-based GPUs;[0009] FIG. 4A-4F illustrate an exemplary architecture in which a plurality of GPUs is communicatively coupled to a plurality of multi-core processors;[0010] FIG. 5 illustrates a graphics processing pipeline;[0011] FIG. 6 illustrates a machine learning software stack;[0012] FIG. 7 illustrates a general-purpose graphics processing unit;[0013] FIG. 8 illustrates a multi-GPU computing system;[0014] FIG. 9A-9B illustrate layers of exemplary deep neural networks;[0015] FIG. 10 illustrates an exemplary recurrent neural network;[0016] FIG. 11 illustrates training and deployment of a deep neural network;[0017] FIG. 12 is a block diagram illustrating distributed learning;[0018] FIG. 13 illustrates an exemplary inferencing system on a chip (SOC) suitable for performing inferencing using a trained model;[0019] FIG. 14 is a block diagram of a processing system;[0020] FIG. 15A-15C illustrate computing systems and graphics processors;[0021] FIG. 16A-16C illustrate block diagrams of additional graphics processor and compute accelerator architectures;[0022] FIG. 17 is a block diagram of a graphics processing engine of a graphics processor;[0023] FIG. 18A-18B illustrate thread execution logic including an array of processing elements employed in a graphics processor core;[0024] FIG. 19 illustrates an additional execution unit;[0025] FIG. 20 is a block diagram illustrating graphics processor instruction formats;[0026] FIG. 21 is a block diagram of an additional graphics processor architecture;[0027] FIG. 22A-22B illustrate a graphics processor command format and command sequence;[0028] FIG. 23 illustrates exemplary graphics software architecture for a data processing system;[0029] FIG. 24A is a block diagram illustrating an IP core development system;
[0030] FIG. 24B illustrates a cross-section side view of an integrated circuit package assembly;[0031] FIG. 24C illustrates a package assembly that includes multiple units of hardware logic chiplets connected to a substrate (e.g., base die);[0032] FIG. 24D illustrates a package assembly including interchangeable chiplets;[0033] FIG. 25 is a block diagram illustrating an exemplary system on a chip integrated circuit;[0034] FIG. 26A-26B are block diagrams illustrating exemplary graphics processors for use within an SoC;[0035] FIG. 27 shows a system 2700 that includes a processor 2707 coupled to a graphics processor 2702 having multiple GPUs.[0036] FIG. 28 illustrates a computer-implemented method 2800 for detecting cross tile access to provide a page transfer mechanism for a graphics processor in accordance with one embodiment.[0037] FIG. 29 shows multi-tile inference scaling with multicasting for a multi-GPU configuration in accordance with one embodiment.[0038] As illustrated in FIG. 30, in one optional implementation, a unified memory addressable via a common virtual memory address space used to access the physical processor memories 3001-3002 and GPU memories 3020-3023 is utilized.[0039] FIG. 31 illustrates a fabric interconnect 3124 that can enable communication between graphics engine tiles 3100A-3100D and components such as the video codec 3106 and one or more copy engines 3104.[0040] FIG. 32 illustrates a computer-implemented method 3200 for reading data once and then copying the data multiple times with multicast to new destinations such as different tiles for a graphics processor in accordance with one embodiment.[0041] FIG. 33 illustrates a multi-GPU system having a distributed memory model in accordance with one embodiment.[0042] FIG. 34 illustrates a page sharing table for each GPU in accordance with one embodiment.[0043] FIG. 35 illustrates a network table for a number of hops between GPUs in accordance with one embodiment.[0044] FIG. 36 illustrates a computer-implemented method 3600 for optimal page migration between GPUs for a multi-GPU configuration in accordance with one embodiment.
DETAILED DESCRIPTION[0045] A graphics processing unit (GPU) is communicatively coupled to host/processor cores to accelerate, for example, graphics operations, machine-learning operations, pattern analysis operations, and/or various general-purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or another interconnect (e.g., a high-speed interconnect such as PCIe or NVLink). Alternatively, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor. The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.[0046] In the following description, numerous specific details are set forth to provide a more thorough understanding. However, it will be apparent to one of skill in the art that the embodiments described herein may be practiced without one or more of these specific details. In other instances, well-known features have not been described to avoid obscuring the details of the present embodiments.System Overview[0047] FIG. 1 is a block diagram illustrating a computing system 100 configured to implement one or more aspects of the embodiments described herein. The computing system 100 includes a processing subsystem 101 having one or more processor(s) 102 and a system memory 104 communicating via an interconnection path that may include a memory hub 105. The memory hub 105 may be a separate component within a chipset component or may be integrated within the one or more processor(s) 102. The memory hub 105 couples with an I/O subsystem 111 via a communication link 106. The I/O subsystem 111 includes an I/O hub 107 that can enable the computing system 100 to receive input from one or more input device(s) 108. Additionally, the I/O hub 107 can enable a display controller, which may be included in the one or more processor(s) 102, to provide outputs to one or more display device(s) 110A. In one embodiment the one or more display device(s) 110A coupled with the I/O hub 107 can include a local, internal, or embedded display device.[0048] The processing subsystem 101, for example, includes one or more parallel processor(s) 112 coupled to memory hub 105 via a bus or other communication link 113. The communication link 113 may be one of any number of standards-based communication link technologies or protocols, such as, but not limited to PCI Express, or may be a vendor specific communications interface or communications fabric. The one or more parallel processor(s) 112 may form a computationally focused parallel or vector processing system that can include a large
number of processing cores and/or processing clusters, such as a many integrated core (MIC) processor. For example, the one or more parallel processor(s) 112 form a graphics processing subsystem that can output pixels to one of the one or more display device(s) 110A coupled via the I/O Hub 107. The one or more parallel processor(s) 112 can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s) 110B.[0049] Within the I/O subsystem 111, a system storage unit 114 can connect to the I/O hub 107 to provide a storage mechanism for the computing system 100. An I/O switch 116 can be used to provide an interface mechanism to enable connections between the I/O hub 107 and other components, such as a network adapter 118 and/or wireless network adapter 119 that may be integrated into the platform, and various other devices that can be added via one or more add-in device(s) 120. The add-in device(s) 120 may also include, for example, one or more external graphics processor devices and/or compute accelerators. The network adapter 118 can be an Ethernet adapter or another wired network adapter. The wireless network adapter 119 can include one or more of a Wi-Fi, Bluetooth, near field communication (NFC), or other network device that includes one or more wireless radios.[0050] The computing system 100 can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, which may also be connected to the I/O hub 107. Communication paths interconnecting the various components in FIG. 1 may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect) based protocols (e.g., PCI-Express), or any other bus or point-to-point communication interfaces and/or protocol(s), such as the NV-Link high-speed interconnect, or interconnect protocols known in the art.[0051] The one or more parallel processor(s) 112 may incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitute a graphics processing unit (GPU). Alternatively or additionally, the one or more parallel processor(s) 112 can incorporate circuitry optimized for general purpose processing, while preserving the underlying computational architecture, described in greater detail herein. Components of the computing system 100 may be integrated with one or more other system elements on a single integrated circuit. For example, the one or more parallel processor(s) 112, memory hub 105, processor(s) 102, and I/O hub 107 can be integrated into a system on chip (SoC) integrated circuit. Alternatively, the components of the computing system 100 can be integrated into a single package to form a system in package (SIP) configuration. In one embodiment at least a portion of the components of the computing system 100 can be integrated
into a multi-chip module (MCM), which can be interconnected with other multi-chip modules into a modular computing system.[0052] It will be appreciated that the computing system 100 shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of processor(s) 102, and the number of parallel processor(s) 112, may be modified as desired. For instance, system memory 104 can be connected to the processor(s) 102 directly rather than through a bridge, while other devices communicate with system memory 104 via the memory hub 105 and the processor(s) 102. In other alternative topologies, the parallel processor(s) 112 are connected to the I/O hub 107 or directly to one of the one or more processor(s) 102, rather than to the memory hub 105. In other embodiments, the I/O hub 107 and memory hub 105 may be integrated into a single chip. It is also possible that two or more sets of processor(s) 102 are attached via multiple sockets, which can couple with two or more instances of the parallel processor(s) 112.[0053] Some of the particular components shown herein are optional and may not be included in all implementations of the computing system 100. For example, any number of add-in cards or peripherals may be supported, or some components may be eliminated. Furthermore, some architectures may use different terminology for components similar to those illustrated in FIG. 1. For example, the memory hub 105 may be referred to as a Northbridge in some architectures, while the I/O hub 107 may be referred to as a Southbridge.[0054] FIG. 2A illustrates a parallel processor 200. The parallel processor 200 may be a GPU, GPGPU or the like as described herein. The various components of the parallel processor 200 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGA). The illustrated parallel processor 200 may be, or may be one of, the parallel processor(s) 112 shown in FIG. 1.[0055] The parallel processor 200 includes a parallel processing unit 202. The parallel processing unit includes an I/O unit 204 that enables communication with other devices, including other instances of the parallel processing unit 202. The I/O unit 204 may be directly connected to other devices. For instance, the I/O unit 204 connects with other devices via the use of a hub or switch interface, such as memory hub 105. The connections between the memory hub 105 and the I/O unit 204 form a communication link 113. Within the parallel processing unit 202, the I/O unit 204 connects with a host interface 206 and a memory crossbar 216, where the host interface 206 receives commands directed to performing processing operations and the memory crossbar 216 receives commands directed to performing memory operations.
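To make the routing in the paragraph above concrete, here is a minimal C++ sketch in which an I/O-unit model forwards processing commands to a host-interface handler and memory commands to a memory-crossbar handler. The enum, struct, and function names are invented for illustration and do not come from the specification.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical model of the routing described above: commands aimed at
// processing go to the host interface, commands aimed at memory go to the
// memory crossbar.
enum class CommandKind { Processing, Memory };

struct Command {
    CommandKind kind;
    std::uint64_t payload;  // opaque in this sketch
};

struct HostInterface  { void handle(const Command&) { /* hand work to the front end */ } };
struct MemoryCrossbar { void handle(const Command&) { /* issue the memory operation */ } };

class IoUnit {
public:
    IoUnit(HostInterface& host, MemoryCrossbar& crossbar) : host_(host), crossbar_(crossbar) {}

    void submit(const std::vector<Command>& commands) {
        for (const Command& cmd : commands) {
            if (cmd.kind == CommandKind::Processing) {
                host_.handle(cmd);
            } else {
                crossbar_.handle(cmd);
            }
        }
    }

private:
    HostInterface&  host_;
    MemoryCrossbar& crossbar_;
};
```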
[0056] When the host interface 206 receives a command buffer via the I/O unit 204, the host interface 206 can direct work operations to perform those commands to a front end 208. In one embodiment the front end 208 couples with a scheduler 210, which is configured to distribute commands or other work items to a processing cluster array 212. The scheduler 210 ensures that the processing cluster array 212 is properly configured and in a valid state before tasks are distributed to the processing clusters of the processing cluster array 212. The scheduler 210 may be implemented via firmware logic executing on a microcontroller. The microcontroller-implemented scheduler 210 is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on the processing array 212. Preferably, the host software can provide workloads for scheduling on the processing array 212 via one of multiple graphics processing doorbells. The workloads can then be automatically distributed across the processing array 212 by the scheduler 210 logic within the scheduler microcontroller.[0057] The processing cluster array 212 can include up to "N" processing clusters (e.g., cluster 214A, cluster 214B, through cluster 214N). Each cluster 214A-214N of the processing cluster array 212 can execute a large number of concurrent threads. The scheduler 210 can allocate work to the clusters 214A-214N of the processing cluster array 212 using various scheduling and/or work distribution algorithms, which may vary depending on the workload arising for each type of program or computation. The scheduling can be handled dynamically by the scheduler 210, or can be assisted in part by compiler logic during compilation of program logic configured for execution by the processing cluster array 212. Optionally, different clusters 214A-214N of the processing cluster array 212 can be allocated for processing different types of programs or for performing different types of computations.[0058] The processing cluster array 212 can be configured to perform various types of parallel processing operations. For example, the cluster array 212 is configured to perform general-purpose parallel compute operations. For example, the processing cluster array 212 can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations.[0059] The processing cluster array 212 is configured to perform parallel graphics processing operations. In such embodiments in which the parallel processor 200 is configured to perform graphics processing operations, the processing cluster array 212 can include additional logic to support the execution of such graphics processing operations, including, but not limited to texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. Additionally, the processing cluster array 212 can be configured to
execute graphics processing related shader programs such as, but not limited to vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. The parallel processing unit 202 can transfer data from system memory via the I/O unit 204 for processing. During processing, the transferred data can be stored to on-chip memory (e.g., parallel processor memory 222), then written back to system memory.[0060] In embodiments in which the parallel processing unit 202 is used to perform graphics processing, the scheduler 210 may be configured to divide the processing workload into approximately equal-sized tasks, to better enable distribution of the graphics processing operations to multiple clusters 214A-214N of the processing cluster array 212. In some of these embodiments, portions of the processing cluster array 212 can be configured to perform different types of processing. For example, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters 214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters 214A-214N for further processing.[0061] During operation, the processing cluster array 212 can receive processing tasks to be executed via the scheduler 210, which receives commands defining processing tasks from front end 208. For graphics processing operations, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). The scheduler 210 may be configured to fetch the indices corresponding to the tasks or may receive the indices from the front end 208. The front end 208 can be configured to ensure the processing cluster array 212 is configured to a valid state before the workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated.[0062] Each of the one or more instances of the parallel processing unit 202 can couple with parallel processor memory 222. The parallel processor memory 222 can be accessed via the memory crossbar 216, which can receive memory requests from the processing cluster array 212 as well as the I/O unit 204. The memory crossbar 216 can access the parallel processor memory 222 via a memory interface 218. The memory interface 218 can include multiple partition units (e.g., partition unit 220A, partition unit 220B, through partition unit 220N) that can each couple to a portion (e.g., memory unit) of parallel processor memory 222. The number of partition units 220A-220N may be configured to be equal to the number of memory units, such that a first partition unit 220A has a corresponding first memory unit 224A, a second partition unit 220B has a corresponding memory unit 224B, and an Nth partition unit 220N has a corresponding Nth
memory unit 224N. In other embodiments, the number of partition units 220A-220N may not be equal to the number of memory devices.[0063] The memory units 224A-224N can include various types of memory devices, including dynamic random-access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. Optionally, the memory units 224A-224N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM). Persons skilled in the art will appreciate that the specific implementation of the memory units 224A-224N can vary, and can be selected from one of various conventional designs. Render targets, such as frame buffers or texture maps, may be stored across the memory units 224A-224N, allowing partition units 220A-220N to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory 222. In some embodiments, a local instance of the parallel processor memory 222 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory.[0064] Optionally, any one of the clusters 214A-214N of the processing cluster array 212 has the ability to process data that will be written to any of the memory units 224A-224N within parallel processor memory 222. The memory crossbar 216 can be configured to transfer the output of each cluster 214A-214N to any partition unit 220A-220N or to another cluster 214A-214N, which can perform additional processing operations on the output. Each cluster 214A-214N can communicate with the memory interface 218 through the memory crossbar 216 to read from or write to various external memory devices. In one of the embodiments with the memory crossbar 216, the memory crossbar 216 has a connection to the memory interface 218 to communicate with the I/O unit 204, as well as a connection to a local instance of the parallel processor memory 222, enabling the processing units within the different processing clusters 214A-214N to communicate with system memory or other memory that is not local to the parallel processing unit 202. Generally, the memory crossbar 216 may, for example, be able to use virtual channels to separate traffic streams between the clusters 214A-214N and the partition units 220A-220N.[0065] While a single instance of the parallel processing unit 202 is illustrated within the parallel processor 200, any number of instances of the parallel processing unit 202 can be included. For example, multiple instances of the parallel processing unit 202 can be provided on a single add-in card, or multiple add-in cards can be interconnected. The different instances of the parallel processing unit 202 can be configured to inter-operate even if the different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. Optionally, some instances of the parallel
processing unit 202 can include higher precision floating point units relative to other instances. Systems incorporating one or more instances of the parallel processing unit 202 or the parallel processor 200 can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems.[0066] FIG. 2B is a block diagram of a partition unit 220. The partition unit 220 may be an instance of one of the partition units 220A-220N of FIG. 2A. As illustrated, the partition unit 220 includes an L2 cache 221, a frame buffer interface 225, and a ROP 226 (raster operations unit). The L2 cache 221 is a read/write cache that is configured to perform load and store operations received from the memory crossbar 216 and ROP 226. Read misses and urgent write-back requests are output by L2 cache 221 to frame buffer interface 225 for processing. Updates can also be sent to the frame buffer via the frame buffer interface 225 for processing. In one embodiment the frame buffer interface 225 interfaces with one of the memory units in parallel processor memory, such as the memory units 224A-224N of FIG. 2A (e.g., within parallel processor memory 222). The partition unit 220 may additionally or alternatively also interface with one of the memory units in parallel processor memory via a memory controller (not shown).[0067] In graphics applications, the ROP 226 is a processing unit that performs raster operations such as stencil, z test, blending, and the like. The ROP 226 then outputs processed graphics data that is stored in graphics memory. In some embodiments the ROP 226 includes compression logic to compress depth or color data that is written to memory and decompress depth or color data that is read from memory. The compression logic can be lossless compression logic that makes use of one or more of multiple compression algorithms. The type of compression that is performed by the ROP 226 can vary based on the statistical characteristics of the data to be compressed. For example, in one embodiment, delta color compression is performed on depth and color data on a per-tile basis.[0068] The ROP 226 may be included within each processing cluster (e.g., cluster 214A-214N of FIG. 2A) instead of within the partition unit 220. In such an embodiment, read and write requests for pixel data are transmitted over the memory crossbar 216 instead of pixel fragment data. The processed graphics data may be displayed on a display device, such as one of the one or more display device(s) 110 of FIG. 1, routed for further processing by the processor(s) 102, or routed for further processing by one of the processing entities within the parallel processor 200 of FIG. 2A.[0069] FIG. 2C is a block diagram of a processing cluster 214 within a parallel processing unit. For example, the processing cluster is an instance of one of the processing clusters 214A-214N of FIG. 2A. The processing cluster 214 can be configured to execute many threads in
parallel, where the term "thread" refers to an instance of a particular program executing on a particular set of input data. Optionally, single-instruction, multiple-data (SIMD) instruction issue techniques may be used to support parallel execution of a large number of threads without providing multiple independent instruction units. Alternatively, single-instruction, multiple-thread (SIMT) techniques may be used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of the processing clusters. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program. Persons skilled in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime.[0070] Operation of the processing cluster 214 can be controlled via a pipeline manager 232 that distributes processing tasks to SIMT parallel processors. The pipeline manager 232 receives instructions from the scheduler 210 of FIG. 2A and manages execution of those instructions via a graphics multiprocessor 234 and/or a texture unit 236. The illustrated graphics multiprocessor 234 is an exemplary instance of a SIMT parallel processor. However, various types of SIMT parallel processors of differing architectures may be included within the processing cluster 214. One or more instances of the graphics multiprocessor 234 can be included within a processing cluster 214. The graphics multiprocessor 234 can process data and a data crossbar 240 can be used to distribute the processed data to one of multiple possible destinations, including other shader units. The pipeline manager 232 can facilitate the distribution of processed data by specifying destinations for processed data to be distributed via the data crossbar 240.[0071] Each graphics multiprocessor 234 within the processing cluster 214 can include an identical set of functional execution logic (e.g., arithmetic logic units, load-store units, etc.). The functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. The functional execution logic supports a variety of operations including integer and floating-point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. The same functional-unit hardware could be leveraged to perform different operations and any combination of functional units may be present.[0072] The instructions transmitted to the processing cluster 214 constitute a thread. A set of threads executing across the set of parallel processing engines is a thread group. A thread group executes the same program on different input data. Each thread within a thread group can be assigned to a different processing engine within a graphics multiprocessor 234. A thread group may include fewer threads than the number of processing engines within the graphics
multiprocessor 234. When a thread group includes fewer threads than the number of processing engines, one or more of the processing engines may be idle during cycles in which that thread group is being processed. A thread group may also include more threads than the number of processing engines within the graphics multiprocessor 234. When the thread group includes more threads than the number of processing engines within the graphics multiprocessor 234, processing can be performed over consecutive clock cycles. Optionally, multiple thread groups can be executed concurrently on the graphics multiprocessor 234.[0073] The graphics multiprocessor 234 may include an internal cache memory to perform load and store operations. Optionally, the graphics multiprocessor 234 can forego an internal cache and use a cache memory (e.g., L1 cache 248) within the processing cluster 214. Each graphics multiprocessor 234 also has access to L2 caches within the partition units (e.g., partition units 220A-220N of FIG. 2A) that are shared among all processing clusters 214 and may be used to transfer data between threads. The graphics multiprocessor 234 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. Any memory external to the parallel processing unit 202 may be used as global memory. Embodiments in which the processing cluster 214 includes multiple instances of the graphics multiprocessor 234 can share common instructions and data, which may be stored in the L1 cache 248.[0074] Each processing cluster 214 may include an MMU 245 (memory management unit) that is configured to map virtual addresses into physical addresses. In other embodiments, one or more instances of the MMU 245 may reside within the memory interface 218 of FIG. 2A. The MMU 245 includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and optionally a cache line index. The MMU 245 may include address translation lookaside buffers (TLB) or caches that may reside within the graphics multiprocessor 234 or the L1 cache or processing cluster 214. The physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. The cache line index may be used to determine whether a request for a cache line is a hit or miss.[0075] In graphics and computing applications, a processing cluster 214 may be configured such that each graphics multiprocessor 234 is coupled to a texture unit 236 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering the texture data. Texture data is read from an internal texture L1 cache (not shown) or in some embodiments from the L1 cache within graphics multiprocessor 234 and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. Each graphics multiprocessor 234 outputs processed tasks to the data crossbar 240 to provide the processed task to another processing cluster 214 for further processing or to store the processed task in an L2
cache, local parallel processor memory, or system memory via the memory crossbar 216. A preROP 242 (pre-raster operations unit) is configured to receive data from graphics multiprocessor 234 and direct data to ROP units, which may be located with partition units as described herein (e.g., partition units 220A-220N of FIG. 2A). The preROP 242 unit can perform optimizations for color blending, organize pixel color data, and perform address translations.[0076] It will be appreciated that the core architecture described herein is illustrative and that variations and modifications are possible. Any number of processing units, e.g., graphics multiprocessor 234, texture units 236, preROPs 242, etc., may be included within a processing cluster 214. Further, while only one processing cluster 214 is shown, a parallel processing unit as described herein may include any number of instances of the processing cluster 214. Optionally, each processing cluster 214 can be configured to operate independently of other processing clusters 214 using separate and distinct processing units, L1 caches, etc.[0077] FIG. 2D shows an example of the graphics multiprocessor 234 in which the graphics multiprocessor 234 couples with the pipeline manager 232 of the processing cluster 214. The graphics multiprocessor 234 has an execution pipeline including but not limited to an instruction cache 252, an instruction unit 254, an address mapping unit 256, a register file 258, one or more general purpose graphics processing unit (GPGPU) cores 262, and one or more load/store units 266. The GPGPU cores 262 and load/store units 266 are coupled with cache memory 272 and shared memory 270 via a memory and cache interconnect 268. The graphics multiprocessor 234 may additionally include tensor and/or ray-tracing cores 263 that include hardware logic to accelerate matrix and/or ray-tracing operations.[0078] The instruction cache 252 may receive a stream of instructions to execute from the pipeline manager 232. The instructions are cached in the instruction cache 252 and dispatched for execution by the instruction unit 254. The instruction unit 254 can dispatch instructions as thread groups (e.g., warps), with each thread of the thread group assigned to a different execution unit within GPGPU core 262. An instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. The address mapping unit 256 can be used to translate addresses in the unified address space into a distinct memory address that can be accessed by the load/store units 266.[0079] The register file 258 provides a set of registers for the functional units of the graphics multiprocessor 234. The register file 258 provides temporary storage for operands connected to the data paths of the functional units (e.g., GPGPU cores 262, load/store units 266) of the graphics multiprocessor 234. The register file 258 may be divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file
258. For example, the register file 258 may be divided between the different warps being executed by the graphics multiprocessor 234.[0080] The GPGPU cores 262 can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of the graphics multiprocessor 234. In some implementations, the GPGPU cores 262 can include hardware logic that may otherwise reside within the tensor and/or ray-tracing cores 263. The GPGPU cores 262 can be similar in architecture or can differ in architecture. For example and in one embodiment, a first portion of the GPGPU cores 262 includes a single precision FPU and an integer ALU while a second portion of the GPGPU cores includes a double precision FPU. Optionally, the FPUs can implement the IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. The graphics multiprocessor 234 can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. One or more of the GPGPU cores can also include fixed or special function logic.[0081] The GPGPU cores 262 may include SIMD logic capable of performing a single instruction on multiple sets of data. Optionally, GPGPU cores 262 can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. The SIMD instructions for the GPGPU cores can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (SPMD) or SIMT architectures. Multiple threads of a program configured for the SIMT execution model can be executed via a single SIMD instruction. For example and in one embodiment, eight SIMT threads that perform the same or similar operations can be executed in parallel via a single SIMD8 logic unit.[0082] The memory and cache interconnect 268 is an interconnect network that connects each of the functional units of the graphics multiprocessor 234 to the register file 258 and to the shared memory 270. For example, the memory and cache interconnect 268 is a crossbar interconnect that allows the load/store unit 266 to implement load and store operations between the shared memory 270 and the register file 258. The register file 258 can operate at the same frequency as the GPGPU cores 262, so data transfer between the GPGPU cores 262 and the register file 258 is very low latency. The shared memory 270 can be used to enable communication between threads that execute on the functional units within the graphics multiprocessor 234. The cache memory 272 can be used as a data cache, for example, to cache texture data communicated between the functional units and the texture unit 236. The shared memory 270 can also be used as a program-managed cache. Threads executing on the GPGPU
cores 262 can programmatically store data within the shared memory in addition to the automatically cached data that is stored within the cache memory 272.[0083] FIG. 3A-3C illustrate additional graphics multiprocessors, according to embodiments. FIG. 3A-3B illustrate graphics multiprocessors 325, 350, which are related to the graphics multiprocessor 234 of FIG. 2C and may be used in place of one of those. Therefore, the disclosure of any features in combination with the graphics multiprocessor 234 herein also discloses a corresponding combination with the graphics multiprocessor(s) 325, 350, but is not limited to such. FIG. 3C illustrates a graphics processing unit (GPU) 380 which includes dedicated sets of graphics processing resources arranged into multi-core groups 365A-365N, which correspond to the graphics multiprocessors 325, 350. The illustrated graphics multiprocessors 325, 350 and the multi-core groups 365A-365N can be streaming multiprocessors (SM) capable of simultaneous execution of a large number of execution threads.[0084] The graphics multiprocessor 325 of FIG. 3A includes multiple additional instances of execution resource units relative to the graphics multiprocessor 234 of FIG. 2D. For example, the graphics multiprocessor 325 can include multiple instances of the instruction unit 332A-332B, register file 334A-334B, and texture unit(s) 344A-344B. The graphics multiprocessor 325 also includes multiple sets of graphics or compute execution units (e.g., GPGPU core 336A-336B, tensor core 337A-337B, ray-tracing core 338A-338B) and multiple sets of load/store units 340A-340B. The execution resource units have a common instruction cache 330, texture and/or data cache memory 342, and shared memory 346.[0085] The various components can communicate via an interconnect fabric 327. The interconnect fabric 327 may include one or more crossbar switches to enable communication between the various components of the graphics multiprocessor 325. The interconnect fabric 327 may be a separate, high-speed network fabric layer upon which each component of the graphics multiprocessor 325 is stacked. The components of the graphics multiprocessor 325 communicate with remote components via the interconnect fabric 327. For example, the GPGPU cores 336A-336B, 337A-337B, and 338A-338B can each communicate with shared memory 346 via the interconnect fabric 327. The interconnect fabric 327 can arbitrate communication within the graphics multiprocessor 325 to ensure a fair bandwidth allocation between components.[0086] The graphics multiprocessor 350 of FIG. 3B includes multiple sets of execution resources 356A-356D, where each set of execution resources includes multiple instruction units, register files, GPGPU cores, and load/store units, as illustrated in FIG. 2D and FIG. 3A. The execution resources 356A-356D can work in concert with texture unit(s) 360A-360D for texture operations, while sharing an instruction cache 354 and shared memory 353. For example, the
execution resources 356A-356D can share an instruction cache 354 and shared memory 353, as well as multiple instances of a texture and/or data cache memory 358A-358B. The various components can communicate via an interconnect fabric 352 similar to the interconnect fabric 327 of FIG. 3A.[0087] Persons skilled in the art will understand that the architecture described in FIG. 1, 2A-2D, and 3A-3B is descriptive and not limiting as to the scope of the present embodiments. Thus, the techniques described herein may be implemented on any properly configured processing unit, including, without limitation, one or more mobile application processors, one or more desktop or server central processing units (CPUs) including multi-core CPUs, one or more parallel processing units, such as the parallel processing unit 202 of FIG. 2A, as well as one or more graphics processors or special purpose processing units, without departure from the scope of the embodiments described herein.[0088] The parallel processor or GPGPU as described herein may be communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general-purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or other interconnect (e.g., a high-speed interconnect such as PCIe or NVLink). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor. The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions.[0089] FIG. 3C illustrates a graphics processing unit (GPU) 380 which includes dedicated sets of graphics processing resources arranged into multi-core groups 365A-365N. While the details of only a single multi-core group 365A are provided, it will be appreciated that the other multi-core groups 365B-365N may be equipped with the same or similar sets of graphics processing resources. Details described with respect to the multi-core groups 365A-365N may also apply to any graphics multiprocessor 234, 325, 350 described herein.[0090] As illustrated, a multi-core group 365A may include a set of graphics cores 370, a set of tensor cores 371, and a set of ray tracing cores 372. A scheduler/dispatcher 368 schedules and dispatches the graphics threads for execution on the various cores 370, 371, 372. A set of register files 369 stores operand values used by the cores 370, 371, 372 when executing the graphics threads. These may include, for example, integer registers for storing integer values, floating point registers for storing floating point values, vector registers for storing packed data
elements (integer and/or floating-point data elements) and tile registers for storing tensor/matrix values. The tile registers may be implemented as combined sets of vector registers.[0091] One or more combined level 1 (L1) caches and shared memory units 373 store graphics data such as texture data, vertex data, pixel data, ray data, bounding volume data, etc., locally within each multi-core group 365A. One or more texture units 374 can also be used to perform texturing operations, such as texture mapping and sampling. A Level 2 (L2) cache 375 shared by all or a subset of the multi-core groups 365A-365N stores graphics data and/or instructions for multiple concurrent graphics threads. As illustrated, the L2 cache 375 may be shared across a plurality of multi-core groups 365A-365N. One or more memory controllers 367 couple the GPU 380 to a memory 366 which may be a system memory (e.g., DRAM) and/or a dedicated graphics memory (e.g., GDDR6 memory).[0092] Input/output (I/O) circuitry 363 couples the GPU 380 to one or more I/O devices 362 such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices 362 to the GPU 380 and memory 366. One or more I/O memory management units (IOMMUs) 364 of the I/O circuitry 363 couple the I/O devices 362 directly to the system memory 366. Optionally, the IOMMU 364 manages multiple sets of page tables to map virtual addresses to physical addresses in system memory 366. The I/O devices 362, CPU(s) 361, and GPU(s) 380 may then share the same virtual address space.[0093] In one implementation of the IOMMU 364, the IOMMU 364 supports virtualization. In this case, it may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within system memory 366). The base addresses of each of the first and second sets of page tables may be stored in control registers and swapped out on a context switch (e.g., so that the new context is provided with access to the relevant set of page tables). While not illustrated in FIG. 3C, each of the cores 370, 371, 372 and/or multi-core groups 365A-365N may include translation lookaside buffers (TLBs) to cache guest virtual to guest physical translations, guest physical to host physical translations, and guest virtual to host physical translations.[0094] The CPUs 361, GPUs 380, and I/O devices 362 may be integrated on a single semiconductor chip and/or chip package. The illustrated memory 366 may be integrated on the same chip or may be coupled to the memory controllers 367 via an off-chip interface. In one implementation, the memory 366 comprises GDDR6 memory which shares the same virtual address space as other physical system-level memories, although the underlying principles described herein are not limited to this specific implementation.
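By way of illustration only, the two-level translation managed by the IOMMU 364 in the virtualization case of paragraph [0093] can be summarized in software terms as follows. The sketch below is a hypothetical, simplified C++ model of the lookup order (guest/graphics virtual to guest/graphics physical, then guest/graphics physical to system/host physical); the table structures, names, and assumed 4 KiB page size are illustrative assumptions and do not reflect the actual register-level page-table format used by the IOMMU 364.

#include <cstdint>
#include <optional>
#include <unordered_map>

// Hypothetical page-granular translation tables (4 KiB pages assumed).
constexpr uint64_t kPageShift = 12;
constexpr uint64_t kPageMask  = (1ull << kPageShift) - 1;

using PageTable = std::unordered_map<uint64_t, uint64_t>;  // page number -> page number

// First stage: guest/graphics virtual -> guest/graphics physical.
// Second stage: guest/graphics physical -> system/host physical.
std::optional<uint64_t> translate(uint64_t guest_virtual,
                                  const PageTable& stage1,
                                  const PageTable& stage2) {
  const uint64_t offset = guest_virtual & kPageMask;

  auto s1 = stage1.find(guest_virtual >> kPageShift);
  if (s1 == stage1.end()) return std::nullopt;   // stage-1 translation fault

  auto s2 = stage2.find(s1->second);
  if (s2 == stage2.end()) return std::nullopt;   // stage-2 translation fault

  return (s2->second << kPageShift) | offset;    // system/host physical address
}

In hardware, both stages would typically be cached in the TLBs mentioned above (guest virtual to guest physical, guest physical to host physical, and the combined guest virtual to host physical translation), so that the full two-stage walk is only performed on a TLB miss.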
[0095] The tensor cores 371 may include a plurality of execution units specifically designed to perform matrix operations, which are the fundamental compute operation used to perform deep learning operations. For example, simultaneous matrix multiplication operations may be used for neural network training and inferencing. The tensor cores 371 may perform matrix processing using a variety of operand precisions including single precision floating-point (e.g., 32 bits), half-precision floating point (e.g., 16 bits), integer words (16 bits), bytes (8 bits), and half-bytes (4 bits). For example, a neural network implementation extracts features of each rendered scene, potentially combining details from multiple frames, to construct a high-quality final image.[0096] In deep learning implementations, parallel matrix multiplication work may be scheduled for execution on the tensor cores 371. The training of neural networks, in particular, requires a significant number of matrix dot product operations. In order to process an inner-product formulation of an N x N x N matrix multiply, the tensor cores 371 may include at least N dot-product processing elements. Before the matrix multiply begins, one entire matrix is loaded into tile registers and at least one column of a second matrix is loaded each cycle for N cycles. Each cycle, there are N dot products that are processed.[0097] Matrix elements may be stored at different precisions depending on the particular implementation, including 16-bit words, 8-bit bytes (e.g., INT8) and 4-bit half-bytes (e.g., INT4). Different precision modes may be specified for the tensor cores 371 to ensure that the most efficient precision is used for different workloads (e.g., such as inferencing workloads which can tolerate quantization to bytes and half-bytes).[0098] The ray tracing cores 372 may accelerate ray tracing operations for both real-time ray tracing and non-real-time ray tracing implementations. In particular, the ray tracing cores 372 may include ray traversal/intersection circuitry for performing ray traversal using bounding volume hierarchies (BVHs) and identifying intersections between rays and primitives enclosed within the BVH volumes. The ray tracing cores 372 may also include circuitry for performing depth testing and culling (e.g., using a Z buffer or similar arrangement). In one implementation, the ray tracing cores 372 perform traversal and intersection operations in concert with the image denoising techniques described herein, at least a portion of which may be executed on the tensor cores 371. For example, the tensor cores 371 may implement a deep learning neural network to perform denoising of frames generated by the ray tracing cores 372. However, the CPU(s) 361, graphics cores 370, and/or ray tracing cores 372 may also implement all or a portion of the denoising and/or deep learning algorithms.[0099] In addition, as described above, a distributed approach to denoising may be employed in which the GPU 380 is in a computing device coupled to other computing devices over a
network or high-speed interconnect. In this distributed approach, the interconnected computing devices may share neural network learning/training data to improve the speed with which the overall system learns to perform denoising for different types of image frames and/or different graphics applications.[0100] The ray tracing cores 372 may process all BVH traversal and/or ray-primitive intersections, saving the graphics cores 370 from being overloaded with thousands of instructions per ray. For example, each ray tracing core 372 includes a first set of specialized circuitry for performing bounding box tests (e.g., for traversal operations) and/or a second set of specialized circuitry for performing the ray-triangle intersection tests (e.g., intersecting rays which have been traversed). Thus, for example, the multi-core group 365A can simply launch a ray probe, and the ray tracing cores 372 independently perform ray traversal and intersection and return hit data (e.g., a hit, no hit, multiple hits, etc.) to the thread context. The other cores 370, 371 are freed to perform other graphics or compute work while the ray tracing cores 372 perform the traversal and intersection operations.[0101] Optionally, each ray tracing core 372 may include a traversal unit to perform BVH testing operations and/or an intersection unit which performs ray-primitive intersection tests. The intersection unit generates a “hit”, “no hit”, or “multiple hit” response, which it provides to the appropriate thread. During the traversal and intersection operations, the execution resources of the other cores (e.g., graphics cores 370 and tensor cores 371) are freed to perform other forms of graphics work.[0102] In one optional embodiment described below, a hybrid rasterization/ray tracing approach is used in which work is distributed between the graphics cores 370 and ray tracing cores 372.[0103] The ray tracing cores 372 (and/or other cores 370, 371) may include hardware support for a ray tracing instruction set such as Microsoft’s DirectX Ray Tracing (DXR) which includes a DispatchRays command, as well as ray-generation, closest-hit, any-hit, and miss shaders, which enable the assignment of unique sets of shaders and textures for each object. Another ray tracing platform which may be supported by the ray tracing cores 372, graphics cores 370 and tensor cores 371 is Vulkan 1.1.85. Note, however, that the underlying principles described herein are not limited to any particular ray tracing ISA.[0104] In general, the various cores 372, 371, 370 may support a ray tracing instruction set that includes instructions/functions for one or more of ray generation, closest hit, any hit, ray-primitive intersection, per-primitive and hierarchical bounding box construction, miss, visit, and exceptions. More specifically, a preferred embodiment includes ray tracing instructions to perform one or more of the following functions:
[0105] Ray Generation - Ray generation instructions may be executed for each pixel, sample, or other user-defined work assignment.[0106] Closest Hit - A closest hit instruction may be executed to locate the closest intersection point of a ray with primitives within a scene.[0107] Any Hit - An any hit instruction identifies multiple intersections between a ray and primitives within a scene, potentially to identify a new closest intersection point.[0108] Intersection - An intersection instruction performs a ray-primitive intersection test and outputs a result.[0109] Per-Primitive Bounding Box Construction - This instruction builds a bounding box around a given primitive or group of primitives (e.g., when building a new BVH or other acceleration data structure).[0110] Miss - Indicates that a ray misses all geometry within a scene, or specified region of a scene.[0111] Visit - Indicates the children volumes a ray will traverse.[0112] Exceptions - Includes various types of exception handlers (e.g., invoked for various error conditions).
Techniques for GPU to Host Processor Interconnection
[0113] FIG. 4A illustrates an exemplary architecture in which a plurality of GPUs 410-413, e.g., such as the parallel processors 200 shown in FIG. 2A, are communicatively coupled to a plurality of multi-core processors 405-406 over high-speed links 440A-440D (e.g., buses, point-to-point interconnects, etc.). The high-speed links 440A-440D may support a communication throughput of 4GB/s, 30GB/s, 80GB/s or higher, depending on the implementation. Various interconnect protocols may be used including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0. However, the underlying principles described herein are not limited to any particular communication protocol or throughput.[0114] Two or more of the GPUs 410-413 may be interconnected over high-speed links 442A-442B, which may be implemented using the same or different protocols/links than those used for high-speed links 440A-440D. Similarly, two or more of the multi-core processors 405-406 may be connected over a high-speed link 443, which may be a symmetric multi-processor (SMP) bus operating at 20GB/s, 30GB/s, 120GB/s or higher. Alternatively, all communication between the various system components shown in FIG. 4A may be accomplished using the same protocols/links (e.g., over a common interconnection fabric). As mentioned, however, the underlying principles described herein are not limited to any particular type of interconnect technology.
[0115] Each multi-core processor 405-406 may be communicatively coupled to a processor memory 401-402, via memory interconnects 430A-430B, respectively, and each GPU 410-413 is communicatively coupled to GPU memory 420-423 over GPU memory interconnects 450A-450D, respectively. The memory interconnects 430A-430B and 450A-450D may utilize the same or different memory access technologies. By way of example, and not limitation, the processor memories 401-402 and GPU memories 420-423 may be volatile memories such as dynamic random-access memories (DRAMs) (including stacked DRAMs), Graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM) and/or may be non-volatile memories such as 3D XPoint or Nano-Ram. For example, some portion of the memories may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy).[0116] As described below, although the various processors 405-406 and GPUs 410-413 may be physically coupled to a particular memory 401-402, 420-423, respectively, a unified memory architecture may be implemented in which the same virtual system address space (also referred to as the “effective address” space) is distributed among all of the various physical memories. For example, processor memories 401-402 may each comprise 64GB of the system memory address space and GPU memories 420-423 may each comprise 32GB of the system memory address space (resulting in a total of 256GB addressable memory in this example).[0117] FIG. 4B illustrates additional optional details for an interconnection between a multi-core processor 407 and a graphics acceleration module 446. The graphics acceleration module 446 may include one or more GPU chips integrated on a line card which is coupled to the processor 407 via the high-speed link 440. Alternatively, the graphics acceleration module 446 may be integrated on the same package or chip as the processor 407.[0118] The illustrated processor 407 includes a plurality of cores 460A-460D, each with a translation lookaside buffer 461A-461D and one or more caches 462A-462D. The cores may include various other components for executing instructions and processing data which are not illustrated to avoid obscuring the underlying principles of the components described herein (e.g., instruction fetch units, branch prediction units, decoders, execution units, reorder buffers, etc.). The caches 462A-462D may comprise level 1 (L1) and level 2 (L2) caches. In addition, one or more shared caches 456 may be included in the caching hierarchy and shared by sets of the cores 460A-460D. For example, one embodiment of the processor 407 includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, one of the L2 and L3 caches is shared by two adjacent cores. The processor 407 and the graphics accelerator integration module 446 connect with system memory 441, which may include processor memories 401-402.
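As a concrete illustration of the address-space partitioning example in paragraph [0116], the following hypothetical C++ sketch assigns contiguous regions of a single effective address space to each physical memory and resolves which memory backs a given effective address. The region sizes (64GB per processor memory, 32GB per GPU memory) follow the example above; the structure and function names are assumptions chosen for illustration only and are not the actual layout used by any particular implementation.

#include <cstdint>
#include <string>
#include <vector>

// One contiguous slice of the unified effective address space.
struct MemoryRegion {
  std::string name;   // e.g., "processor memory 401"
  uint64_t base;      // first effective address in the region
  uint64_t size;      // region size in bytes
};

constexpr uint64_t GiB = 1ull << 30;

// Build the example layout: two 64GB processor memories followed by four 32GB GPU memories.
std::vector<MemoryRegion> build_effective_address_map() {
  std::vector<MemoryRegion> map;
  uint64_t next = 0;
  for (const char* name : {"processor memory 401", "processor memory 402"}) {
    map.push_back({name, next, 64 * GiB});
    next += 64 * GiB;
  }
  for (const char* name : {"GPU memory 420", "GPU memory 421",
                           "GPU memory 422", "GPU memory 423"}) {
    map.push_back({name, next, 32 * GiB});
    next += 32 * GiB;
  }
  return map;  // 256GB of addressable memory in total
}

// Resolve which physical memory backs a given effective address.
const MemoryRegion* resolve(const std::vector<MemoryRegion>& map, uint64_t addr) {
  for (const auto& r : map) {
    if (addr >= r.base && addr < r.base + r.size) return &r;
  }
  return nullptr;  // address not mapped in this example layout
}

Because every processor and GPU shares this one effective address space, any agent can reach any physical memory simply by using an address that falls inside the corresponding region.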
[0119] Coherency is maintained for data and instructions stored in the various caches 462A-462D, 456 and system memory 441 via inter-core communication over a coherence bus 464. For example, each cache may have cache coherency logic/circuitry associated therewith to communicate over the coherence bus 464 in response to detected reads or writes to particular cache lines. In one implementation, a cache snooping protocol is implemented over the coherence bus 464 to snoop cache accesses. Cache snooping/coherency techniques are well understood by those of skill in the art and will not be described in detail here to avoid obscuring the underlying principles described herein.[0120] A proxy circuit 425 may be provided that communicatively couples the graphics acceleration module 446 to the coherence bus 464, allowing the graphics acceleration module 446 to participate in the cache coherence protocol as a peer of the cores. In particular, an interface 435 provides connectivity to the proxy circuit 425 over high-speed link 440 (e.g., a PCIe bus, NVLink, etc.) and an interface 437 connects the graphics acceleration module 446 to the high-speed link 440.[0121] In one implementation, an accelerator integration circuit 436 provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing engines 431, 432, N of the graphics acceleration module 446. The graphics processing engines 431, 432, N may each comprise a separate graphics processing unit (GPU). Alternatively, the graphics processing engines 431, 432, N may comprise different types of graphics processing engines within a GPU such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In other words, the graphics acceleration module may be a GPU with a plurality of graphics processing engines 431-432, N or the graphics processing engines 431-432, N may be individual GPUs integrated on a common package, line card, or chip.[0122] The accelerator integration circuit 436 may include a memory management unit (MMU) 439 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory 441. The MMU 439 may also include a translation lookaside buffer (TLB) (not shown) for caching the virtual/effective to physical/real address translations. In one implementation, a cache 438 stores commands and data for efficient access by the graphics processing engines 431-432, N. The data stored in cache 438 and graphics memories 433-434, M may be kept coherent with the core caches 462A-462D, 456 and system memory 411. As mentioned, this may be accomplished via proxy circuit 425 which takes part in the cache coherency mechanism on behalf of cache 438 and memories 433-434, M (e.g.,
sending updates to the cache 438 related to modifications/accesses of cache lines on processor caches 462A-462D, 456 and receiving updates from the cache 438).[0123] A set of registers 445 store context data for threads executed by the graphics processing engines 431-432, N and a context management circuit 448 manages the thread contexts. For example, the context management circuit 448 may perform save and restore operations to save and restore contexts of the various threads during context switches (e.g., where a first thread is saved and a second thread is stored so that the second thread can be executed by a graphics processing engine). For example, on a context switch, the context management circuit 448 may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore the register values when returning to the context. An interrupt management circuit 447, for example, may receive and process interrupts received from system devices.[0124] In one implementation, virtual/effective addresses from a graphics processing engine 431 are translated to real/physical addresses in system memory 411 by the MMU 439. Optionally, the accelerator integration circuit 436 supports multiple (e.g., 4, 8, 16) graphics accelerator modules 446 and/or other accelerator devices. The graphics accelerator module 446 may be dedicated to a single application executed on the processor 407 or may be shared between multiple applications. Optionally, a virtualized graphics execution environment is provided in which the resources of the graphics processing engines 431-432, N are shared with multiple applications or virtual machines (VMs). The resources may be subdivided into “slices” which are allocated to different VMs and/or applications based on the processing requirements and priorities associated with the VMs and/or applications.[0125] Thus, the accelerator integration circuit 436 acts as a bridge to the system for the graphics acceleration module 446 and provides address translation and system memory cache services. In one embodiment, to facilitate the bridging functionality, the accelerator integration circuit 436 may also include shared I/O 497 (e.g., PCIe, USB) and hardware to enable system control of voltage, clocking, performance, thermals, and security. The shared I/O 497 may utilize separate physical connections or may traverse the high-speed link 440. In addition, the accelerator integration circuit 436 may provide virtualization facilities for the host processor to manage virtualization of the graphics processing engines, interrupts, and memory management.[0126] Because hardware resources of the graphics processing engines 431-432, N are mapped explicitly to the real address space seen by the host processor 407, any host processor can address these resources directly using an effective address value. One optional function of the accelerator integration circuit 436 is the physical separation of the graphics processing engines 431-432, N so that they appear to the system as independent units.
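The save/restore behavior of the context management circuit 448 described in paragraph [0123] can be modeled in software as follows. This is a hypothetical, simplified C++ sketch; the register count, the ThreadContext structure, and the function names are assumptions chosen for illustration and do not reflect the actual register layout of the accelerator integration circuit 436.

#include <array>
#include <cstdint>

// Hypothetical engine register file and per-thread context save area.
constexpr std::size_t kNumRegisters = 64;

struct EngineRegisters {
  std::array<uint64_t, kNumRegisters> regs{};
};

struct ThreadContext {
  // Save area; in a real implementation this would be a region of memory
  // identified by a context pointer rather than an in-process structure.
  std::array<uint64_t, kNumRegisters> saved_regs{};
};

// On a context switch, store the current register values of the outgoing thread
// to its designated save area, then load the register values of the incoming thread.
void context_switch(EngineRegisters& engine,
                    ThreadContext& outgoing,
                    const ThreadContext& incoming) {
  outgoing.saved_regs = engine.regs;   // save the first thread's context
  engine.regs = incoming.saved_regs;   // restore the second thread's context
}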
[0127] One or more graphics memories 433-434, M may be coupled to each of the graphics processing engines 431-432, N, respectively. The graphics memories 433-434, M store instructions and data being processed by each of the graphics processing engines 431-432, N. The graphics memories 433-434, M may be volatile memories such as DRAMs (including stacked DRAMs), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint or Nano-Ram.[0128] To reduce data traffic over the high-speed link 440, biasing techniques may be used to ensure that the data stored in graphics memories 433-434, M is data which will be used most frequently by the graphics processing engines 431-432, N and preferably not used by the cores 460A-460D (at least not frequently). Similarly, the biasing mechanism attempts to keep data needed by the cores (and preferably not the graphics processing engines 431-432, N) within the caches 462A-462D, 456 of the cores and system memory 411.[0129] According to a variant shown in FIG. 4C the accelerator integration circuit 436 is integrated within the processor 407. The graphics processing engines 431-432, N communicate directly over the high-speed link 440 to the accelerator integration circuit 436 via interface 437 and interface 435 (which, again, may utilize any form of bus or interface protocol). The accelerator integration circuit 436 may perform the same operations as those described with respect to FIG. 4B, but potentially at a higher throughput given its close proximity to the coherence bus 464 and caches 462A-462D, 456.[0130] The embodiments described may support different programming models including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization). The latter may include programming models which are controlled by the accelerator integration circuit 436 and programming models which are controlled by the graphics acceleration module 446.[0131] In the embodiments of the dedicated process model, graphics processing engines 431-432, N may be dedicated to a single application or process under a single operating system. The single application can funnel other application requests to the graphics engines 431-432, N, providing virtualization within a VM/partition.[0132] In the shared programming models, the graphics processing engines 431-432, N, may be shared by multiple VM/application partitions. The shared models require a system hypervisor to virtualize the graphics processing engines 431-432, N to allow access by each operating system. For single-partition systems without a hypervisor, the graphics processing engines 431-432, N are owned by the operating system. In both cases, the operating system can virtualize the graphics processing engines 431-432, N to provide access to each process or application.
[0133] For the shared programming model, the graphics acceleration module 446 or an individual graphics processing engine 431-432, N selects a process element using a process handle. The process elements may be stored in system memory 411 and be addressable using the effective address to real address translation techniques described herein. The process handle may be an implementation-specific value provided to the host process when registering its context with the graphics processing engine 431-432, N (that is, calling system software to add the process element to the process element linked list). The lower 16-bits of the process handle may be the offset of the process element within the process element linked list.[0134] FIG. 4D illustrates an exemplary accelerator integration slice 490. As used herein, a “slice” comprises a specified portion of the processing resources of the accelerator integration circuit 436. Application effective address space 482 within system memory 411 stores process elements 483. The process elements 483 may be stored in response to GPU invocations 481 from applications 480 executed on the processor 407. A process element 483 contains the process state for the corresponding application 480. A work descriptor (WD) 484 contained in the process element 483 can be a single job requested by an application or may contain a pointer to a queue of jobs. In the latter case, the WD 484 is a pointer to the job request queue in the application’s address space 482.[0135] The graphics acceleration module 446 and/or the individual graphics processing engines 431-432, N can be shared by all or a subset of the processes in the system. For example, the technologies described herein may include an infrastructure for setting up the process state and sending a WD 484 to a graphics acceleration module 446 to start a job in a virtualized environment.[0136] In one implementation, the dedicated-process programming model is implementation- specific. In this model, a single process owns the graphics acceleration module 446 or an individual graphics processing engine 431. Because the graphics acceleration module 446 is owned by a single process, the hypervisor initializes the accelerator integration circuit 436 for the owning partition and the operating system initializes the accelerator integration circuit 436 for the owning process at the time when the graphics acceleration module 446 is assigned.[0137] In operation, a WD fetch unit 491 in the accelerator integration slice 490 fetches the next WD 484 which includes an indication of the work to be done by one of the graphics processing engines of the graphics acceleration module 446. Data from the WD 484 may be stored in registers 445 and used by the MMU 439, interrupt management circuit 447 and/or context management circuit 448 as illustrated. For example, the MMU 439 may include segment/page walk circuitry for accessing segment/page tables 486 within the OS virtual address space 485. The interrupt management circuit 447 may process interrupt events 492 received
from the graphics acceleration module 446. When performing graphics operations, an effective address 493 generated by a graphics processing engine 431-432, N is translated to a real address by the MMU 439.[0138] The same set of registers 445 may be duplicated for each graphics processing engine 431-432, N and/or graphics acceleration module 446 and may be initialized by the hypervisor or operating system. Each of these duplicated registers may be included in an accelerator integration slice 490. Exemplary registers that may be initialized by the hypervisor are shown in Table 1.
Table 1 - Hypervisor Initialized Registers
[0139] Exemplary registers that may be initialized by the operating system are shown in Table 2.
Table 2 - Operating System Initialized Registers
[0140] Each WD 484 may be specific to a particular graphics acceleration module 446 and/or graphics processing engine 431-432, N. It contains all the information a graphics processing engine 431-432, N requires to do its work or it can be a pointer to a memory location where the application has set up a command queue of work to be completed.[0141] FIG. 4E illustrates additional optional details of a shared model. It includes a hypervisor real address space 498 in which a process element list 499 is stored. The hypervisor real address space 498 is accessible via a hypervisor 496 which virtualizes the graphics acceleration module engines for the operating system 495.[0142] The shared programming models allow for all or a subset of processes from all or a subset of partitions in the system to use a graphics acceleration module 446. There are two programming models where the graphics acceleration module 446 is shared by multiple processes and partitions: time-sliced shared and graphics directed shared.[0143] In this model, the system hypervisor 496 owns the graphics acceleration module 446 and makes its function available to all operating systems 495. For a graphics acceleration module 446 to support virtualization by the system hypervisor 496, the graphics acceleration module 446 may adhere to the following requirements: 1) An application’s job request must be autonomous (that is, the state does not need to be maintained between jobs), or the graphics acceleration module 446 must provide a context save and restore mechanism. 2) An application’s job request is guaranteed by the graphics acceleration module 446 to complete in a specified amount of time, including any translation faults, or the graphics acceleration module 446 provides the ability to preempt the processing of the job. 3) The graphics acceleration module 446 must be guaranteed fairness between processes when operating in the directed shared programming model.[0144] For the shared model, the application 480 may be required to make an operating system 495 system call with a graphics acceleration module 446 type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). The graphics acceleration module 446 type describes the targeted acceleration function for the system call. The graphics acceleration module 446 type may be a system-specific value. The WD is formatted specifically for the graphics acceleration module 446 and can be in the form of a graphics acceleration module 446 command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure to describe the work to be done by the graphics acceleration module 446.
In one embodiment, the AMR value is the AMR state to use for the current process. The value passed to the operating
system is similar to an application setting the AMR. If the accelerator integration circuit 436 and graphics acceleration module 446 implementations do not support a User Authority Mask Override Register (UAMOR), the operating system may apply the current UAMOR value to the AMR value before passing the AMR in the hypervisor call. The hypervisor 496 may optionally apply the current Authority Mask Override Register (AMOR) value before placing the AMR into the process element 483. The CSRP may be one of the registers 445 containing the effective address of an area in the application’s address space 482 for the graphics acceleration module 446 to save and restore the context state. This pointer is optional if no state is required to be saved between jobs or when a job is preempted. The context save/restore area may be pinned system memory.[0145] Upon receiving the system call, the operating system 495 may verify that the application 480 has registered and been given the authority to use the graphics acceleration module 446. The operating system 495 then calls the hypervisor 496 with the information shown in Table 3.
Table 3 - OS to Hypervisor Call Parameters
[0146] Upon receiving the hypervisor call, the hypervisor 496 verifies that the operating system 495 has registered and been given the authority to use the graphics acceleration module 446. The hypervisor 496 then puts the process element 483 into the process element linked list for the corresponding graphics acceleration module 446 type. The process element may include the information shown in Table 4.
Table 4 - Process Element Information
[0147] The hypervisor may initialize a plurality of accelerator integration slice 490 registers 445.[0148] As illustrated in FIG. 4F, in one optional implementation, a unified memory addressable via a common virtual memory address space used to access the physical processor memories 401-402 and GPU memories 420-423 is employed. In this implementation, operations executed on the GPUs 410-413 utilize the same virtual/effective memory address space to access the processor memories 401-402 and vice versa, thereby simplifying programmability. A first portion of the virtual/effective address space may be allocated to the processor memory 401, a second portion to the second processor memory 402, a third portion to the GPU memory 420, and so on. The entire virtual/effective memory space (sometimes referred to as the effective address space) may thereby be distributed across each of the processor memories 401-402 and GPU memories 420-423, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory.[0149] Bias/coherence management circuitry 494A-494E within one or more of the MMUs 439A-439E may be provided that ensures cache coherence between the caches of the host processors (e.g., 405) and the GPUs 410-413 and implements biasing techniques indicating the physical memories in which certain types of data should be stored. While multiple instances of bias/coherence management circuitry 494A-494E are illustrated in FIG. 4F, the bias/coherence
circuitry may be implemented within the MMU of one or more host processors 405 and/or within the accelerator integration circuit 436.[0150] The GPU-attached memory 420-423 may be mapped as part of system memory, and accessed using shared virtual memory (SVM) technology, but without suffering the typical performance drawbacks associated with full system cache coherence. The ability of GPU-attached memory 420-423 to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. This arrangement allows the host processor 405 software to set up operands and access computation results, without the overhead of traditional I/O DMA data copies. Such traditional copies involve driver calls, interrupts and memory mapped I/O (MMIO) accesses that are all inefficient relative to simple memory accesses. At the same time, the ability to access GPU-attached memory 420-423 without cache coherence overheads can be critical to the execution time of an offloaded computation. In cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce the effective write bandwidth seen by a GPU 410-413. The efficiency of operand setup, the efficiency of results access, and the efficiency of GPU computation all play a role in determining the effectiveness of GPU offload.[0151] A selection between GPU bias and host processor bias may be driven by a bias tracker data structure. A bias table may be used, for example, which may be a page-granular structure (i.e., controlled at the granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. The bias table may be implemented in a stolen memory range of one or more GPU-attached memories 420-423, with or without a bias cache in the GPU 410-413 (e.g., to cache frequently/recently used entries of the bias table). Alternatively, the entire bias table may be maintained within the GPU.[0152] In one implementation, the bias table entry associated with each access to the GPU-attached memory 420-423 is accessed prior to the actual access to the GPU memory, causing the following operations. First, local requests from the GPU 410-413 that find their page in GPU bias are forwarded directly to a corresponding GPU memory 420-423. Local requests from the GPU that find their page in host bias are forwarded to the processor 405 (e.g., over a high-speed link as discussed above). Optionally, requests from the processor 405 that find the requested page in host processor bias complete the request like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to the GPU 410-413. The GPU may then transition the page to a host processor bias if it is not currently using the page.[0153] The bias state of a page can be changed either by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism.
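The page-granular bias lookup and request routing described in paragraphs [0151] and [0152] can be summarized by the following hypothetical C++ sketch. The enum values, the packed two-bits-per-page table layout, and the routing function are illustrative assumptions and do not represent the actual hardware format of the bias tracker data structure.

#include <cstdint>
#include <vector>

// Hypothetical per-page bias state, stored as 2 bits per GPU-attached memory page.
enum class Bias : uint8_t { HostBias = 0, GpuBias = 1 };

class BiasTable {
 public:
  explicit BiasTable(std::size_t num_pages) : bits_((num_pages * 2 + 7) / 8, 0) {}

  Bias get(std::size_t page) const {
    return static_cast<Bias>((bits_[page / 4] >> ((page % 4) * 2)) & 0x3);
  }
  void set(std::size_t page, Bias b) {
    const std::size_t byte = page / 4, shift = (page % 4) * 2;
    bits_[byte] = static_cast<uint8_t>((bits_[byte] & ~(0x3 << shift)) |
                                       (static_cast<uint8_t>(b) << shift));
  }

 private:
  std::vector<uint8_t> bits_;  // packed 2-bit entries, e.g., kept in a stolen memory range
};

enum class Route { GpuLocalMemory, ForwardToHost, NormalHostRead, ForwardToGpu };

// Route an access according to who issued it and the page's current bias,
// following the behavior described in paragraph [0152].
Route route_access(const BiasTable& table, std::size_t page, bool request_from_gpu) {
  const Bias bias = table.get(page);
  if (request_from_gpu) {
    return bias == Bias::GpuBias ? Route::GpuLocalMemory : Route::ForwardToHost;
  }
  return bias == Bias::HostBias ? Route::NormalHostRead : Route::ForwardToGpu;
}

Because the table is consulted before every access to GPU-attached memory, keeping it small (1 or 2 bits per page) and optionally cached in the GPU keeps the lookup off the critical path.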
[0154] One mechanism for changing the bias state employs an API call (e.g., OpenCL), which, in turn, calls the GPU’s device driver which, in turn, sends a message (or enqueues a command descriptor) to the GPU directing it to change the bias state and, for some transitions, perform a cache flushing operation in the host. The cache flushing operation is required for a transition from host processor 405 bias to GPU bias, but is not required for the opposite transition.[0155] Cache coherency may be maintained by temporarily rendering GPU-biased pages uncacheable by the host processor 405. To access these pages, the processor 405 may request access from the GPU 410 which may or may not grant access right away, depending on the implementation. Thus, to reduce communication between the host processor 405 and GPU 410 it is beneficial to ensure that GPU-biased pages are those which are required by the GPU but not the host processor 405 and vice versa.
Graphics Processing Pipeline
[0156] FIG. 5 illustrates a graphics processing pipeline 500. A graphics multiprocessor, such as graphics multiprocessor 234 as in FIG. 2D, graphics multiprocessor 325 of FIG. 3A, or graphics multiprocessor 350 of FIG. 3B, can implement the illustrated graphics processing pipeline 500. The graphics multiprocessor can be included within the parallel processing subsystems as described herein, such as the parallel processor 200 of FIG. 2A, which may be related to the parallel processor(s) 112 of FIG. 1 and may be used in place of one of those. The various parallel processing systems can implement the graphics processing pipeline 500 via one or more instances of the parallel processing unit (e.g., parallel processing unit 202 of FIG. 2A) as described herein. For example, a shader unit (e.g., graphics multiprocessor 234 of FIG. 2C) may be configured to perform the functions of one or more of a vertex processing unit 504, a tessellation control processing unit 508, a tessellation evaluation processing unit 512, a geometry processing unit 516, and a fragment/pixel processing unit 524. The functions of data assembler 502, primitive assemblers 506, 514, 518, tessellation unit 510, rasterizer 522, and raster operations unit 526 may also be performed by other processing engines within a processing cluster (e.g., processing cluster 214 of FIG. 2A) and a corresponding partition unit (e.g., partition unit 220A-220N of FIG. 2A). The graphics processing pipeline 500 may also be implemented using dedicated processing units for one or more functions. It is also possible that one or more portions of the graphics processing pipeline 500 are performed by parallel processing logic within a general-purpose processor (e.g., CPU). Optionally, one or more portions of the graphics processing pipeline 500 can access on-chip memory (e.g., parallel processor memory 222 as in FIG. 2A) via a memory interface 528, which may be an instance of the memory interface 218 of
FIG. 2A. The graphics processing pipeline 500 may also be implemented via a multi-core group 365A as in FIG. 3C.[0157] The data assembler 502 is a processing unit that may collect vertex data for surfaces and primitives. The data assembler 502 then outputs the vertex data, including the vertex attributes, to the vertex processing unit 504. The vertex processing unit 504 is a programmable execution unit that executes vertex shader programs, lighting and transforming vertex data as specified by the vertex shader programs. The vertex processing unit 504 reads data that is stored in cache, local or system memory for use in processing the vertex data and may be programmed to transform the vertex data from an object-based coordinate representation to a world space coordinate space or a normalized device coordinate space.[0158] A first instance of a primitive assembler 506 receives vertex attributes from the vertex processing unit 504. The primitive assembler 506 reads stored vertex attributes as needed and constructs graphics primitives for processing by the tessellation control processing unit 508. The graphics primitives include triangles, line segments, points, patches, and so forth, as supported by various graphics processing application programming interfaces (APIs).[0159] The tessellation control processing unit 508 treats the input vertices as control points for a geometric patch. The control points are transformed from an input representation of the patch (e.g., the patch’s bases) to a representation that is suitable for use in surface evaluation by the tessellation evaluation processing unit 512. The tessellation control processing unit 508 can also compute tessellation factors for edges of geometric patches. A tessellation factor applies to a single edge and quantifies a view-dependent level of detail associated with the edge. A tessellation unit 510 is configured to receive the tessellation factors for edges of a patch and to tessellate the patch into multiple geometric primitives such as line, triangle, or quadrilateral primitives, which are transmitted to a tessellation evaluation processing unit 512. The tessellation evaluation processing unit 512 operates on parameterized coordinates of the subdivided patch to generate a surface representation and vertex attributes for each vertex associated with the geometric primitives.[0160] A second instance of a primitive assembler 514 receives vertex attributes from the tessellation evaluation processing unit 512, reading stored vertex attributes as needed, and constructs graphics primitives for processing by the geometry processing unit 516. The geometry processing unit 516 is a programmable execution unit that executes geometry shader programs to transform graphics primitives received from primitive assembler 514 as specified by the geometry shader programs. The geometry processing unit 516 may be programmed to subdivide the graphics primitives into one or more new graphics primitives and calculate parameters used to rasterize the new graphics primitives.
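To make the coordinate transformation performed by the vertex processing unit 504 (paragraph [0157]) concrete, the following hypothetical C++ sketch applies a 4x4 model-view-projection matrix to an object-space vertex and then performs the perspective divide to reach normalized device coordinates. The matrix layout and the structure names are illustrative assumptions, not the actual shader model used by the graphics processing pipeline 500.

#include <array>

struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major 4x4 matrix

// Multiply a 4x4 matrix by a column vector.
Vec4 transform(const Mat4& m, const Vec4& v) {
  auto row = [&](int r) {
    return m[r][0] * v.x + m[r][1] * v.y + m[r][2] * v.z + m[r][3] * v.w;
  };
  return {row(0), row(1), row(2), row(3)};
}

// Transform an object-space position to normalized device coordinates:
// object space -> clip space (via the combined model-view-projection matrix),
// then the perspective divide by w.
Vec4 object_to_ndc(const Mat4& model_view_projection, const Vec4& object_position) {
  Vec4 clip = transform(model_view_projection, object_position);
  return {clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f};
}

A vertex shader program would apply this kind of transformation independently to every vertex, which is why the work maps naturally onto the many execution threads of the graphics multiprocessors described earlier.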
[0161] The geometry processing unit 516 may be able to add or delete elements in the geometry stream. The geometry processing unit 516 outputs the parameters and vertices specifying new graphics primitives to primitive assembler 518. The primitive assembler 518 receives the parameters and vertices from the geometry processing unit 516 and constructs graphics primitives for processing by a viewport scale, cull, and clip unit 520. The geometry processing unit 516 reads data that is stored in parallel processor memory or system memory for use in processing the geometry data. The viewport scale, cull, and clip unit 520 performs clipping, culling, and viewport scaling and outputs processed graphics primitives to a rasterizer 522.[0162] The rasterizer 522 can perform depth culling and other depth-based optimizations. The rasterizer 522 also performs scan conversion on the new graphics primitives to generate fragments and output those fragments and associated coverage data to the fragment/pixel processing unit 524. The fragment/pixel processing unit 524 is a programmable execution unit that is configured to execute fragment shader programs or pixel shader programs. The fragment/pixel processing unit 524 transforms fragments or pixels received from rasterizer 522, as specified by the fragment or pixel shader programs. For example, the fragment/pixel processing unit 524 may be programmed to perform operations including but not limited to texture mapping, shading, blending, texture correction and perspective correction to produce shaded fragments or pixels that are output to a raster operations unit 526. The fragment/pixel processing unit 524 can read data that is stored in either the parallel processor memory or the system memory for use when processing the fragment data. Fragment or pixel shader programs may be configured to shade at sample, pixel, tile, or other granularities depending on the sampling rate configured for the processing units.[0163] The raster operations unit 526 is a processing unit that performs raster operations including, but not limited to stencil, z-test, blending, and the like, and outputs pixel data as processed graphics data to be stored in graphics memory (e.g., parallel processor memory 222 as in FIG. 2A, and/or system memory 104 as in FIG. 1), to be displayed on the one or more display device(s) 110 or for further processing by one of the one or more processor(s) 102 or parallel processor(s) 112. The raster operations unit 526 may be configured to compress z or color data that is written to memory and decompress z or color data that is read from memory.
Machine Learning Overview
[0164] The architecture described above can be applied to perform training and inference operations using machine learning models. Machine learning has been successful at solving many kinds of tasks. The computations that arise when training and using machine learning algorithms (e.g., neural networks) lend themselves naturally to efficient parallel
implementations. Accordingly, parallel processors such as general-purpose graphics processing units (GPGPUs) have played a significant role in the practical implementation of deep neural networks. Parallel graphics processors with single instruction, multiple thread (SIMT) architectures are designed to maximize the amount of parallel processing in the graphics pipeline. In an SIMT architecture, groups of parallel threads attempt to execute program instructions synchronously together as often as possible to increase processing efficiency. The efficiency provided by parallel machine learning algorithm implementations allows the use of high capacity networks and enables those networks to be trained on larger datasets.[0165] A machine learning algorithm is an algorithm that can learn based on a set of data. For example, machine learning algorithms can be designed to model high-level abstractions within a data set. For example, image recognition algorithms can be used to determine to which of several categories a given input belongs; regression algorithms can output a numerical value given an input; and pattern recognition algorithms can be used to generate translated text or perform text to speech and/or speech recognition.[0166] An exemplary type of machine learning algorithm is a neural network. There are many types of neural networks; a simple type of neural network is a feedforward network. A feedforward network may be implemented as an acyclic graph in which the nodes are arranged in layers. Typically, a feedforward network topology includes an input layer and an output layer that are separated by at least one hidden layer. The hidden layer transforms input received by the input layer into a representation that is useful for generating output in the output layer. The network nodes are fully connected via edges to the nodes in adjacent layers, but there are no edges between nodes within each layer. Data received at the nodes of an input layer of a feedforward network are propagated (i.e., “fed forward”) to the nodes of the output layer via an activation function that calculates the states of the nodes of each successive layer in the network based on coefficients (“weights”) respectively associated with each of the edges connecting the layers. Depending on the specific model being represented by the algorithm being executed, the output from the neural network algorithm can take various forms.[0167] Before a machine learning algorithm can be used to model a particular problem, the algorithm is trained using a training data set. Training a neural network involves selecting a network topology, using a set of training data representing a problem being modeled by the network, and adjusting the weights until the network model performs with a minimal error for all instances of the training data set. For example, during a supervised learning training process for a neural network, the output produced by the network in response to the input representing an instance in a training data set is compared to the “correct” labeled output for that instance, an error signal representing the difference between the output and the labeled output is calculated,
and the weights associated with the connections are adjusted to minimize that error as the error signal is backward propagated through the layers of the network. The network is considered “trained” when the errors for each of the outputs generated from the instances of the training data set are minimized.[0168] The accuracy of a machine learning algorithm can be affected significantly by the quality of the data set used to train the algorithm. The training process can be computationally intensive and may require a significant amount of time on a conventional general-purpose processor. Accordingly, parallel processing hardware is used to train many types of machine learning algorithms. This is particularly useful for optimizing the training of neural networks, as the computations performed in adjusting the coefficients in neural networks lend themselves naturally to parallel implementations. Specifically, many machine learning algorithms and software applications have been adapted to make use of the parallel processing hardware within general-purpose graphics processing devices.[0169] FIG. 6 is a generalized diagram of a machine learning software stack 600. A machine learning application 602 can be configured to train a neural network using a training dataset or to use a trained deep neural network to implement machine intelligence. The machine learning application 602 can include training and inference functionality for a neural network and/or specialized software that can be used to train a neural network before deployment. The machine learning application 602 can implement any type of machine intelligence including but not limited to image recognition, mapping and localization, autonomous navigation, speech synthesis, medical imaging, or language translation.[0170] Hardware acceleration for the machine learning application 602 can be enabled via a machine learning framework 604. The machine learning framework 604 can provide a library of machine learning primitives. Machine learning primitives are basic operations that are commonly performed by machine learning algorithms. Without the machine learning framework 604, developers of machine learning algorithms would be required to create and optimize the main computational logic associated with the machine learning algorithm, then re-optimize the computational logic as new parallel processors are developed. Instead, the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework 604. Exemplary primitives include tensor convolutions, activation functions, and pooling, which are computational operations that are performed while training a convolutional neural network (CNN). The machine learning framework 604 can also provide primitives to implement basic linear algebra subprograms performed by many machine-learning algorithms, such as matrix and vector operations.
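To illustrate the kind of primitive a machine learning framework 604 might expose, the following hypothetical C++ sketch implements a single-channel 2D convolution with no padding and unit stride, one of the basic operations mentioned above for convolutional neural networks. A real framework primitive would be heavily optimized for the GPGPU hardware 610; this reference version shows only the arithmetic, and the Tensor2D type and function name are assumptions for illustration.

#include <cstddef>
#include <vector>

// A minimal dense 2D tensor stored in row-major order.
struct Tensor2D {
  int rows = 0, cols = 0;
  std::vector<float> data;                    // size == rows * cols
  float at(int r, int c) const { return data[static_cast<std::size_t>(r) * cols + c]; }
  float& at(int r, int c) { return data[static_cast<std::size_t>(r) * cols + c]; }
};

// Single-channel 2D convolution ("valid" padding, stride 1): the kernel is slid over
// the input and an element-wise multiply-accumulate is performed at each position.
Tensor2D conv2d(const Tensor2D& input, const Tensor2D& kernel) {
  Tensor2D out;
  out.rows = input.rows - kernel.rows + 1;
  out.cols = input.cols - kernel.cols + 1;
  out.data.assign(static_cast<std::size_t>(out.rows) * out.cols, 0.0f);

  for (int r = 0; r < out.rows; ++r) {
    for (int c = 0; c < out.cols; ++c) {
      float acc = 0.0f;
      for (int kr = 0; kr < kernel.rows; ++kr) {
        for (int kc = 0; kc < kernel.cols; ++kc) {
          acc += input.at(r + kr, c + kc) * kernel.at(kr, kc);
        }
      }
      out.at(r, c) = acc;                     // one entry of the output feature map
    }
  }
  return out;
}

Each output position is independent of the others, which is why this operation maps naturally onto the parallel processing hardware described earlier; a framework would typically dispatch the outer loops across many GPU threads rather than executing them serially.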
[0171] The machine learning framework 604 can process input data received from the machine learning application 602 and generate the appropriate input to a compute framework 606. The compute framework 606 can abstract the underlying instructions provided to the GPGPU driver 608 to enable the machine learning framework 604 to take advantage of hardware acceleration via the GPGPU hardware 610 without requiring the machine learning framework 604 to have intimate knowledge of the architecture of the GPGPU hardware 610. Additionally, the compute framework 606 can enable hardware acceleration for the machine learning framework 604 across a variety of types and generations of the GPGPU hardware 610.
GPGPU Machine Learning Acceleration
[0172] FIG. 7 illustrates a general-purpose graphics processing unit 700, which may be the parallel processor 200 of FIG. 2A or the parallel processor(s) 112 of FIG. 1. The general-purpose graphics processing unit (GPGPU) 700 may be configured to be particularly efficient in processing the type of computational workloads associated with training deep neural networks. Additionally, the GPGPU 700 can be linked directly to other instances of the GPGPU to create a multi-GPU cluster to improve training speed for particularly deep neural networks.[0173] The GPGPU 700 includes a host interface 702 to enable a connection with a host processor. The host interface 702 may be a PCI Express interface. However, the host interface can also be a vendor specific communications interface or communications fabric. The GPGPU 700 receives commands from the host processor and uses a global scheduler 704 to distribute execution threads associated with those commands to a set of processing clusters 706A-706H. The processing clusters 706A-706H share a cache memory 708. The cache memory 708 can serve as a higher-level cache for cache memories within the processing clusters 706A-706H. The illustrated processing clusters 706A-706H may correspond with processing clusters 214A-214N as in FIG. 2A.[0174] The GPGPU 700 includes memory 714A-714B coupled with the processing clusters 706A-706H via a set of memory controllers 712A-712B. The memory 714A-714B can include various types of memory devices including dynamic random-access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. The memory 714A-714B may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM).[0175] Each of the processing clusters 706A-706H may include a set of graphics multiprocessors, such as the graphics multiprocessor 234 of FIG. 2D, graphics multiprocessor 325 of FIG. 3A, graphics multiprocessor 350 of FIG. 3B, or may include a multi-core group 365A-365N as in FIG. 3C. The graphics multiprocessors of the compute cluster include multiple
types of integer and floating-point logic units that can perform computational operations at a range of precisions, including precisions suited for machine learning computations. For example, at least a subset of the floating-point units in each of the processing clusters 706A-706H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of the floating-point units can be configured to perform 64-bit floating point operations.[0176] Multiple instances of the GPGPU 700 can be configured to operate as a compute cluster. The communication mechanism used by the compute cluster for synchronization and data exchange varies across embodiments. For example, the multiple instances of the GPGPU 700 communicate over the host interface 702. In one embodiment the GPGPU 700 includes an I/O hub 709 that couples the GPGPU 700 with a GPU link 710 that enables a direct connection to other instances of the GPGPU. The GPU link 710 may be coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of the GPGPU 700. Optionally, the GPU link 710 couples with a high-speed interconnect to transmit and receive data to other GPGPUs or parallel processors. The multiple instances of the GPGPU 700 may be located in separate data processing systems and communicate via a network device that is accessible via the host interface 702. The GPU link 710 may be configured to enable a connection to a host processor in addition to or as an alternative to the host interface 702.[0177] While the illustrated configuration of the GPGPU 700 can be configured to train neural networks, an alternate configuration of the GPGPU 700 can be configured for deployment within a high performance or low power inferencing platform. In an inferencing configuration, the GPGPU 700 includes fewer of the processing clusters 706A-706H relative to the training configuration. Additionally, memory technology associated with the memory 714A-714B may differ between inferencing and training configurations. In one embodiment, the inferencing configuration of the GPGPU 700 can support inferencing-specific instructions. For example, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which are commonly used during inferencing operations for deployed neural networks.[0178] FIG. 8 illustrates a multi-GPU computing system 800. The multi-GPU computing system 800 can include a processor 802 coupled to multiple GPGPUs 806A-806D via a host interface switch 804. The host interface switch 804 may be a PCI express switch device that couples the processor 802 to a PCI express bus over which the processor 802 can communicate with the set of GPGPUs 806A-806D. Each of the multiple GPGPUs 806A-806D can be an instance of the GPGPU 700 of FIG. 7. The GPGPUs 806A-806D can interconnect via a set of high-speed point-to-point GPU-to-GPU links 816. The high-speed GPU-to-GPU links can connect to each of the GPGPUs 806A-806D via a dedicated GPU link, such as the GPU link 710
as in FIG. 7. The P2P GPU links 816 enable direct communication between each of the GPGPUs 806A-806D without requiring communication over the host interface bus to which the processor 802 is connected. With GPU-to-GPU traffic directed to the P2P GPU links, the host interface bus remains available for system memory access or to communicate with other instances of the multi-GPU computing system 800, for example, via one or more network devices. While in FIG. 8 the GPGPUs 806A-806D connect to the processor 802 via the host interface switch 804, the processor 802 may alternatively include direct support for the P2P GPU links 816 and connect directly to the GPGPUs 806A-806D.
Machine Learning Neural Network Implementations
[0179] The computing architecture described herein can be configured to perform the types of parallel processing that are particularly suited for training and deploying neural networks for machine learning. A neural network can be generalized as a network of functions having a graph relationship. As is well-known in the art, there are a variety of types of neural network implementations used in machine learning. One exemplary type of neural network is the feedforward network, as previously described.
[0180] A second exemplary type of neural network is the Convolutional Neural Network (CNN). A CNN is a specialized feedforward neural network for processing data having a known, grid-like topology, such as image data. Accordingly, CNNs are commonly used for computer vision and image recognition applications, but they also may be used for other types of pattern recognition such as speech and language processing. The nodes in the CNN input layer are organized into a set of “filters” (feature detectors inspired by the receptive fields found in the retina), and the output of each set of filters is propagated to nodes in successive layers of the network. The computations for a CNN include applying the convolution mathematical operation to each filter to produce the output of that filter. Convolution is a specialized kind of mathematical operation performed on two functions to produce a third function that is a modified version of one of the two original functions. In convolutional network terminology, the first function to the convolution can be referred to as the input, while the second function can be referred to as the convolution kernel. The output may be referred to as the feature map. For example, the input to a convolution layer can be a multidimensional array of data that defines the various color components of an input image. The convolution kernel can be a multidimensional array of parameters, where the parameters are adapted by the training process for the neural network.
[0181] Recurrent neural networks (RNNs) are a family of neural networks that include feedback connections between layers. RNNs enable modeling of sequential data by sharing parameter data across different parts of the neural network. The architecture for an RNN
includes cycles. The cycles represent the influence of a present value of a variable on its own value at a future time, as at least a portion of the output data from the RNN is used as feedback for processing subsequent input in a sequence. This feature makes RNNs particularly useful for language processing due to the variable nature in which language data can be composed.
[0182] The figures described below present exemplary feedforward, CNN, and RNN networks, as well as describe a general process for respectively training and deploying each of those types of networks. It will be understood that these descriptions are exemplary and non-limiting as to any specific embodiment described herein, and the concepts illustrated can be applied generally to deep neural networks and machine learning techniques in general.
[0183] The exemplary neural networks described above can be used to perform deep learning. Deep learning is machine learning using deep neural networks. The deep neural networks used in deep learning are artificial neural networks composed of multiple hidden layers, as opposed to shallow neural networks that include only a single hidden layer. Deeper neural networks are generally more computationally intensive to train. However, the additional hidden layers of the network enable multistep pattern recognition that results in reduced output error relative to shallow machine learning techniques.
[0184] Deep neural networks used in deep learning typically include a front-end network to perform feature recognition coupled to a back-end network which represents a mathematical model that can perform operations (e.g., object classification, speech recognition, etc.) based on the feature representation provided to the model. Deep learning enables machine learning to be performed without requiring hand-crafted feature engineering to be performed for the model. Instead, deep neural networks can learn features based on statistical structure or correlation within the input data. The learned features can be provided to a mathematical model that can map detected features to an output. The mathematical model used by the network is generally specialized for the specific task to be performed, and different models will be used to perform different tasks.
[0185] Once the neural network is structured, a learning model can be applied to the network to train the network to perform specific tasks. The learning model describes how to adjust the weights within the model to reduce the output error of the network. Backpropagation of errors is a common method used to train neural networks. An input vector is presented to the network for processing. The output of the network is compared to the desired output using a loss function and an error value is calculated for each of the neurons in the output layer. The error values are then propagated backwards until each neuron has an associated error value which roughly represents its contribution to the original output. The network can then learn from those errors
using an algorithm, such as the stochastic gradient descent algorithm, to update the weights of the neural network.
[0186] FIG. 9A-9B illustrate an exemplary convolutional neural network. FIG. 9A illustrates various layers within a CNN. As shown in FIG. 9A, an exemplary CNN used to model image processing can receive input 902 describing the red, green, and blue (RGB) components of an input image. The input 902 can be processed by multiple convolutional layers (e.g., convolutional layer 904, convolutional layer 906). The output from the multiple convolutional layers may optionally be processed by a set of fully connected layers 908. Neurons in a fully connected layer have full connections to all activations in the previous layer, as previously described for a feedforward network. The output from the fully connected layers 908 can be used to generate an output result from the network. The activations within the fully connected layers 908 can be computed using matrix multiplication instead of convolution. Not all CNN implementations make use of fully connected layers 908. For example, in some implementations the convolutional layer 906 can generate output for the CNN.
[0187] The convolutional layers are sparsely connected, which differs from the traditional neural network configuration found in the fully connected layers 908. Traditional neural network layers are fully connected, such that every output unit interacts with every input unit. However, the convolutional layers are sparsely connected because the output of the convolution of a field is input (instead of the respective state value of each of the nodes in the field) to the nodes of the subsequent layer, as illustrated. The kernels associated with the convolutional layers perform convolution operations, the output of which is sent to the next layer. The dimensionality reduction performed within the convolutional layers is one aspect that enables the CNN to scale to process large images.
[0188] FIG. 9B illustrates exemplary computation stages within a convolutional layer of a CNN. Input to a convolutional layer 912 of a CNN can be processed in three stages of a convolutional layer 914. The three stages can include a convolution stage 916, a detector stage 918, and a pooling stage 920. The convolutional layer 914 can then output data to a successive convolutional layer. The final convolutional layer of the network can generate output feature map data or provide input to a fully connected layer, for example, to generate a classification value for the input to the CNN.
[0189] The convolution stage 916 performs several convolutions in parallel to produce a set of linear activations. The convolution stage 916 can include an affine transformation, which is any transformation that can be specified as a linear transformation plus a translation. Affine transformations include rotations, translations, scaling, and combinations of these transformations. The convolution stage computes the output of functions (e.g., neurons) that are
connected to specific regions in the input, which can be determined as the local region associated with the neuron. The neurons compute a dot product between the weights of the neurons and the region in the local input to which the neurons are connected. The output from the convolution stage 916 defines a set of linear activations that are processed by successive stages of the convolutional layer 914.
[0190] The linear activations can be processed by a detector stage 918. In the detector stage 918, each linear activation is processed by a non-linear activation function. The non-linear activation function increases the nonlinear properties of the overall network without affecting the receptive fields of the convolution layer. Several types of non-linear activation functions may be used. One particular type is the rectified linear unit (ReLU), which uses an activation function defined as f(x) = max(0, x), such that the activation is thresholded at zero.
[0191] The pooling stage 920 uses a pooling function that replaces the output of the convolutional layer 906 with a summary statistic of the nearby outputs. The pooling function can be used to introduce translation invariance into the neural network, such that small translations to the input do not change the pooled outputs. Invariance to local translation can be useful in scenarios where the presence of a feature in the input data is more important than the precise location of the feature. Various types of pooling functions can be used during the pooling stage 920, including max pooling, average pooling, and L2-norm pooling. Additionally, some CNN implementations do not include a pooling stage. Instead, such implementations substitute an additional convolution stage having an increased stride relative to previous convolution stages.
[0192] The output from the convolutional layer 914 can then be processed by the next layer 922. The next layer 922 can be an additional convolutional layer or one of the fully connected layers 908. For example, the first convolutional layer 904 of FIG. 9A can output to the second convolutional layer 906, while the second convolutional layer can output to a first layer of the fully connected layers 908.
[0193] FIG. 10 illustrates an exemplary recurrent neural network 1000. In a recurrent neural network (RNN), the previous state of the network influences the output of the current state of the network. RNNs can be built in a variety of ways using a variety of functions. The use of RNNs generally revolves around using mathematical models to predict the future based on a prior sequence of inputs. For example, an RNN may be used to perform statistical language modeling to predict an upcoming word given a previous sequence of words. The illustrated RNN 1000 can be described as having an input layer 1002 that receives an input vector, hidden layers 1004 to implement a recurrent function, a feedback mechanism 1005 to enable a ‘memory’ of previous states, and an output layer 1006 to output a result. The RNN 1000 operates based on time-steps.
The state of the RNN at a given time step is influenced based on the previous time step via the feedback mechanism 1005. For a given time step, the state of the hidden layers 1004 is defined by the previous state and the input at the current time step. An initial input (x1) at a first time step can be processed by the hidden layer 1004. A second input (x2) can be processed by the hidden layer 1004 using state information that is determined during the processing of the initial input (x1). A given state can be computed as st = f(Uxt + Wst-1), where U and W are parameter matrices. The function f is generally a nonlinearity, such as the hyperbolic tangent function (Tanh) or a variant of the rectifier function f(x) = max(0, x). However, the specific mathematical function used in the hidden layers 1004 can vary depending on the specific implementation details of the RNN 1000.
[0194] In addition to the basic CNN and RNN networks described, variations on those networks may be enabled. One example RNN variant is the long short-term memory (LSTM) RNN. LSTM RNNs are capable of learning long-term dependencies that may be necessary for processing longer sequences of language. A variant on the CNN is a convolutional deep belief network, which has a structure similar to a CNN and is trained in a manner similar to a deep belief network. A deep belief network (DBN) is a generative neural network that is composed of multiple layers of stochastic (random) variables. DBNs can be trained layer-by-layer using greedy unsupervised learning. The learned weights of the DBN can then be used to pre-train neural networks by determining an optimal initial set of weights for the neural network.
[0195] FIG. 11 illustrates training and deployment of a deep neural network. Once a given network has been structured for a task, the neural network is trained using a training dataset 1102. Various training frameworks 1104 have been developed to enable hardware acceleration of the training process. For example, the machine learning framework 604 of FIG. 6 may be configured as a training framework 1104. The training framework 1104 can hook into an untrained neural network 1106 and enable the untrained neural net to be trained using the parallel processing resources described herein to generate a trained neural net 1108.
[0196] To start the training process, the initial weights may be chosen randomly or by pre-training using a deep belief network. The training cycle can then be performed in either a supervised or unsupervised manner.
[0197] Supervised learning is a learning method in which training is performed as a mediated operation, such as when the training dataset 1102 includes input paired with the desired output for the input, or where the training dataset includes input having known output and the output of the neural network is manually graded. The network processes the inputs and compares the resulting outputs against a set of expected or desired outputs. Errors are then propagated back through the system. The training framework 1104 can adjust the weights that control
the untrained neural network 1106. The training framework 1104 can provide tools to monitor how well the untrained neural network 1106 is converging towards a model suitable for generating correct answers based on known input data. The training process occurs repeatedly as the weights of the network are adjusted to refine the output generated by the neural network. The training process can continue until the neural network reaches a statistically desired accuracy associated with a trained neural net 1108. The trained neural network 1108 can then be deployed to implement any number of machine learning operations to generate an inference result 1114 based on input of new data 1112.
[0198] Unsupervised learning is a learning method in which the network attempts to train itself using unlabeled data. Thus, for unsupervised learning the training dataset 1102 will include input data without any associated output data. The untrained neural network 1106 can learn groupings within the unlabeled input and can determine how individual inputs are related to the overall dataset. Unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network 1108 capable of performing operations useful in reducing the dimensionality of data. Unsupervised training can also be used to perform anomaly detection, which allows the identification of data points in an input dataset that deviate from the normal patterns of the data.
[0199] Variations on supervised and unsupervised training may also be employed. Semi-supervised learning is a technique in which the training dataset 1102 includes a mix of labeled and unlabeled data of the same distribution. Incremental learning is a variant of supervised learning in which input data is continuously used to further train the model. Incremental learning enables the trained neural network 1108 to adapt to the new data 1112 without forgetting the knowledge instilled within the network during initial training.
[0200] Whether supervised or unsupervised, the training process for particularly deep neural networks may be too computationally intensive for a single compute node. Instead of using a single compute node, a distributed network of computational nodes can be used to accelerate the training process.
[0201] FIG. 12 is a block diagram illustrating distributed learning. Distributed learning is a training model that uses multiple distributed computing nodes to perform supervised or unsupervised training of a neural network. The distributed computational nodes can each include one or more host processors and one or more of the general-purpose processing nodes, such as the highly parallel general-purpose graphics processing unit 700 as in FIG. 7. As illustrated, distributed learning can be performed with model parallelism 1202, data parallelism 1204, or a combination of model and data parallelism 1206.
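As an informal illustration of the data parallelism and parameter averaging discussed in the following paragraphs, the sketch below simulates several worker nodes that each take a gradient step on their own shard of data, after which the per-node parameters are averaged into a global set. This is a minimal, hypothetical example in Python/NumPy and is not part of the illustrated embodiments; the node count, the stand-in linear model, and the learning rate are assumptions made only for the example.

# Hypothetical sketch of data-parallel training with parameter averaging.
# A one-weight linear model stands in for a neural network so that the
# averaging step stays easy to follow.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: y = 3*x + noise, split into one shard per "node".
num_nodes, samples_per_node = 4, 256
x = rng.normal(size=(num_nodes, samples_per_node))
y = 3.0 * x + 0.1 * rng.normal(size=x.shape)

def local_step(w, x_shard, y_shard, lr=0.05):
    """One gradient-descent step on a node's local shard (mean squared error)."""
    grad = np.mean(2.0 * (w * x_shard - y_shard) * x_shard)
    return w - lr * grad

# Every node starts each round from the same global parameter.
global_w = 0.0
for epoch in range(50):
    # Each node trains independently on its own portion of the data...
    local_ws = [local_step(global_w, x[i], y[i]) for i in range(num_nodes)]
    # ...and a central parameter server sets the global parameter to the average.
    global_w = float(np.mean(local_ws))

print(f"averaged weight after training: {global_w:.3f}")  # approaches 3.0

Update-based data parallelism, described below, would instead transfer the per-node updates (or gradients) to the parameter server rather than the parameters themselves, and can also be performed in a decentralized manner.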
[0202] In model parallelism 1202, different computational nodes in a distributed system can perform training computations for different parts of a single network. For example, each layer of a neural network can be trained by a different processing node of the distributed system. The benefits of model parallelism include the ability to scale to particularly large models. Splitting the computations associated with different layers of the neural network enables the training of very large neural networks in which the weights of all layers would not fit into the memory of a single computational node. In some instances, model parallelism can be particularly useful in performing unsupervised training of large neural networks.
[0203] In data parallelism 1204, the different nodes of the distributed network have a complete instance of the model and each node receives a different portion of the data. The results from the different nodes are then combined. While different approaches to data parallelism are possible, data parallel training approaches all require a technique of combining results and synchronizing the model parameters between each node. Exemplary approaches to combining data include parameter averaging and update-based data parallelism. Parameter averaging trains each node on a subset of the training data and sets the global parameters (e.g., weights, biases) to the average of the parameters from each node. Parameter averaging uses a central parameter server that maintains the parameter data. Update-based data parallelism is similar to parameter averaging except that instead of transferring parameters from the nodes to the parameter server, the updates to the model are transferred. Additionally, update-based data parallelism can be performed in a decentralized manner, where the updates are compressed and transferred between nodes.
[0204] Combined model and data parallelism 1206 can be implemented, for example, in a distributed system in which each computational node includes multiple GPUs. Each node can have a complete instance of the model, with separate GPUs within each node used to train different portions of the model.
[0205] Distributed training has increased overhead relative to training on a single machine. However, the parallel processors and GPGPUs described herein can each implement various techniques to reduce the overhead of distributed training, including techniques to enable high-bandwidth GPU-to-GPU data transfer and accelerated remote data synchronization.
Exemplary Machine Learning Applications
[0206] Machine learning can be applied to solve a variety of technological problems, including but not limited to computer vision, autonomous driving and navigation, speech recognition, and language processing. Computer vision has traditionally been one of the most active research areas for machine learning applications. Applications of computer vision range
from reproducing human visual abilities, such as recognizing faces, to creating new categories of visual abilities. For example, computer vision applications can be configured to recognize sound waves from the vibrations induced in objects visible in a video. Parallel processor accelerated machine learning enables computer vision applications to be trained using significantly larger training datasets than previously feasible and enables inferencing systems to be deployed using low power parallel processors.
[0207] Parallel processor accelerated machine learning has autonomous driving applications including lane and road sign recognition, obstacle avoidance, navigation, and driving control. Accelerated machine learning techniques can be used to train driving models based on datasets that define the appropriate responses to specific training input. The parallel processors described herein can enable rapid training of the increasingly complex neural networks used for autonomous driving solutions and enable the deployment of low power inferencing processors in a mobile platform suitable for integration into autonomous vehicles.
[0208] Parallel processor accelerated deep neural networks have enabled machine learning approaches to automatic speech recognition (ASR). ASR includes the creation of a function that computes the most probable linguistic sequence given an input acoustic sequence. Accelerated machine learning using deep neural networks has enabled the replacement of the hidden Markov models (HMMs) and Gaussian mixture models (GMMs) previously used for ASR.
[0209] Parallel processor accelerated machine learning can also be used to accelerate natural language processing. Automatic learning procedures can make use of statistical inference algorithms to produce models that are robust to erroneous or unfamiliar input. Exemplary natural language processor applications include automatic machine translation between human languages.
[0210] The parallel processing platforms used for machine learning can be divided into training platforms and deployment platforms. Training platforms are generally highly parallel and include optimizations to accelerate multi-GPU single node training and multi-node, multi-GPU training. Exemplary parallel processors suited for training include the general-purpose graphics processing unit 700 of FIG. 7 and the multi-GPU computing system 800 of FIG. 8. In contrast, deployed machine learning platforms generally include lower power parallel processors suitable for use in products such as cameras, autonomous robots, and autonomous vehicles.
[0211] FIG. 13 illustrates an exemplary inferencing system on a chip (SOC) 1300 suitable for performing inferencing using a trained model. The SOC 1300 can integrate processing components including a media processor 1302, a vision processor 1304, a GPGPU 1306, and a multi-core processor 1308. The GPGPU 1306 may be a GPGPU as described herein, such as the
GPGPU 700, and the multi-core processor 1308 may be a multi-core processor described herein, such as the multi-core processors 405-406. The SOC 1300 can additionally include on-chip memory 1305 that can enable a shared on-chip data pool that is accessible by each of the processing components. The processing components can be optimized for low power operation to enable deployment to a variety of machine learning platforms, including autonomous vehicles and autonomous robots. For example, one implementation of the SOC 1300 can be used as a portion of the main control system for an autonomous vehicle. Where the SOC 1300 is configured for use in autonomous vehicles, the SOC is designed and configured for compliance with the relevant functional safety standards of the deployment jurisdiction.
[0212] During operation, the media processor 1302 and vision processor 1304 can work in concert to accelerate computer vision operations. The media processor 1302 can enable low-latency decode of multiple high-resolution (e.g., 4K, 8K) video streams. The decoded video streams can be written to a buffer in the on-chip memory 1305. The vision processor 1304 can then parse the decoded video and perform preliminary processing operations on the frames of the decoded video in preparation for processing the frames using a trained image recognition model. For example, the vision processor 1304 can accelerate convolution operations for a CNN that is used to perform image recognition on the high-resolution video data, while back-end model computations are performed by the GPGPU 1306.
[0213] The multi-core processor 1308 can include control logic to assist with sequencing and synchronization of data transfers and shared memory operations performed by the media processor 1302 and the vision processor 1304. The multi-core processor 1308 can also function as an application processor to execute software applications that can make use of the inferencing compute capability of the GPGPU 1306. For example, at least a portion of the navigation and driving logic can be implemented in software executing on the multi-core processor 1308. Such software can directly issue computational workloads to the GPGPU 1306 or the computational workloads can be issued to the multi-core processor 1308, which can offload at least a portion of those operations to the GPGPU 1306.
[0214] The GPGPU 1306 can include compute clusters such as a low power configuration of the processing clusters 706A-706H within the general-purpose graphics processing unit 700. The compute clusters within the GPGPU 1306 can support instructions that are specifically optimized to perform inferencing computations on a trained neural network. For example, the GPGPU 1306 can support instructions to perform low precision computations such as 8-bit and 4-bit integer vector operations.
Additional System Overview
[0215] FIG. 14 is a block diagram of a processing system 1400. The elements of FIG. 14 having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. System 1400 may be used in a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 1402 or processor cores 1407. The system 1400 may be a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices such as within Internet-of-things (IoT) devices with wired or wireless connectivity to a local or wide area network.[0216] The system 1400 may be a processing system having components that correspond with those of FIG. 1. For example, in different configurations, processor(s) 1402 or processor core(s) 1407 may correspond with processor(s) 102 of FIG. 1. Graphics processor(s) 1408 may correspond with parallel processor(s) 112 of FIG. 1. External graphics processor 1418 may be one of the add-in device(s) 120 of FIG. 1.[0217] The system 1400 can include, couple with, or be integrated within: a server-based gaming platform; a game console, including a game and media console; a mobile gaming console, a handheld game console, or an online game console. The system 1400 may be part of a mobile phone, smart phone, tablet computing device or mobile Internet-connected device such as a laptop with low internal storage capacity. Processing system 1400 can also include, couple with, or be integrated within: a wearable device, such as a smart watch wearable device; smart eyewear or clothing enhanced with augmented reality (AR) or virtual reality (VR) features to provide visual, audio or tactile outputs to supplement real world visual, audio or tactile experiences or otherwise provide text, audio, graphics, video, holographic images or video, or tactile feedback; other augmented reality (AR) device; or other virtual reality (VR) device. The processing system 1400 may include or be part of a television or set top box device. The system 1400 can include, couple with, or be integrated within a self-driving vehicle such as a bus, tractor trailer, car, motor or electric power cycle, plane or glider (or any combination thereof). The self driving vehicle may use system 1400 to process the environment sensed around the vehicle.[0218] The one or more processors 1402 may include one or more processor cores 1407 to process instructions which, when executed, perform operations for system or user software. The least one of the one or more processor cores 1407 may be configured to process a specific instruction set 1409. The instruction set 1409 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). One or more processor cores 1407 may process a different instruction set 1409,
which may include instructions to facilitate the emulation of other instruction sets. Processor core 1407 may also include other processing devices, such as a Digital Signal Processor (DSP).[0219] The processor 1402 may include cache memory 1404. Depending on thearchitecture, the processor 1402 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 1402. In some embodiments, the processor 1402 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 1407 using known cache coherency techniques. A register file 1406 can be additionally included in processor 1402 and may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 1402.[0220] The one or more processor(s) 1402 may be coupled with one or more interface bus(es) 1410 to transmit communication signals such as address, data, or control signals between processor 1402 and other components in the system 1400. The interface bus 1410, in one of these embodiments, can be a processor bus, such as a version of the Direct Media Interface (DMI) bus. However, processor busses are not limited to the DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI express), memory busses, or other types of interface busses. For example, the processor(s) 1402 may include an integrated memory controller 1416 and a platform controller hub 1430. The memory controller 1416 facilitates communication between a memory device and other components of the system 1400, while the platform controller hub (PCH) 1430 provides connections to I/O devices via a local I/O bus.[0221] The memory device 1420 can be a dynamic random-access memory (DRAM) device, a static random-access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. The memory device 1420 can, for example, operate as system memory for the system 1400, to store data 1422 and instructions 1421 for use when the one or more processors 1402 executes an application or process. Memory controller 1416 also couples with an optional external graphics processor 1418, which may communicate with the one or more graphics processors 1408 in processors 1402 to perform graphics and media operations. In some embodiments, graphics, media, and or compute operations may be assisted by an accelerator 1412 which is a coprocessor that can be configured to perform a specialized set of graphics, media, or compute operations.For example, the accelerator 1412 may be a matrix multiplication accelerator used to optimize machine learning or compute operations. The accelerator 1412 can be a ray-tracing accelerator
that can be used to perform ray-tracing operations in concert with the graphics processor 1408.In one embodiment, an external accelerator 1419 may be used in place of or in concert with the accelerator 1412.[0222] A display device 1411 may be provided that can connect to the processor(s) 1402. The display device 1411 can be one or more of an internal display device, as in a mobile electronic device or a laptop device or an external display device attached via a display interface (e.g., DisplayPort, etc.). The display device 1411 can be a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications.[0223] The platform controller hub 1430 may enable peripherals to connect to memory device 1420 and processor 1402 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 1446, a network controller 1434, a firmware interface 1428, a wireless transceiver 1426, touch sensors 1425, a data storage device 1424 (e.g., non-volatile memory, volatile memory, hard disk drive, flash memory, NAND, 3D NAND, 3D XPoint, etc.). The data storage device 1424 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI express). The touch sensors 1425 can include touch screen sensors, pressure sensors, or fingerprint sensors. The wireless transceiver 1426 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, 5G, or Long-Term Evolution (LTE) transceiver. The firmware interface 1428 enables communication with system firmware, and can be, for example, a unified extensible firmware interface (UEFI). The network controller 1434 can enable a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) couples with the interface bus 1410. The audio controller 1446 may be a multi-channel high definition audio controller. In some of these embodiments the system 1400 includes an optional legacy I/O controller 1440 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. The platform controller hub 1430 can also connect to one or more Universal Serial Bus (USB) controllers 1442 connect input devices, such as keyboard and mouse 1443 combinations, a camera 1444, or other USB input devices.[0224] It will be appreciated that the system 1400 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, an instance of the memory controller 1416 and platform controller hub 1430 may be integrated into a discreet external graphics processor, such as the external graphics processor 1418. The platform controller hub 1430 and/or memory controller 1416 may be external to the one or more processor(s) 1402. For example, the system 1400 can include an external memory controller 1416 and platform controller hub 1430, which may be configured as a memory
controller hub and peripheral controller hub within a system chipset that is in communication with the processor(s) 1402.[0225] For example, circuit boards (“sleds”) can be used on which components such as CPUs, memory, and other components are placed are designed for increased thermal performance. Processing components such as the processors may be located on a top side of a sled while near memory, such as DIMMs, are located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in a rack, thereby enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.[0226] A data center can utilize a single network architecture (“fabric”) that supports multiple other network architectures including Ethernet and Omni-Path. The sleds can be coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center may, in use, pool resources, such as memory, accelerators (e.g., GPUs, graphics accelerators, FPGAs, ASICs, neural network and/or artificial intelligence accelerators, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as needed basis, enabling the compute resources to access the pooled resources as if they were local.[0227] A power supply or source can provide voltage and/or current to system 1400 or any component or system described herein. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be renewable energy (e.g., solar power) power source. In one example, the power source includes a DC power source, such as an external AC to DC converter. A power source or power supply may also include wireless charging hardware to charge via proximity to a charging field. The power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source.[0228] FIG. 15A-15C illustrate computing systems and graphics processors. The elements of FIG. 15A-15C having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to
that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such.
[0229] FIG. 15A is a block diagram of a processor 1500, which may be a variant of one of the processors 1402 and may be used in place of one of those. Therefore, the disclosure of any features in combination with the processor 1500 herein also discloses a corresponding combination with the processor(s) 1402, but is not limited to such. The processor 1500 may have one or more processor cores 1502A-1502N, an integrated memory controller 1514, and an integrated graphics processor 1508. Where an integrated graphics processor 1508 is excluded, the system that includes the processor will include a graphics processor device within a system chipset or coupled via a system bus. Processor 1500 can include additional cores up to and including additional core 1502N represented by the dashed line boxes. Each of processor cores 1502A-1502N includes one or more internal cache units 1504A-1504N. In some embodiments, each processor core 1502A-1502N also has access to one or more shared cache units 1506. The internal cache units 1504A-1504N and shared cache units 1506 represent a cache memory hierarchy within the processor 1500. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 1506 and 1504A-1504N.
[0230] The processor 1500 may also include a set of one or more bus controller units 1516 and a system agent core 1510. The one or more bus controller units 1516 manage a set of peripheral buses, such as one or more PCI or PCI express busses. System agent core 1510 provides management functionality for the various processor components. The system agent core 1510 may include one or more integrated memory controllers 1514 to manage access to various external memory devices (not shown).
[0231] For example, one or more of the processor cores 1502A-1502N may include support for simultaneous multi-threading. The system agent core 1510 includes components for coordinating and operating cores 1502A-1502N during multi-threaded processing. System agent core 1510 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores 1502A-1502N and graphics processor 1508.
[0232] The processor 1500 may additionally include graphics processor 1508 to execute graphics processing operations. In some of these embodiments, the graphics processor 1508 couples with the set of shared cache units 1506, and the system agent core 1510, including the
one or more integrated memory controllers 1514. The system agent core 1510 may also include a display controller 1511 to drive graphics processor output to one or more coupled displays.The display controller 1511 may also be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 1508.[0233] A ring-based interconnect unit 1512 may be used to couple the internal components of the processor 1500. However, an alternative interconnect unit may be used, such as a point- to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some of these embodiments with a ring-based interconnect 1512, the graphics processor 1508 couples with the ring-based interconnect 1512 via an I/O link 1513.[0234] The exemplary I/O link 1513 represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 1518, such as an eDRAM module. Optionally, each of the processor cores 1502A-1502N and graphics processor 1508 can use embedded memory modules 1518 as a shared Last Level Cache.[0235] The processor cores 1502A-1502N may, for example, be homogenous cores executing the same instruction set architecture. Alternatively, the processor cores 1502A-1502N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 1502A-1502N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. The processor cores 1502A-1502N may be heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. As another example, the processor cores 1502A-1502N are heterogeneous in terms of computational capability. Additionally, processor 1500 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.[0236] FIG. 15B is a block diagram of hardware logic of a graphics processor core 1519, according to some embodiments described herein. The graphics processor core 1519, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. The graphics processor core 1519 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. Each graphics processor core 1519 can include a fixed function block 1530 coupled with multiple sub-cores 1521A-1521F, also referred to as sub slices, that include modular blocks of general-purpose and fixed function logic.[0237] The fixed function block 1530 may include a geometry/fixed function pipeline 1531 that can be shared by all sub-cores in the graphics processor core 1519, for example, in lower
performance and/or lower power graphics processor implementations. The geometry/fixed function pipeline 1531 may include a 3D fixed function pipeline (e.g., 3D pipeline 1612 as in FIG. 16A described below) a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers (e.g., unified return buffer 1718 in FIG. 17, as described below).[0238] The fixed function block 1530 may also include a graphics SoC interface 1532, a graphics microcontroller 1533, and a media pipeline 1534. The graphics SoC interface 1532 provides an interface between the graphics processor core 1519 and other processor cores within a system on a chip integrated circuit. The graphics microcontroller 1533 is a programmable sub processor that is configurable to manage various functions of the graphics processor core 1519, including thread dispatch, scheduling, and pre-emption. The media pipeline 1534 (e.g., media pipeline 1616 of FIG. 16A and FIG. 17) includes logic to facilitate the decoding, encoding, pre processing, and/or post-processing of multimedia data, including image and video data. The media pipeline 1534 implement media operations via requests to compute or sampling logic within the sub-cores 1521-1521F.[0239] The SoC interface 1532 may enable the graphics processor core 1519 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, the system RAM, and/or embedded on-chip or on-package DRAM. The SoC interface 1532 can also enable communication with fixed function devices within the SoC, such as camera imaging pipelines, and enables the use of and/or implements global memory atomics that may be shared between the graphics processor core 1519 and CPUs within the SoC. The SoC interface 1532 can also implement power management controls for the graphics processor core 1519 and enable an interface between a clock domain of the graphic core 1519 and other clock domains within the SoC. Optionally, the SoC interface 1532 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. The commands and instructions can be dispatched to the media pipeline 1534, when media operations are to be performed, or a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline 1531, geometry and fixed function pipeline 1537) when graphics processing operations are to be performed.[0240] The graphics microcontroller 1533 can be configured to perform various scheduling and management tasks for the graphics processor core 1519. In one configuration the graphics microcontroller 1533 can, for example, perform graphics and/or compute workload scheduling on the various graphics parallel engines within execution unit (EU) arrays 1522A-1522F, 1524A-
1524F within the sub-cores 1521A-1521F. In this workload scheduling, host software executing on a CPU core of an SoC including the graphics processor core 1519 can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. Optionally, the graphics microcontroller 1533 can also facilitate low-power or idle states for the graphics processor core 1519, providing the graphics processor core 1519 with the ability to save and restore registers within the graphics processor core 1519 across low-power state transitions independently from the operating system and/or graphics driver software on the system.
[0241] The graphics processor core 1519 may have more or fewer than the illustrated sub-cores 1521A-1521F, up to N modular sub-cores. For each set of N sub-cores, the graphics processor core 1519 can also include shared function logic 1535, shared and/or cache memory 1536, a geometry/fixed function pipeline 1537, as well as additional fixed function logic 1538 to accelerate various graphics and compute processing operations. The shared function logic 1535 can include logic units associated with the shared function logic 1720 of FIG. 17 (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each of the N sub-cores within the graphics processor core 1519. The shared and/or cache memory 1536 can be a last-level cache for the set of N sub-cores 1521A-1521F within the graphics processor core 1519, and can also serve as shared memory that is accessible by multiple sub-cores. The geometry/fixed function pipeline 1537 can be included instead of the geometry/fixed function pipeline 1531 within the fixed function block 1530 and can include the same or similar logic units.
[0242] The graphics processor core 1519 may include additional fixed function logic 1538 that can include various fixed function acceleration logic for use by the graphics processor core 1519. Optionally, the additional fixed function logic 1538 includes an additional geometry pipeline for use in position-only shading. In position-only shading, two geometry pipelines exist: the full geometry pipeline within the geometry/fixed function pipeline 1538, 1531, and a cull pipeline, which is an additional geometry pipeline that may be included within the additional fixed function logic 1538. For example, the cull pipeline may be a trimmed-down version of the full geometry pipeline. The full pipeline and the cull pipeline can execute different instances of the same application, each instance having a separate context. Position-only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example, the cull pipeline logic within the additional fixed function logic 1538 can execute position shaders in parallel with the main application and generally generates critical results
faster than the full pipeline, as the cull pipeline fetches and shades only the position attribute of the vertices, without performing rasterization and rendering of the pixels to the frame buffer. The cull pipeline can use the generated critical results to compute visibility information for all the triangles without regard to whether those triangles are culled. The full pipeline (which in this instance may be referred to as a replay pipeline) can consume the visibility information to skip the culled triangles and shade only the visible triangles that are finally passed to the rasterization phase.
[0243] Optionally, the additional fixed function logic 1538 can also include machine learning acceleration logic, such as fixed function matrix multiplication logic, for implementations including optimizations for machine learning training or inferencing.
[0244] Within each graphics sub-core 1521A-1521F, a set of execution resources is included that may be used to perform graphics, media, and compute operations in response to requests by graphics pipeline, media pipeline, or shader programs. The graphics sub-cores 1521A-1521F include multiple EU arrays 1522A-1522F, 1524A-1524F, thread dispatch and inter-thread communication (TD/IC) logic 1523A-1523F, a 3D (e.g., texture) sampler 1525A-1525F, a media sampler 1506A-1506F, a shader processor 1527A-1527F, and shared local memory (SLM) 1528A-1528F. The EU arrays 1522A-1522F, 1524A-1524F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs. The TD/IC logic 1523A-1523F performs local thread dispatch and thread control operations for the execution units within a sub-core and facilitates communication between threads executing on the execution units of the sub-core. The 3D sampler 1525A-1525F can read texture or other 3D graphics related data into memory. The 3D sampler can read texture data differently based on a configured sample state and the texture format associated with a given texture. The media sampler 1506A-1506F can perform similar read operations based on the type and format associated with media data. For example, each graphics sub-core 1521A-1521F can alternately include a unified 3D and media sampler. Threads executing on the execution units within each of the sub-cores 1521A-1521F can make use of shared local memory 1528A-1528F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory.
[0245] FIG. 15C is a block diagram of a general-purpose graphics processing unit (GPGPU) 1570 that can be configured as a graphics processor, e.g., the graphics processor 1508, and/or compute accelerator, according to embodiments described herein. The GPGPU 1570 can interconnect with host processors (e.g., one or more CPU(s) 1546) and memory 1571, 1572 via one or more system and/or memory busses. The memory 1571 may be a system memory that
can be shared with the one or more CPU(s) 1546, while memory 1572 is device memory that is dedicated to the GPGPU 1570. For example, components within the GPGPU 1570 and device memory 1572 may be mapped into memory addresses that are accessible to the one or more CPU(s) 1546. Access to memory 1571 and 1572 may be facilitated via a memory controller 1568. The memory controller 1568 may include an internal direct memory access (DMA) controller 1569 or can include logic to perform operations that would otherwise be performed by a DMA controller.[0246] The GPGPU 1570 includes multiple cache memories, including an L2 cache 1553, LI cache 1554, an instruction cache 1555, and shared memory 1556, at least a portion of which may also be partitioned as a cache memory. The GPGPU 1570 also includes multiple compute units 1560A-1560N. Each compute unit 1560A-1560N includes a set of vector registers 1561, scalar registers 1562, vector logic units 1563, and scalar logic units 1564. The compute units 1560A- 1560N can also include local shared memory 1565 and a program counter 1566. The compute units 1560A-1560N can couple with a constant cache 1567, which can be used to store constant data, which is data that will not change during the run of kernel or shader program that executes on the GPGPU 1570. The constant cache 1567 may be a scalar data cache and cached data can be fetched directly into the scalar registers 1562.[0247] During operation, the one or more CPU(s) 1546 can write commands into registers or memory in the GPGPU 1570 that has been mapped into an accessible address space. The command processors 1557 can read the commands from registers or memory and determine how those commands will be processed within the GPGPU 1570. A thread dispatcher 1558 can then be used to dispatch threads to the compute units 1560A-1560N to perform those commands.Each compute unit 1560A-1560N can execute threads independently of the other compute units. Additionally, each compute unit 1560A-1560N can be independently configured for conditional computation and can conditionally output the results of computation to memory. The command processors 1557 can interrupt the one or more CPU(s) 1546 when the submitted commands are complete.[0248] FIG. 16A-16C illustrate block diagrams of additional graphics processor and compute accelerator architectures provided by embodiments described herein, e.g. in accordance with Fig. 15A-15C. The elements of FIG. 16A-16C having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such.[0249] Fig. 16A is a block diagram of a graphics processor 1600, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing
cores, or other semiconductor devices such as, but not limited to, memory devices or network interfaces. The graphics processor 1600 may be a variant of the graphics processor 1508 and may be used in place of the graphics processor 1508. Therefore, the disclosure of any features in combination with the graphics processor 1508 herein also discloses a corresponding combination with the graphics processor 1600, but is not limited to such. The graphics processor may communicate via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. Graphics processor 1600 may include a memory interface 1614 to access memory. Memory interface 1614 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.[0250] Optionally, graphics processor 1600 also includes a display controller 1602 to drive display output data to a display device 1618. Display controller 1602 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. The display device 1618 can be an internal or external display device. In one embodiment the display device 1618 is a head mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. Graphics processor 1600 may include a video codec engine 1606 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, H.265/HEVC, Alliance for Open Media (AOMedia) VP8, VP9, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats.[0251] Graphics processor 1600 may include a block image transfer (BLIT) engine 1604 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, alternatively, 2D graphics operations may be performed using one or more components of graphics processing engine (GPE) 1610. In some embodiments, GPE 1610 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.[0252] GPE 1610 may include a 3D pipeline 1612 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 1612 includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media sub-system 1615. While 3D pipeline 1612 can be used to perform media operations, an embodiment of GPE 1610 also includes a media pipeline 1616 that is specifically used to perform media operations, such as video post-processing and image enhancement.
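As an illustration of the memory mapped I/O register interface mentioned above for the graphics processor 1600, the following C sketch shows how driver software might perform volatile register reads and writes. The register names, offsets, and the manner in which the register aperture is mapped are hypothetical placeholders for illustration only and do not represent the device's actual programming interface.

    #include <stdint.h>

    /* Hypothetical register offsets; real offsets are device-specific. */
    #define REG_PIPELINE_CTRL   0x0100u
    #define REG_PIPELINE_STATUS 0x0104u

    /* Base of the memory-mapped register aperture, assumed to be provided
     * by the platform or bus driver when the device is mapped. */
    static volatile uint32_t *mmio_base;

    /* Volatile accesses keep the compiler from reordering or eliding I/O. */
    static inline void reg_write(uint32_t offset, uint32_t value)
    {
        mmio_base[offset / sizeof(uint32_t)] = value;
    }

    static inline uint32_t reg_read(uint32_t offset)
    {
        return mmio_base[offset / sizeof(uint32_t)];
    }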
[0253] Media pipeline 1616 may include fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de interlacing, and video encode acceleration in place of, or on behalf of video codec engine 1606. Media pipeline 1616 may additionally include a thread spawning unit to spawn threads for execution on 3D/Media sub-system 1615. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media sub-system 1615.[0254] The 3D/Media subsystem 1615 may include logic for executing threads spawned by 3D pipeline 1612 and media pipeline 1616. The pipelines may send thread execution requests to 3D/Media subsystem 1615, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. The 3D/Media subsystem 1615 may include one or more internal caches for thread instructions and data. Additionally, the 3D/Media subsystem 1615 may also include shared memory, including registers and addressable memory, to share data between threads and to store output data.[0255] Fig. 16B illustrates a graphics processor 1620, being a variant of the graphics processor 1600 and may be used in place of the graphics processor 1600 and vice versa.Therefore, the disclosure of any features in combination with the graphics processor 1600 herein also discloses a corresponding combination with the graphics processor 1620, but is not limited to such. The graphics processor 1620 has a tiled architecture, according to embodiments described herein. The graphics processor 1620 may include a graphics processing engine cluster 1622 having multiple instances of the graphics processing engine 1610 of Fig. 16A within a graphics engine tile 1610A-1610D. Each graphics engine tile 1610A-1610D can beinterconnected via a set of tile interconnects 1623A-1623F. Each graphics engine tile 1610A- 1610D can also be connected to a memory module or memory device 1626A-1626D via memory interconnects 1625A-1625D. The memory devices 1626A-1626D can use any graphics memory technology. For example, the memory devices 1626A-1626D may be graphics double data rate (GDDR) memory. The memory devices 1626A-1626D may be high-bandwidth memory (HBM) modules that can be on-die with their respective graphics engine tile 1610A-1610D. The memory devices 1626A-1626D may be stacked memory devices that can be stacked on top of their respective graphics engine tile 1610A-1610D. Each graphics engine tile 1610A-1610D and associated memory 1626A-1626D may reside on separate chiplets, which are bonded to a base die or base substrate, as described in further detail in FIG. 24B-24D.[0256] The graphics processor 1620 may be configured with a non-uniform memory access (NUMA) system in which memory devices 1626A-1626D are coupled with associated graphics
engine tiles 1610A-1610D. A given memory device may be accessed by graphics engine tiles other than the tile to which it is directly connected. However, access latency to the memory devices 1626A-1626D may be lowest when accessing a local tile. In one embodiment, a cache coherent NUMA (ccNUMA) system is enabled that uses the tile interconnects 1623A-1623F to enable communication between cache controllers within the graphics engine tiles 1610A-1610D to keep a consistent memory image when more than one cache stores the same memory location.[0257] The graphics processing engine cluster 1622 can connect with an on-chip or on- package fabric interconnect 1624. The fabric interconnect 1624 can enable communication between graphics engine tiles 1610A-1610D and components such as the video codec 1606 and one or more copy engines 1604. The copy engines 1604 can be used to move data out of, into, and between the memory devices 1626A-1626D and memory that is external to the graphics processor 1620 (e.g., system memory). The fabric interconnect 1624 can also be used to interconnect the graphics engine tiles 1610A-1610D. The graphics processor 1620 may optionally include a display controller 1602 to enable a connection with an external display device 1618. The graphics processor may also be configured as a graphics or compute accelerator. In the accelerator configuration, the display controller 1602 and display device 1618 may be omitted.[0258] The graphics processor 1620 can connect to a host system via a host interface 1628. The host interface 1628 can enable communication between the graphics processor 1620, system memory, and/or other system components. The host interface 1628 can be, for example, a PCI express bus or another type of host system interface.[0259] Fig. 16C illustrates a compute accelerator 1630, according to embodiments described herein. The compute accelerator 1630 can include architectural similarities with the graphics processor 1620 of Fig. 16B and is optimized for compute acceleration. A compute engine cluster 1632 can include a set of compute engine tiles 1640A-1640D that include execution logic that is optimized for parallel or vector-based general-purpose compute operations. The compute engine tiles 1640A-1640D may not include fixed function graphics processing logic, although in some embodiments one or more of the compute engine tiles 1640A-1640D can include logic to perform media acceleration. The compute engine tiles 1640A-1640D can connect to memory 1626A-1626D via memory interconnects 1625A-1625D. The memory 1626A-1626D and memory interconnects 1625A-1625D may be similar technology as in graphics processor 1620, or can be different. The graphics compute engine tiles 1640A-1640D can also be interconnected via a set of tile interconnects 1623A-1623F and may be connected with and/or interconnected by a fabric interconnect 1624. In one embodiment the compute accelerator 1630 includes a large L3 cache 1636 that can be configured as a device-wide cache. The compute accelerator 1630 can
also connect to a host processor and memory via a host interface 1628 in a similar manner as the graphics processor 1620 of Fig. 16B.
Graphics Processing Engine
[0260] FIG. 17 is a block diagram of a graphics processing engine 1710 of a graphics processor in accordance with some embodiments. The graphics processing engine (GPE) 1710 may be a version of the GPE 1610 shown in FIG. 16A, and may also represent a graphics engine tile 1610A-1610D of FIG. 16B. The elements of FIG. 17 having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. For example, the 3D pipeline 1612 and media pipeline 1616 of FIG. 16A are also illustrated in Fig. 17. The media pipeline 1616 is optional in some embodiments of the GPE 1710 and may not be explicitly included within the GPE 1710. For example, in at least one embodiment, a separate media and/or image processor is coupled to the GPE 1710.
[0261] GPE 1710 may couple with or include a command streamer 1703, which provides a command stream to the 3D pipeline 1612 and/or media pipeline 1616. Alternatively or additionally, the command streamer 1703 may be directly coupled to a unified return buffer 1718. The unified return buffer 1718 may be communicatively coupled to a graphics core array 1714. Optionally, the command streamer 1703 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. The command streamer 1703 may receive commands from the memory and send the commands to the 3D pipeline 1612 and/or media pipeline 1616. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 1612 and media pipeline 1616. The ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline 1612 can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline 1612 and/or image data and memory objects for the media pipeline 1616. The 3D pipeline 1612 and media pipeline 1616 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to the graphics core array 1714. The graphics core array 1714 may include one or more blocks of graphics cores (e.g., graphics core(s) 1715A, graphics core(s) 1715B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics-specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic.
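Paragraph [0261] describes the command streamer 1703 fetching directives from a ring buffer that stores commands for the 3D pipeline 1612 and the media pipeline 1616. The C sketch below is a minimal software model of that producer/consumer arrangement, assuming a fixed-size ring and a simplified command record; the structure layout, slot count, and function names are illustrative assumptions and do not reflect the actual command encoding.

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative command record; a real command stream encodes client,
     * opcode, and payload as packed dwords rather than a struct like this. */
    struct gfx_cmd {
        uint32_t opcode;
        uint32_t data;
    };

    #define RING_SLOTS 256u  /* power of two so the index math wraps cheaply */

    struct cmd_ring {
        struct gfx_cmd slots[RING_SLOTS];
        uint32_t head;  /* next slot the command streamer will fetch */
        uint32_t tail;  /* next slot software will write */
    };

    /* Software side: append a command if the ring is not full. */
    static bool ring_submit(struct cmd_ring *r, struct gfx_cmd c)
    {
        if (r->tail - r->head == RING_SLOTS)
            return false;                      /* ring full */
        r->slots[r->tail % RING_SLOTS] = c;
        r->tail++;
        return true;
    }

    /* Command streamer side: fetch the next pending command, if any. */
    static bool ring_fetch(struct cmd_ring *r, struct gfx_cmd *out)
    {
        if (r->head == r->tail)
            return false;                      /* ring empty */
        *out = r->slots[r->head % RING_SLOTS];
        r->head++;
        return true;
    }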
[0262] In various embodiments, the 3D pipeline 1612 can include fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array 1714. The graphics core array 1714 provides a unified block of execution resources for use in processing these shader programs. Multi-purpose execution logic (e.g., execution units) within the graphics core(s) 1715A-1715B of the graphics core array 1714 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.
[0263] The graphics core array 1714 may include execution logic to perform media functions, such as video and/or image processing. The execution units may include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with general-purpose logic within the processor core(s) 1407 of FIG. 14 or core 1502A-1502N as in FIG. 15A.
[0264] Threads executing on the graphics core array 1714 can output data to memory in a unified return buffer (URB) 1718. The URB 1718 can store data for multiple threads. The URB 1718 may be used to send data between different threads executing on the graphics core array 1714. The URB 1718 may additionally be used for synchronization between threads on the graphics core array 1714 and fixed function logic within the shared function logic 1720.
[0265] Optionally, the graphics core array 1714 may be scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of GPE 1710. The execution resources may be dynamically scalable, such that execution resources may be enabled or disabled as needed.
[0266] The graphics core array 1714 couples with shared function logic 1720 that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic 1720 are hardware logic units that provide specialized supplemental functionality to the graphics core array 1714. In various embodiments, shared function logic 1720 includes but is not limited to sampler 1721, math 1722, and inter-thread communication (ITC) 1723 logic. Additionally, one or more cache(s) 1725 within the shared function logic 1720 may be implemented.
[0267] A shared function is implemented at least in a case where the demand for a given specialized function is insufficient for inclusion within the graphics core array 1714. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in the
shared function logic 1720 and shared among the execution resources within the graphics core array 1714. The precise set of functions that are shared between the graphics core array 1714 and included within the graphics core array 1714 varies across embodiments. Specific shared functions within the shared function logic 1720 that are used extensively by the graphics core array 1714 may be included within shared function logic 1716 within the graphics core array 1714. Optionally, the shared function logic 1716 within the graphics core array 1714 can include some or all logic within the shared function logic 1720. All logic elements within the shared function logic 1720 may be duplicated within the shared function logic 1716 of the graphics core array 1714. Alternatively, the shared function logic 1720 is excluded in favor of the shared function logic 1716 within the graphics core array 1714.Execution Units[0268] FIG. 18A-18B illustrate thread execution logic 1800 including an array of processing elements employed in a graphics processor core according to embodiments described herein.The elements of FIG. 18A-18B having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. FIG. 18A-18B illustrates an overview of thread execution logic 1800, which may be representative of hardware logic illustrated with each sub-core 1521A-1521F of FIG. 15B. FIG. 18A is representative of an execution unit within a general-purpose graphics processor, while FIG. 18B is representative of an execution unit that may be used within a compute accelerator.[0269] As illustrated in FIG. 18 A, thread execution logic 1800 may include a shader processor 1802, a thread dispatcher 1804, instruction cache 1806, a scalable execution unit array including a plurality of execution units 1808A-1808N, a sampler 1810, shared local memory 1811, a data cache 1812, and a data port 1814. Optionally, the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution units 1808A, 1808B, 1808C, 1808D, through 1808N-1 and 1808N) based on the computational requirements of a workload. The included components may be interconnected via an interconnect fabric that links to each of the components. Thread execution logic 1800 may include one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache 1806, data port 1814, sampler 1810, and execution units 1808A-1808N. Each execution unit (e.g. 1808A) may be a stand-alone programmable general- purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the
array of execution units 1808A-1808N is scalable to include any number individual execution units.[0270] The execution units 1808A-1808N may be primarily used to execute shader programs. A shader processor 1802 can process the various shader programs and dispatch execution threads associated with the shader programs via a thread dispatcher 1804. The thread dispatcher may include logic to arbitrate thread initiation requests from the graphics and media pipelines and instantiate the requested threads on one or more execution units 1808A-1808N.For example, a geometry pipeline can dispatch vertex, tessellation, or geometry shaders to the thread execution logic for processing. Optionally, the thread dispatcher 1804 can also process runtime thread spawning requests from the executing shader programs.[0271] The execution units 1808A-1808N may support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with a minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders) and general- purpose processing (e.g., compute and media shaders). Each of the execution units 1808A- 1808N is capable of multi-issue single instruction multiple data (SIMD) execution and multi threaded operation enables an efficient execution environment in the face of higher latency memory accesses. Each hardware thread within each execution unit has a dedicated high- bandwidth register file and associated independent thread-state. Execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations,SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the execution units 1808A-1808N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader, such as vertex shader 2107 illustrated in FIG. 21. Various embodiments can apply to use execution by use of Single Instruction Multiple Thread (SIMT) as an alternate to use of SIMD or in addition to use of SIMD. Reference to a SIMD core or operation can apply also to SIMT or apply to SIMD in combination with SIMT.[0272] Each execution unit in execution units 1808A-1808N operates on arrays of data elements. The number of data elements is the“execution size,” or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access,
masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs), Floating-Point Units (FPUs), or other logic units (e.g., tensor cores, ray tracing cores, etc.) for a particular graphics processor. Additionally, the execution units 1808A-1808N may support integer and floating-point data types.
[0273] The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.
[0274] Optionally, one or more execution units can be combined into a fused execution unit 1809A-1809N having thread control logic (1807A-1807N) that is common to the fused EUs. Multiple EUs can be fused into an EU group. Each EU in the fused EU group can be configured to execute a separate SIMD hardware thread. The number of EUs in a fused EU group can vary according to embodiments. Additionally, various SIMD widths can be performed per-EU, including but not limited to SIMD8, SIMD16, and SIMD32. Each fused graphics execution unit 1809A-1809N includes at least two execution units. For example, fused execution unit 1809A includes a first EU 1808A, second EU 1808B, and thread control logic 1807A that is common to the first EU 1808A and the second EU 1808B. The thread control logic 1807A controls threads executed on the fused graphics execution unit 1809A, allowing each EU within the fused execution units 1809A-1809N to execute using a common instruction pointer register.
[0275] One or more internal instruction caches (e.g., 1806) are included in the thread execution logic 1800 to cache thread instructions for the execution units. One or more data caches (e.g., 1812) may be included in the thread execution logic 1800 to cache thread data during thread execution. Threads executing on the execution logic 1800 can also store explicitly managed data in the shared local memory 1811. A sampler 1810 may be included to provide texture sampling for 3D operations and media sampling for media operations. Sampler 1810 may include specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.
[0276] During execution, the graphics and media pipelines send thread initiation requests to thread execution logic 1800 via thread spawning and dispatch logic. Once a group of geometric
objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor 1802 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). A pixel shader or fragment shader may calculate the values of the various vertex attributes that are to be interpolated across the rasterized object. The pixel processor logic within the shader processor 1802 may then execute an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor 1802 dispatches threads to an execution unit (e.g., 1808A) via thread dispatcher 1804. Shader processor 1802 may use texture sampling logic in the sampler 1810 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discards one or more pixels from further processing.[0277] In addition, the data port 1814 may provide a memory access mechanism for the thread execution logic 1800 to output processed data to memory for further processing on a graphics processor output pipeline. The data port 1814 may include or couple to one or more cache memories (e.g., data cache 1812) to cache data for memory access via the data port 1814.[0278] Optionally, the execution logic 1800 can also include a ray tracer 1805 that can provide ray tracing acceleration functionality. The ray tracer 1805 can support a ray tracing instruction set that includes instructions/functions for ray generation. The ray tracing instruction set can be similar to or different from the ray-tracing instruction set supported by the ray tracing cores 372 in Fig. 3C.[0279] FIG. 18B illustrates exemplary internal details of an execution unit 1808. A graphics execution unit 1808 can include an instruction fetch unit 1837, a general register file array (GRF) 1824, an architectural register file array (ARF) 1826, a thread arbiter 1822, a send unit 1830, a branch unit 1832, a set of SIMD floating point units (FPUs) 1834, and optionally a set of dedicated integer SIMD ALUs 1835. The GRF 1824 and ARF 1826 includes the set of general register files and architecture register files associated with each simultaneous hardware thread that may be active in the graphics execution unit 1808. Per thread architectural state may be maintained in the ARF 1826, while data used during thread execution is stored in the GRF 1824. The execution state of each thread, including the instruction pointers for each thread, can be held in thread- specific registers in the ARF 1826.[0280] The graphics execution unit 1808 may have an architecture that is a combination of Simultaneous Multi-Threading (SMT) and fine-grained Interleaved Multi-Threading (IMT). The architecture may have a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and number of registers per execution unit, where
execution unit resources are divided across logic used to execute multiple simultaneous threads. The number of logical threads that may be executed by the graphics execution unit 1808 is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread.
[0281] Optionally, the graphics execution unit 1808 can co-issue multiple instructions, which may each be different instructions. The thread arbiter 1822 of the graphics execution unit 1808 can dispatch the instructions to one of the send unit 1830, branch unit 1832, or SIMD FPU(s) 1834 for execution. Each execution thread can access 128 general-purpose registers within the GRF 1824, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. Each execution unit thread may have access to 4 Kbytes within the GRF 1824, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. The graphics execution unit 1808 may be partitioned into seven hardware threads that can independently perform computational operations, although the number of threads per execution unit can also vary according to embodiments; for example, up to 16 hardware threads may be supported. In an exemplary embodiment, in which seven threads may access 4 Kbytes, the GRF 1824 can store a total of 28 Kbytes. In another exemplary embodiment, where 16 threads may access 4 Kbytes, the GRF 1824 can store a total of 64 Kbytes. The number of threads per execution unit is, however, not limited to those examples and may be more or less than the given numbers. Flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures.
[0282] Additionally or alternatively, memory operations, sampler operations, and other longer-latency system communications may be dispatched via “send” instructions that are executed by the message-passing send unit 1830. Branch instructions may be dispatched to a dedicated branch unit 1832 to facilitate SIMD divergence and eventual convergence.
[0283] The graphics execution unit 1808 may include one or more SIMD floating point units (FPU(s)) 1834 to perform floating-point operations. The FPU(s) 1834 may also support integer computation. In some instances, the FPU(s) 1834 can SIMD execute up to M number of 32-bit floating-point (or integer) operations, or SIMD execute up to 2M 16-bit integer or 16-bit floating-point operations. Optionally, at least one of the FPU(s) provides extended math capability to support high-throughput transcendental math functions and double precision 64-bit floating-point. A set of 8-bit integer SIMD ALUs 1835 may also be present, and may be specifically optimized to perform operations associated with machine learning computations.
[0284] Optionally, arrays of multiple instances of the graphics execution unit 1808 can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). For scalability, product architects
can choose the exact number of execution units per sub-core grouping. The execution unit 1808 may execute instructions across a plurality of execution channels. In addition, each thread executed on the graphics execution unit 1808 may be executed on a different channel.
[0285] FIG. 19 illustrates a further exemplary execution unit 1900. The elements of FIG. 19 having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. The execution unit 1900 may be a compute-optimized execution unit for use in, for example, a compute engine tile 1640A-1640D as in FIG. 16C, but is not limited as such. The execution unit 1900 may also be used in a graphics engine tile 1610A-1610D as in FIG. 16B. The execution unit 1900 may include a thread control unit 1901, a thread state unit 1902, an instruction fetch/prefetch unit 1903, and an instruction decode unit 1904. The execution unit 1900 may additionally include a register file 1906 that stores registers that can be assigned to hardware threads within the execution unit. The execution unit 1900 may additionally include a send unit 1907 and a branch unit 1908. The send unit 1907 and branch unit 1908 may operate similarly to the send unit 1830 and the branch unit 1832 of the graphics execution unit 1808 of FIG. 18B.
[0286] The execution unit 1900 can also include a compute unit 1910 that includes multiple different types of functional units. The compute unit 1910 may also include an ALU unit 1911 that includes an array of arithmetic logic units. The ALU unit 1911 can be configured to perform 64-bit, 32-bit, and 16-bit integer and floating-point operations. Integer and floating point operations may be performed simultaneously. The compute unit 1910 can also include a systolic array 1912, and a math unit 1913. The systolic array 1912 includes a W-wide and D-deep network of data processing units that can be used to perform vector or other data-parallel operations in a systolic manner. The systolic array 1912 can be configured to perform matrix operations, such as matrix dot product operations. The systolic array 1912 may support 16-bit floating point operations, as well as 8-bit and 4-bit integer operations. The systolic array 1912 may be configured to accelerate machine learning operations. The systolic array 1912 can be configured with support for bfloat16, a 16-bit floating point format. A math unit 1913 can be included to perform a specific subset of mathematical operations in an efficient and lower-power manner than the ALU unit 1911. The math unit 1913 can include math logic found in the shared function logic of a graphics processing engine provided by other embodiments described herein, e.g., the math logic 1722 of the shared function logic 1720 of FIG. 17. The math unit 1913 can be configured to perform 32-bit and 64-bit floating point operations.
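Paragraph [0286] notes that the systolic array 1912 can support bfloat16, a 16-bit floating point format, and can perform matrix dot product operations. The C sketch below shows how bfloat16 relates to IEEE-754 single precision and a scalar dot product of the kind such an array accelerates; it is a software illustration only (NaN and infinity handling are omitted) and does not model the hardware data path.

    #include <stdint.h>
    #include <string.h>

    /* bfloat16 keeps the sign and 8 exponent bits of an IEEE-754 float and
     * the top 7 mantissa bits, so conversion truncates the low 16 bits
     * (shown here with round-to-nearest-even on the discarded bits). */
    typedef uint16_t bf16;

    static bf16 float_to_bf16(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        uint32_t rounding = 0x7FFFu + ((bits >> 16) & 1u);  /* nearest even */
        return (bf16)((bits + rounding) >> 16);
    }

    static float bf16_to_float(bf16 h)
    {
        uint32_t bits = (uint32_t)h << 16;
        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }

    /* Dot product of two bf16 vectors with float accumulation, the kind of
     * multiply-accumulate sequence a systolic array performs in hardware. */
    static float bf16_dot(const bf16 *a, const bf16 *b, int n)
    {
        float acc = 0.0f;
        for (int i = 0; i < n; i++)
            acc += bf16_to_float(a[i]) * bf16_to_float(b[i]);
        return acc;
    }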
[0287] The thread control unit 1901 includes logic to control the execution of threads within the execution unit. The thread control unit 1901 can include thread arbitration logic to start, stop, and preempt execution of threads within the execution unit 1900. The thread state unit 1902 can be used to store thread state for threads assigned to execute on the execution unit 1900. Storing the thread state within the execution unit 1900 enables the rapid pre-emption of threads when those threads become blocked or idle. The instruction fetch/prefetch unit 1903 can fetch instructions from an instruction cache of higher- level execution logic (e.g., instruction cache 1806 as in FIG. 18A). The instruction fetch/prefetch unit 1903 can also issue prefetch requests for instructions to be loaded into the instruction cache based on an analysis of currently executing threads. The instruction decode unit 1904 can be used to decode instructions to be executed by the compute units. The instruction decode unit 1904 can be used as a secondary decoder to decode complex instructions into constituent micro-operations.[0288] The execution unit 1900 additionally includes a register file 1906 that can be used by hardware threads executing on the execution unit 1900. Registers in the register file 1906 can be divided across the logic used to execute multiple simultaneous threads within the compute unit 1910 of the execution unit 1900. The number of logical threads that may be executed by the graphics execution unit 1900is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread. The size of the register file 1906 can vary across embodiments based on the number of supported hardware threads. Register renaming may be used to dynamically allocate registers to hardware threads.[0289] FIG. 20 is a block diagram illustrating a graphics processor instruction format 2000. The graphics processor execution units support an instruction set having instructions in multiple formats. The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a sub-set of the instructions. The instruction formats 2000 described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.[0290] The graphics processor execution units as described herein may natively support instructions in a 128-bit instruction format 2010. A 64-bit compacted instruction format 2030 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format 2010 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 2030. The native instructions available in the 64-bit format 2030 vary by embodiment. The instruction is compacted in part using a set of index values in an index field 2013. The execution unit hardware references a set of compaction tables based on the index values and uses the
compaction table outputs to reconstruct a native instruction in the 128-bit instruction format 2010. Other sizes and formats of instruction can be used.
[0291] For each format, instruction opcode 2012 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. Instruction control field 2014 may enable control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format 2010, an exec-size field 2016 limits the number of data channels that will be executed in parallel. An exec-size field 2016 may not be available for use in the 64-bit compact instruction format 2030.
[0292] Some execution unit instructions have up to three operands including two source operands, src0 2020, src1 2022, and one destination 2018. The execution units may support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 2024), where the instruction opcode 2012 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.
[0293] The 128-bit instruction format 2010 may include an access/address mode field 2026 specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction.
[0294] The 128-bit instruction format 2010 may also include an access/address mode field 2026, which specifies an address mode and/or an access mode for the instruction. The access mode may be used to define a data access alignment for the instruction. Access modes including a 16-byte aligned access mode and a 1-byte aligned access mode may be supported, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands.
[0295] The address mode portion of the access/address mode field 2026 may determine whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used, bits in the instruction directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands
may be computed based on an address register value and an address immediate field in the instruction.
[0296] Instructions may be grouped based on opcode 2012 bit-fields to simplify opcode decode 2040. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. A move and logic opcode group 2042 may include data movement and logic instructions (e.g., move (mov), compare (cmp)). Move and logic group 2042 may share the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 2044 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 2046 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 2048 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group 2048 performs the arithmetic operations in parallel across data channels. The vector math group 2050 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands. The illustrated opcode decode 2040, in one embodiment, can be used to determine which portion of an execution unit will be used to execute a decoded instruction. For example, some instructions may be designated as systolic instructions that will be performed by a systolic array. Other instructions, such as ray-tracing instructions (not shown), can be routed to a ray-tracing core or ray-tracing logic within a slice or partition of execution logic.
Graphics Pipeline
[0297] FIG. 21 is a block diagram of graphics processor 2100, according to another embodiment. The elements of FIG. 21 having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such.
[0298] The graphics processor 2100 may include different types of graphics processing pipelines, such as a geometry pipeline 2120, a media pipeline 2130, a display engine 2140, thread execution logic 2150, and a render output pipeline 2170. Graphics processor 2100 may be a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor may be controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor 2100 via a ring interconnect 2102. Ring interconnect 2102 may couple graphics processor 2100 to other
processing components, such as other graphics processors or general-purpose processors.Commands from ring interconnect 2102 are interpreted by a command streamer 2103, which supplies instructions to individual components of the geometry pipeline 2120 or the media pipeline 2130.[0299] Command streamer 2103 may direct the operation of a vertex fetcher 2105 that reads vertex data from memory and executes vertex-processing commands provided by command streamer 2103. The vertex fetcher 2105 may provide vertex data to a vertex shader 2107, which performs coordinate space transformation and lighting operations to each vertex. Vertex fetcher 2105 and vertex shader 2107 may execute vertex-processing instructions by dispatching execution threads to execution units 2152A-2152B via a thread dispatcher 2131.[0300] The execution units 2152A-2152B may be an array of vector processors having an instruction set for performing graphics and media operations. The execution units 2152A-2152B may have an attached LI cache 2151 that is specific for each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.[0301] A geometry pipeline 2120 may include tessellation components to perform hardware- accelerated tessellation of 3D objects. A programmable hull shader 2111 may configure the tessellation operations. A programmable domain shader 2117 may provide back-end evaluation of tessellation output. A tessellator 2113 may operate at the direction of hull shader 2111 and contain special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to geometry pipeline 2120. In addition, if tessellation is not used, tessellation components (e.g., hull shader 2111, tessellator 2113, and domain shader 2117) can be bypassed.[0302] Complete geometric objects may be processed by a geometry shader 2119 via one or more threads dispatched to execution units 2152A-2152B, or can proceed directly to the clipper 2129. The geometry shader may operate on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If the tessellation is disabled the geometry shader 2119 receives input from the vertex shader 2107. The geometry shader 2119 may be programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled.[0303] Before rasterization, a clipper 2129 processes vertex data. The clipper 2129 may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. A rasterizer and depth test component 2173 in the render output pipeline 2170 may dispatch pixel shaders to convert the geometric objects into per pixel representations. The pixel shader logic may be included in thread execution logic 2150. Optionally, an application can bypass the
rasterizer and depth test component 2173 and access un-rasterized vertex data via a stream out unit 2123.[0304] The graphics processor 2100 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor. In some embodiments, execution units 2152A-2152B and associated logic units (e.g., LI cache 2151, sampler 2154, texture cache 2158, etc.) interconnect via a data port 2156 to perform memory access and communicate with render output pipeline components of the processor. A sampler 2154, caches 2151, 2158 and execution units 2152A- 2152B each may have separate memory access paths. Optionally, the texture cache 2158 can also be configured as a sampler cache.[0305] The render output pipeline 2170 may contain a rasterizer and depth test component 2173 that converts vertex-based objects into an associated pixel-based representation. The rasterizer logic may include a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache 2178 and depth cache 2179 are also available in some embodiments. A pixel operations component 2177 performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g. bit block image transfers with blending) are performed by the 2D engine 2141, or substituted at display time by the display controller 2143 using overlay display planes. A shared L3 cache 2175 may be available to all graphics components, allowing the sharing of data without the use of main system memory.[0306] The graphics processor media pipeline 2130 may include a media engine 2137 and a video front-end 2134. Video front-end 2134 may receive pipeline commands from the command streamer 2103. The media pipeline 2130 may include a separate command streamer. Video front-end 2134 may process media commands before sending the command to the media engine 2137. Media engine 2137 may include thread spawning functionality to spawn threads for dispatch to thread execution logic 2150 via thread dispatcher 2131.[0307] The graphics processor 2100 may include a display engine 2140. This display engine 2140 may be external to processor 2100 and may couple with the graphics processor via the ring interconnect 2102, or some other interconnect bus or fabric. Display engine 2140 may include a 2D engine 2141 and a display controller 2143. Display engine 2140 may contain special purpose logic capable of operating independently of the 3D pipeline. Display controller 2143 may couple with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector.[0308] The geometry pipeline 2120 and media pipeline 2130 maybe configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to
any one application programming interface (API). A driver software for the graphics processor may translate API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. Support may be provided for the OpenGraphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute API, all from the Khronos Group. Support may also be provided for the Direct3D library from the Microsoft Corporation. A combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.Graphics Pipeline Programming[0309] FIG. 22A is a block diagram illustrating a graphics processor command format 2200 used for programming graphics processing pipelines, such as, for example, the pipelines described herein in conjunction with FIG. 16A, 17, 21. FIG. 22B is a block diagram illustrating a graphics processor command sequence 2210 according to an embodiment. The solid lined boxes in FIG. 22A illustrate the components that are generally included in a graphics command while the dashed lines include components that are optional or that are only included in a sub- set of the graphics commands. The exemplary graphics processor command format 2200 of FIG. 22A includes data fields to identify a client 2202, a command operation code (opcode) 2204, and data 2206 for the command. A sub-opcode 2205 and a command size 2208 are also included in some commands.[0310] Client 2202 may specify the client unit of the graphics device that processes the command data. A graphics processor command parser may examine the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. The graphics processor client units may include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit may have a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 2204 and, if present, sub-opcode 2205 to determine the operation to perform. The client unit performs the command using information in data field 2206. For some commands an explicit command size 2208 is expected to specify the size of the command. The command parser may automatically determine the size of at least some of the commands based on the command opcode. Commands may be aligned via multiples of a double word. Other command formats can also be used.[0311] The flow diagram in FIG. 22B illustrates an exemplary graphics processor command sequence 2210. Software or firmware of a data processing system that features an exemplary graphics processor may use a version of the command sequence shown to set up, execute, and
terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only and is not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as batch of commands in a command sequence, such that the graphics processor will process the sequence of commands in at least partially concurrence.[0312] The graphics processor command sequence 2210 may begin with a pipeline flush command 2212 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. Optionally, the 3D pipeline 2222 and the media pipeline 2224 may not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked‘dirty’ can be flushed to memory. Pipeline flush command 2212 can be used for pipeline synchronization or before placing the graphics processor into a low power state.[0313] A pipeline select command 2213 may be used when a command sequence requires the graphics processor to explicitly switch between pipelines. A pipeline select command 2213 may be required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines. A pipeline flush command 2212 may be required immediately before a pipeline switch via the pipeline select command 2213.[0314] A pipeline control command 2214 may configure a graphics pipeline for operation and may be used to program the 3D pipeline 2222 and the media pipeline 2224. The pipeline control command 2214 may configure the pipeline state for the active pipeline. The pipeline control command 2214 may be used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.[0315] Return buffer state commands 2216 may be used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. The graphics processor may also use one or more return buffers to store output data and to perform cross thread communication. The return buffer state 2216 may include selecting the size and number of return buffers to use for a set of pipeline operations.[0316] The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 2220, the command sequence is
tailored to the 3D pipeline 2222 beginning with the 3D pipeline state 2230 or the media pipeline 2224 beginning at the media pipeline state 2240.
[0317] The commands to configure the 3D pipeline state 2230 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. The 3D pipeline state 2230 commands may also be able to selectively disable or bypass certain pipeline elements if those elements will not be used.
[0318] A 3D primitive 2232 command may be used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 2232 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 2232 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. The 3D primitive 2232 command may be used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline 2222 dispatches shader execution threads to graphics processor execution units.
[0319] The 3D pipeline 2222 may be triggered via an execute 2234 command or event. A register write may trigger command execution. An execution may be triggered via a ‘go’ or ‘kick’ command in the command sequence. Command execution may be triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back-end operations may also be included for those operations.
[0320] The graphics processor command sequence 2210 may follow the media pipeline 2224 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 2224 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. The media pipeline can also be bypassed and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. The media pipeline may also include elements for general-purpose graphics processing unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.
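As a software-side illustration of the command sequence of FIG. 22B built from commands shaped like the format 2200 of FIG. 22A, the C sketch below strings together a flush, pipeline select, pipeline state, 3D primitive, and execute command in that order. Every enumeration value, field width, and structure layout here is a placeholder assumption for illustration; the actual client IDs, opcodes, and encodings are defined by the graphics processor's command format.

    #include <stdint.h>

    /* Illustrative encodings; real client IDs, opcodes, and field widths are
     * defined by the command format 2200 and are not reproduced here. */
    enum { CLIENT_RENDER = 0, CLIENT_MEDIA = 1 };
    enum {
        OP_PIPELINE_FLUSH  = 0x01,
        OP_PIPELINE_SELECT = 0x02,
        OP_PIPELINE_STATE  = 0x03,
        OP_3D_PRIMITIVE    = 0x04,
        OP_EXECUTE         = 0x05
    };

    struct cmd {
        uint8_t  client;   /* which client unit parses the command */
        uint8_t  opcode;   /* operation to perform */
        uint16_t size;     /* command size in dwords, if not implied */
        uint32_t data;     /* payload or pointer to indirect state */
    };

    /* Build a minimal 3D command sequence in the order of FIG. 22B:
     * flush, select the 3D pipeline, set state, submit a primitive, execute. */
    static int build_3d_sequence(struct cmd *buf, int capacity)
    {
        const struct cmd seq[] = {
            { CLIENT_RENDER, OP_PIPELINE_FLUSH,  1, 0 },
            { CLIENT_RENDER, OP_PIPELINE_SELECT, 1, 0 /* 3D pipeline */ },
            { CLIENT_RENDER, OP_PIPELINE_STATE,  2, 0 /* state pointer */ },
            { CLIENT_RENDER, OP_3D_PRIMITIVE,    2, 0 /* vertex data pointer */ },
            { CLIENT_RENDER, OP_EXECUTE,         1, 0 },
        };
        int n = (int)(sizeof seq / sizeof seq[0]);
        if (n > capacity)
            return -1;
        for (int i = 0; i < n; i++)
            buf[i] = seq[i];
        return n;
    }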
[0321] Media pipeline 2224 may be configured in a similar manner as the 3D pipeline 2222. A set of commands to configure the media pipeline state 2240 are dispatched or placed into a command queue before the media object commands 2242. Commands for the media pipeline state 2240 may include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. Commands for the media pipeline state 2240 may also support the use of one or more pointers to“indirect” state elements that contain a batch of state settings.[0322] Media object commands 2242 may supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. Optionally, all media pipeline states must be valid before issuing a media object command 2242. Once the pipeline state is configured and media object commands 2242 are queued, the media pipeline 2224 is triggered via an execute command 2244 or an equivalent execute event (e.g., register write). Output from media pipeline 2224 may then be post processed by operations provided by the 3D pipeline 2222 or the media pipeline 2224. GPGPU operations may be configured and executed in a similar manner as media operations.Graphics Software Architecture[0323] FIG. 23 illustrates an exemplary graphics software architecture for a data processing system 2300. Such a software architecture may include a 3D graphics application 2310, an operating system 2320, and at least one processor 2330. Processor 2330 may include a graphics processor 2332 and one or more general-purpose processor core(s) 2334. The processor 2330 may be a variant of the processor 1402 or any other of the processors described herein. The processor 2330 may be used in place of the processor 1402 or any other of the processors described herein. Therefore, the disclosure of any features in combination with the processor 1402 or any other of the processors described herein also discloses a corresponding combination with the graphics processor 2330, but is not limited to such. Moreover, the elements of FIG. 23 having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. The graphics application 2310 and operating system 2320 are each executed in the system memory 2350 of the data processing system.[0324] 3D graphics application 2310 may contain one or more shader programs including shader instructions 2312. The shader language instructions may be in a high-level shader language, such as the High-Level Shader Language (HLSL) of Direct3D, the OpenGL Shader Language (GLSL), and so forth. The application may also include executable instructions 2314
in a machine language suitable for execution by the general-purpose processor core 2334. The application may also include graphics objects 2316 defined by vertex data.[0325] The operating system 2320 may be a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system 2320 can support a graphics API 2322 such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system 2320 uses a front-end shader compiler 2324 to compile any shader instructions 2312 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation or the application can perform shader pre-compilation. High-level shaders may be compiled into low-level shaders during the compilation of the 3D graphics application 2310. The shader instructions 2312 may be provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.[0326] User mode graphics driver 2326 may contain a back-end shader compiler 2327 to convert the shader instructions 2312 into a hardware-specific representation. When the OpenGL API is in use, shader instructions 2312 in the GLSL high-level language are passed to a user mode graphics driver 2326 for compilation. The user mode graphics driver 2326 may use operating system kernel mode functions 2328 to communicate with a kernel mode graphics driver 2329. The kernel mode graphics driver 2329 may communicate with graphics processor 2332 to dispatch commands and instructions.IP Core Implementations[0327] One or more aspects may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein.[0328] FIG. 24A is a block diagram illustrating an IP core development system 2400 that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system 2400 may be used to generate modular, re-usable
designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 2430 can generate a software simulation 2410 of an IP core design in a high-level programming language (e.g., C/C++). The software simulation 2410 can be used to design, test, and verify the behavior of the IP core using a simulation model 2412. The simulation model 2412 may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design 2415 can then be created or synthesized from the simulation model 2412. The RTL design 2415 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 2415, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary.[0329] The RTL design 2415 or equivalent may be further synthesized by the design facility into a hardware model 2420, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a third-party fabrication facility 2465 using non-volatile memory 2440 (e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection 2450 or wireless connection 2460. The fabrication facility 2465 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.[0330] FIG. 24B illustrates a cross-section side view of an integrated circuit package assembly 2470. The integrated circuit package assembly 2470 illustrates an implementation of one or more processor or accelerator devices as described herein. The package assembly 2470 includes multiple units of hardware logic 2472, 2474 connected to a substrate 2480. The logic 2472, 2474 may be implemented at least partly in configurable logic or fixed-functionality logic hardware, and can include one or more portions of any of the processor core(s), graphics processor(s), or other accelerator devices described herein. Each unit of logic 2472, 2474 can be implemented within a semiconductor die and coupled with the substrate 2480 via an interconnect structure 2473. The interconnect structure 2473 may be configured to route electrical signals between the logic 2472, 2474 and the substrate 2480, and can include interconnects such as, but not limited to, bumps or pillars. The interconnect structure 2473 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic 2472, 2474. Optionally, the substrate 2480 may be an
epoxy-based laminate substrate. The substrate 2480 may also include other suitable types of substrates. The package assembly 2470 can be connected to other electrical devices via a package interconnect 2483. The package interconnect 2483 may be coupled to a surface of the substrate 2480 to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module.[0331] The units of logic 2472, 2474 may be electrically coupled with a bridge 2482 that is configured to route electrical signals between the logic 2472, 2474. The bridge 2482 may be a dense interconnect structure that provides a route for electrical signals. The bridge 2482 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic 2472, 2474.[0332] Although two units of logic 2472, 2474 and a bridge 2482 are illustrated, embodiments described herein may include more or fewer logic units on one or more dies. The one or more dies may be connected by zero or more bridges, as the bridge 2482 may be excluded when the logic is included on a single die. Alternatively, multiple dies or units of logic can be connected by one or more bridges. Additionally, multiple logic units, dies, and bridges can be connected together in other possible configurations, including three-dimensional configurations.[0333] FIG. 24C illustrates a package assembly 2490 that includes multiple units of hardware logic chiplets connected to a substrate 2480 (e.g., base die). A graphics processing unit, parallel processor, and/or compute accelerator as described herein can be composed from diverse silicon chiplets that are separately manufactured. In this context, a chiplet is an at least partially packaged integrated circuit that includes distinct units of logic that can be assembled with other chiplets into a larger package. A diverse set of chiplets with different IP core logic can be assembled into a single device. Additionally, the chiplets can be integrated into a base die or base chiplet using active interposer technology. The concepts described herein enable the interconnection and communication between the different forms of IP within the GPU. IP cores can be manufactured using different process technologies and composed during manufacturing, which avoids the complexity of converging multiple IPs, especially on a large SoC with several flavors of IPs, to the same manufacturing process. Enabling the use of multiple process technologies improves the time to market and provides a cost-effective way to create multiple product SKUs. Additionally, the disaggregated IPs are more amenable to being power gated independently; components that are not in use on a given workload can be powered off, reducing overall power consumption.[0334] The hardware logic chiplets can include special-purpose hardware logic chiplets
2472 and logic or I/O chiplets 2474 may be implemented at least partly in configurable logic or fixed-functionality logic hardware and can include one or more portions of any of the processor core(s), graphics processor(s), parallel processors, or other accelerator devices described herein. The memory chiplets 2475 can be DRAM (e.g., GDDR, HBM) memory or cache (SRAM) memory.[0335] Each chiplet can be fabricated as separate semiconductor die and coupled with the substrate 2480 via an interconnect structure 2473. The interconnect structure 2473 may be configured to route electrical signals between the various chiplets and logic within the substrate 2480. The interconnect structure 2473 can include interconnects such as, but not limited to bumps or pillars. In some embodiments, the interconnect structure 2473 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic, I/O and memory chiplets.[0336] The substrate 2480 may be an epoxy-based laminate substrate, however, it is not limited to that and the substrate 2480 may also include other suitable types of substrates. The package assembly 2490 can be connected to other electrical devices via a package interconnect 2483. The package interconnect 2483 may be coupled to a surface of the substrate 2480 to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module.[0337] A logic or I/O chiplet 2474 and a memory chiplet 2475 may be electrically coupled via a bridge 2487 that is configured to route electrical signals between the logic or I/O chiplet 2474 and a memory chiplet 2475. The bridge 2487 may be a dense interconnect structure that provides a route for electrical signals. The bridge 2487 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic or I/O chiplet 2474 and a memory chiplet 2475. The bridge 2487 may also be referred to as a silicon bridge or an interconnect bridge. For example, the bridge 2487 is an Embedded Multi-die Interconnect Bridge (EMIB). Alternatively, the bridge 2487 may simply be a direct connection from one chiplet to another chiplet.[0338] The substrate 2480 can include hardware components for I/O 2491, cache memory 2492, and other hardware logic 2493. A fabric 2485 can be embedded in the substrate 2480 to enable communication between the various logic chiplets and the logic 2491, 2493 within the substrate 2480. Optionally, the I/O 2491, fabric 2485, cache, bridge, and other hardware logic 2493 can be integrated into a base die that is layered on top of the substrate 2480. The fabric 2485 may be a network on a chip interconnect or another form of packet switched fabric that switches data packets between components of the package assembly.
[0339] Furthermore, a package assembly 2490 can also include a smaller or greater number of components and chiplets that are interconnected by a fabric 2485 or one or more bridges 2487. The chiplets within the package assembly 2490 may be arranged in a 3D or 2.5D arrangement. In general, bridge structures 2487 may be used to facilitate a point-to-point interconnect between, for example, logic or I/O chiplets and memory chiplets. The fabric 2485 can be used to interconnect the various logic and/or I/O chiplets (e.g., chiplets 2472, 2474, 2491, 2493) with other logic and/or I/O chiplets. The cache memory 2492 within the substrate can act as a global cache for the package assembly 2490, part of a distributed global cache, or as a dedicated cache for the fabric 2485.[0340] FIG. 24D illustrates a package assembly 2494 including interchangeable chiplets 2495, according to an embodiment. The interchangeable chiplets 2495 can be assembled into standardized slots on one or more base chiplets 2496, 2498. The base chiplets 2496, 2498 can be coupled via a bridge interconnect 2497, which can be similar to the other bridge interconnects described herein and may be, for example, an EMIB. Memory chiplets can also be connected to logic or I/O chiplets via a bridge interconnect. I/O and logic chiplets can communicate via an interconnect fabric. The base chiplets can each support one or more slots in a standardized format for one of logic, I/O, or memory/cache.[0341] SRAM and power delivery circuits may be fabricated into one or more of the base chiplets 2496, 2498, which can be fabricated using a different process technology relative to the interchangeable chiplets 2495 that are stacked on top of the base chiplets. For example, the base chiplets 2496, 2498 can be fabricated using a larger process technology, while the interchangeable chiplets can be manufactured using a smaller process technology. One or more of the interchangeable chiplets 2495 may be memory (e.g., DRAM) chiplets. Different memory densities can be selected for the package assembly 2494 based on the power and/or performance targeted for the product that uses the package assembly 2494. Additionally, logic chiplets with a different number or type of functional units can be selected at time of assembly based on the power and/or performance targeted for the product. Additionally, chiplets containing IP logic cores of differing types can be inserted into the interchangeable chiplet slots, enabling hybrid processor designs that can mix and match different technology IP blocks.Exemplary System on a Chip Integrated Circuit[0342] FIG. 25-26 illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores. The elements of FIG. 25-26 having the same or similar names as the elements of any other figure herein describe the same elements as in the
other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such.[0343] FIG. 25 is a block diagram illustrating an exemplary system on a chip integrated circuit 2500 that may be fabricated using one or more IP cores. Exemplary integrated circuit 2500 includes one or more application processor(s) 2505 (e.g., CPUs), at least one graphics processor 2510, which may be a variant of the graphics processor 1408, 1508, 2510, or of any graphics processor described herein and may be used in place of any graphics processor described. Therefore, the disclosure of any features in combination with a graphics processor herein also discloses a corresponding combination with the graphics processor 2510, but is not limited to such. The integrated circuit 2500 may additionally include an image processor 2515 and/or a video processor 2520, any of which may be a modular IP core from the same or multiple different design facilities. Integrated circuit 2500 may include peripheral or bus logic including a USB controller 2525, UART controller 2530, an SPI/SDIO controller 2535, and an I2S/I2C controller 2540. Additionally, the integrated circuit can include a display device 2545 coupled to one or more of a high-definition multimedia interface (HDMI) controller 2550 and a mobile industry processor interface (MIPI) display interface 2555. Storage may be provided by a flash memory subsystem 2560 including flash memory and a flash memory controller. Memory interface may be provided via a memory controller 2565 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 2570.[0344] FIG. 26A-26B are block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein. The graphics processors may be variants of the graphics processor 1408, 1508, 2510, or any other graphics processor described herein. The graphics processors may be used in place of the graphics processor 1408, 1508,2510, or any other of the graphics processors described herein. Therefore, the disclosure of any features in combination with the graphics processor 1408, 1508, 2510, or any other of the graphics processors described herein also discloses a corresponding combination with the graphics processors of FIG. 26A-26B, but is not limited to such. FIG. 26A illustrates an exemplary graphics processor 2610 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. FIG. 26B illustrates an additional exemplary graphics processor 2640 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor 2610 of FIG. 26A is an example of a low power graphics processor core. Graphics processor 2640 of FIG. 26B is an example of a higher performance graphics processor core. For example, each of
the graphics processors 2610, 2640 can be a variant of the graphics processor 2510 of FIG. 25, as mentioned at the outset of this paragraph.[0345] As shown in FIG. 26A, graphics processor 2610 includes a vertex processor 2605 and one or more fragment processor(s) 2615A-2615N (e.g., 2615A, 2615B, 2615C, 2615D, through 2615N-1, and 2615N). Graphics processor 2610 can execute different shader programs via separate logic, such that the vertex processor 2605 is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s) 2615A-2615N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor 2605 performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data. The fragment processor(s) 2615A-2615N use the primitive and vertex data generated by the vertex processor 2605 to produce a framebuffer that is displayed on a display device. The fragment processor(s) 2615A-2615N may be optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in the Direct3D API.[0346] Graphics processor 2610 additionally includes one or more memory management units (MMUs) 2620A-2620B, cache(s) 2625A-2625B, and circuit interconnect(s) 2630A-2630B. The one or more MMU(s) 2620A-2620B provide for virtual-to-physical address mapping for the graphics processor 2610, including for the vertex processor 2605 and/or fragment processor(s) 2615A-2615N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more cache(s) 2625A-2625B. The one or more MMU(s) 2620A-2620B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s) 2505, image processor 2515, and/or video processor 2520 of FIG. 25, such that each processor 2505-2520 can participate in a shared or unified virtual memory system. Components of graphics processor 2610 may correspond with components of other graphics processors described herein. The one or more MMU(s) 2620A-2620B may correspond with MMU 245 of FIG. 2C. Vertex processor 2605 and fragment processor 2615A-2615N may correspond with graphics multiprocessor 234. The one or more circuit interconnect(s) 2630A-2630B enable graphics processor 2610 to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection, according to embodiments. The one or more circuit interconnect(s) 2630A-2630B may correspond with the data crossbar 240 of FIG. 2C. Further correspondence may be found between analogous components of the graphics processor 2610 and the various graphics processor architectures described herein.[0347] As shown in FIG. 26B, graphics processor 2640 includes the one or more MMU(s) 2620A-2620B, cache(s) 2625A-2625B, and circuit interconnect(s) 2630A-2630B of the graphics
processor 2610 of FIG. 26A. Graphics processor 2640 includes one or more shader cores 2655A-2655N (e.g., 2655A, 2655B, 2655C, 2655D, 2655E, 2655F, through 2655N-1, and 2655N), which provides for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. Additionally, graphics processor 2640 includes an inter-core task manager 2645, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 2655A-2655N and a tiling unit 2658 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches. Shader cores 2655A-2655N may correspond with, for example, graphics multiprocessor 234 as in FIG. 2D, or graphics multiprocessors 325, 350 of FIG. 3A and 3B respectively, or multi-core group 365A of FIG. 3C.MULTI-TILE MEMORY MANAGEMENT FOR DETECTING CROSS TILE ACCESS[0348] For a multiple GPU configuration, frequent cross-tile accesses between GPUs that land on remote memory may incur long delays. A single GPU typically only keeps a local cache updated with local memory. In other words, a first GPU updates a first local cache based on memory requests for a first local memory while not updating non-local cache of other GPUs. If the first GPU needs to obtain data from a non-local cache or memory, then the first GPU will obtain this data via a cross-GPU link that may have a longer latency compared to local links of the first GPU.[0349] The present design detects frequent cross tile access (e.g., communication links 2791-2794 of FIG. 27) to remote memory resources and triggers a page fault (soft) to transfer a per-process translation mapping of pages or data (e.g., translation of virtual addresses to physical addresses) to local physical memory closer to a requesting GPU. This design detects transfer patterns automatically and starts transferring pages ahead of time.[0350] FIG. 27 shows a system 2700 that includes a processor 2707 coupled to a graphics processor 2702 having multiple GPUs. The graphics processor 2702 includes GPUs 2710, 2720, 2730, and 2740 (e.g., parallel processor 200, GPU 380, 410-413, 700, 806A-806D, 1306, 1510) and respective local memory 2770-1, 2770-2, 2770-3, and 2770-4 (e.g., high bandwidth memory, DRAM) for each GPU. The graphics processor 2702 includes communication links 2790-2798 (e.g., high speed bidirectional interconnects, PCIe x 16). The cache 2780-1 is normally updated or synchronized with memory 2770-1, but does not update with non-local memory 2770-2, 2770-3, and 2770-4. If GPU 2720 needs to obtain data from non-local memory, then the GPU will need to utilize one or more of the communication links 2791-2795.
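A minimal sketch, assuming a hypothetical per-link hardware counter visible to software, of how frequent cross-tile accesses to remote memory might be detected; the counter layout and the threshold value are illustrative assumptions rather than part of any actual memory controller interface.

    // Illustrative detection of frequent cross-tile accesses: one counter per
    // remote GPU tile is incremented on each access that must cross a GPU-to-GPU
    // link, and a threshold check decides when to trigger migration of the hot
    // pages to local memory. Counter layout and threshold are assumptions.
    #include <array>
    #include <cstdint>

    constexpr int kNumTiles = 4;
    constexpr uint64_t kCrossTileThreshold = 1024;  // accesses per sampling window

    struct CrossTileMonitor {
      std::array<uint64_t, kNumTiles> remote_access_count{};

      void recordAccess(int remote_tile) { ++remote_access_count[remote_tile]; }

      // Returns the remote tile whose data should be migrated locally, or -1.
      int hotRemoteTile() const {
        for (int t = 0; t < kNumTiles; ++t)
          if (remote_access_count[t] >= kCrossTileThreshold) return t;
        return -1;
      }

      void resetWindow() { remote_access_count.fill(0); }
    };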
[0351] Memory controllers 2715, 2725, 2735, and 2745 couple a respective GPU to a local memory 2770-1, 2770-2, 2770-3, and 2770-4 which may be a system memory (e.g., DRAM) and/or a dedicated graphics memory (e.g., GDDR6 memory).[0352] In one embodiment, a memory controller detects frequent cross tile access (e.g., communication links 2791-2794 of FIG. 27) to a remote or farthest memory resource and causes a driver to trigger a page fault (soft) to transfer a per-process translation mapping of pages or data (e.g., translation of virtual addresses to physical addresses) to local physical memory closer to its tile. This design detects transfer patterns automatically (e.g., access page N) and starts transferring pages or data (e.g., access N+2) prior to subsequent pages being requested.[0353] In one example, a memory controller 2715 detects that GPU 2710 is frequently accessing data from memory 2770-2. A counter may count a number of accesses from the GPU 2710 to a remote memory. The memory controller 2715 will automatically detect this transfer pattern and cause a driver to trigger a page fault to transfer a per-process translation mapping of pages or data (e.g., translation of virtual addresses to physical addresses) to local physical memory 2770-1. The mapping information and data from memory 2770-2 is transferred to memory 2770-1 to reduce a frequency of the GPU 2710 needing to access communication link 2791 to obtain data from remote memory 2770-2.[0354] FIG. 27 illustrates additional optional details for an interconnection between a multi-core processor 2707 and the graphics processor 2702. The graphics processor 2702 may include one or more GPU chips integrated on a die which is coupled to the processor 2707 via the high-speed link 2790. Alternatively, the graphics processor 2702 may be integrated on the same package or chip as the processor 2707.[0355] The illustrated processor 2707 includes a plurality of cores 2760A-2760D, each with a translation lookaside buffer 2761A-2761D and one or more caches 2762A-2762D. The cores may include various other components for executing instructions and processing data which are not illustrated to avoid obscuring the underlying principles of the components described herein (e.g., instruction fetch units, branch prediction units, decoders, execution units, reorder buffers, etc.). The caches 2762A-2762D may comprise level 1 (L1) and level 2 (L2) caches. In addition, one or more shared caches 2756 may be included in the caching hierarchy and shared by sets of the cores 2760A-2760D. For example, one embodiment of the processor 2707 includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, each of the L2 and L3 caches is shared by two adjacent cores. The processor 2707 and the graphics processor 2702 connect with memory 2770-1, 2770-2, 2770-3, and 2770-4 (e.g., system memory).
[0356] Coherency is maintained for data and instructions stored in the various caches 2762A-2762D and system memory 2770-1, 2770-2, 2770-3, and 2770-4 via inter-core communication over a coherence bus 2764. For example, each cache may have cache coherency logic/circuitry associated therewith to communicate to over the coherence bus 2764 in response to detected reads or writes to particular cache lines. In one implementation, a cache snooping protocol is implemented over the coherence bus 2764 to snoop cache accesses. Cache snooping/coherency techniques are well understood by those of skill in the art and will not be described in detail here to avoid obscuring the underlying principles described herein.[0357] A proxy circuit 2725 may be provided that communicatively couples the graphics processor 2702 to the coherence bus 2764, allowing the graphics processor to participate in the cache coherence protocol as a peer of the cores. In particular, an interface 2735 provides connectivity to the proxy circuit 2725 over high-speed link 2790 (e.g., a PCIe bus, NVLink, etc.).[0358] In one implementation, an accelerator integration circuit 2736 provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing units (GPUs) 2710, 2720, 2730, and 2740. The accelerator integration circuit 2736 also can include common system hardware resources 2731 for the GPUs 2710, 2720, 2730, and 2740. These common system hardware resources 2731 include shared IO (e.g., PCIe, USB) and system control of voltage, clocking, performance, thermals, and security. The accelerator integration circuit 2736 is coupled to the GPUs with communication links 2795-2798 (e.g., high speed bidirectional interconnects, PCIe x 16).[0359] The accelerator integration circuit 2736 may include a memory management unit 2739 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory. The MMU 2739 may also include a translation lookaside buffer (TLB) (not shown) for caching the virtual/effective to physical/real address translations. In one implementation, a cache 2738 of the accelerator integration circuit 2736 stores commands and data for efficient access by the GPUs. The data stored in the cache 2738 may be kept coherent with the core caches 2762A-2762D and system memory 2770-1, 2770-2, 2770-3, and 2770-4. As mentioned, this may be accomplished via proxy circuit 2725 which takes part in the cache coherency mechanism on behalf of the cache 2738 (e.g., sending updates to the cache 2738 related to modifications/accesses of cache lines on processor caches and receiving updates from the cache 2738).[0360] In one implementation, virtual/effective addresses from a GPU are translated to real/physical addresses in system memory by the MMU. The graphics processor may be
dedicated to a single application executed on the processor 2707 or may be shared between multiple applications. Optionally, a virtualized graphics execution environment is provided in which the resources of the GPUs are shared with multiple applications or virtual machines (VMs). The resources may be subdivided into "slices" which are allocated to different VMs and/or applications based on the processing.[0361] Input/output (I/O) circuitry 2763 couples the processor 2707 to one or more I/O devices 2764 such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices 2764 to the GPUs and memory 2770-1, 2770-2, 2770-3, and 2770-4. One or more I/O memory management units (IOMMUs) of the I/O circuitry 2763 may couple the I/O devices 2764 directly to the system memory 2770-1, 2770-2, 2770-3, and 2770-4. Optionally, the IOMMU manages multiple sets of page tables to map virtual addresses to physical addresses in system memory. The I/O devices 2764, processor 2707, and GPU(s) may then share the same virtual address space. Alternatively, input/output (I/O) circuitry 2763 is located within graphics processor 2702 and couples the GPUs to one or more I/O devices 2764 such as digital signal processors (DSPs), network controllers, or user input devices.[0362] The processor 2707, GPUs, and I/O devices 2764 may be integrated on a single semiconductor chip and/or chip package. Alternatively, the processor 2707, GPUs, and I/O devices 2764 may be separate semiconductor chips and/or chip packages. The components or modules (e.g., processor 2707, GPUs, and I/O devices 2764) of system 2700 can be communicatively connected as a network-on-chip (NoC). The illustrated memory 2770-1, 2770-2, 2770-3, and 2770-4 may be integrated on the same chip or may be coupled to the memory controllers via an off-chip interface. In one implementation, the memory 2770-1, 2770-2, 2770-3, and 2770-4 comprises GDDR6 memory which shares the same virtual address space as other physical system-level memories, although the underlying principles described herein are not limited to this specific implementation.[0363] In one implementation, the system 2700 has non-uniform memory access (NUMA) with memory access to a shared system memory (e.g., 2770) that depends on memory location relative to a GPU. A cache coherent NUMA (ccNUMA) uses inter-processor or inter-GPU communication between cache controllers to keep a consistent memory image when more than one cache stores the same memory location. For this reason, ccNUMA may have performance issues when multiple processors or GPUs attempt to access the same memory area in rapid succession. Support for NUMA in operating systems attempts to reduce the frequency of this kind of access by allocating GPUs and memory in NUMA-friendly ways and by avoiding scheduling and locking algorithms that would make NUMA-unfriendly accesses necessary.
[0364] Alternatively, the system 2700 can be implemented with cache coherency protocols such as the MESI or MESIF protocol that attempt to reduce the communication required to maintain cache coherency. The MESI protocol is an invalidate-based cache coherence protocol that supports write-back caches to save bandwidth compared to write-through caches. FIG. 28 illustrates a computer-implemented method 2800 for detecting cross tile access to provide a page transfer mechanism for a graphics processor in accordance with one embodiment. A graphics processing unit, graphics multiprocessor, or graphics processor having a memory controller (e.g., 367, 712A-712B, 1416, 1514, 1568, 2715, 2725, 2735, 2745, etc.), a MMU (e.g., 245, 364, 439, 3039B-3039E), and a driver (e.g., GPGPU driver 608, kernel mode graphics driver 2329, user mode graphics driver 2326) perform operations 2800 in accordance with one embodiment.[0365] At operation 2802, the computer-implemented method includes monitoring cross tile memory accesses from a local GPU to one or more remote GPUs in a multi-GPU configuration. At operation 2804, the computer-implemented method includes determining, with a memory controller or MMU, whether frequent cross tile memory accesses occur from a local GPU to one or more remote GPUs in a multi-GPU configuration. In one example, frequent cross tile memory accesses occur when the memory accesses exceed a threshold rate or frequency.[0366] At operation 2806, if frequent cross tile memory accesses occur from a local GPU to one or more remote GPUs, then the memory controller or MMU initiates a data transfer mechanism by sending a notification or interrupt message to a driver. At operation 2808, the driver provides a data transfer mechanism (e.g., a page fault (soft)) in response to the notification or interrupt message. The page transfer mechanism (or data transfer mechanism) transfers a per-process translation mapping of pages or data (e.g., translation of virtual addresses to physical addresses) to local physical memory closer to the local GPU. In another example, a page table provides the translation of virtual addresses to physical addresses instead of transferring or replicating a per-process translation mapping of pages or data (e.g., translation of virtual addresses to physical addresses) to local physical memory closer to the local GPU. The page table can be located in a memory management unit (MMU) (e.g., 245, 364, 439) or virtual address space.[0367] At operation 2810, the data being accessed frequently by the local GPU is transferred or copied to the local memory of the local GPU. In another example, the data transfer mechanism can copy data to multiple tiles, not just a single GPU or the requesting GPU. The data transfer can also copy data to multiple tiles to enable split-frame rendering with a first GPU handling rendering for a first portion of a display and a second GPU handling rendering for a second different portion of the display. This design detects transfer patterns automatically (e.g., access page N, N+1) and starts transferring pages or data (e.g., access N+2) prior to subsequent pages being requested.
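A minimal sketch of the driver-side handling for operations 2806 through 2810, under the assumption that the set of hot pages, a per-process page table, and a page-copy callback are available to the driver; all names and types here are illustrative.

    // Invoked by the driver in response to the notification or interrupt from the
    // memory controller: each frequently accessed page is copied into the local
    // GPU's memory and its translation is retargeted to the local copy.
    #include <cstdint>
    #include <functional>
    #include <unordered_map>
    #include <vector>

    using VirtualAddr  = uint64_t;
    using PhysicalAddr = uint64_t;

    // Per-process virtual-to-physical translations for the pages of interest.
    using PageTable = std::unordered_map<VirtualAddr, PhysicalAddr>;

    void migrateHotPages(
        const std::vector<VirtualAddr>& hot_pages, PageTable& page_table,
        const std::function<PhysicalAddr(PhysicalAddr)>& copy_page_to_local) {
      for (VirtualAddr va : hot_pages) {
        auto it = page_table.find(va);
        if (it == page_table.end()) continue;           // not mapped; skip
        it->second = copy_page_to_local(it->second);    // retarget translation
      }
      // The memory controller then resets its counters before monitoring of
      // cross-tile accesses resumes (operation 2802).
    }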
[0368] If frequent cross tile memory accesses are not detected from a local GPU to one or more remote GPUs, then the method returns to monitoring memory accesses at operation 2802.[0369] In one example, a lazy page allocation occurs when a GPU uses a page stored in a remote location; the page is then copied from the remote location (e.g., system memory) to local memory so that the GPU can allocate the page within the local memory.[0370] MULTI-TILE INFERENCE SCALING WITH MULTICASTING OF DATA VIA COPY OPERATION[0371] The present design includes multi-tile inference scaling based on multicasting of data to multiple locations via a copy operation. This copy operation reads data (e.g., weights for machine learning, constants for high performance computing, a dictionary lookup for natural language processing) once and writes to all tiles or a subset of tiles for a multi-GPU configuration. In another example, implicit Split Rendering with the copy operation is performed based on a mask to copy to a subset of GPUs. The multicasting can include other broadcast patterns (e.g., checkerboard pattern of data across tiles). Different operations such as blocking, transposing, scaling, multiplying, shifting, or compression can occur for data prior to being copied to other tiles.[0372] FIG. 29 shows multi-tile inference scaling with multicasting for a multi-GPU configuration in accordance with one embodiment. The graphics processor 2900 includes GPUs 2910, 2920, 2930, and 2940 (e.g., parallel processor 200, GPU 380, 410-413, 700, 806A-806D, 1306, 1510), respective local memory 2970-1, 2970-2, 2970-3, and 2970-4 (e.g., high bandwidth memory, DRAM) for each GPU, and system memory 2998. The graphics processor 2900 includes communication links 2989-2994 (e.g., high speed bidirectional interconnects, PCIe x 16) between GPUs and system memory. If a GPU needs to obtain data from a non-local memory, then the GPU will need to utilize one or more of the communication links 2989-2994.[0373] MMUs or memory controllers 2915, 2925, 2935, and 2945 utilize memory interconnects to handle communications between a respective GPU and a local memory 2970-1, 2970-2, 2970-3, and 2970-4 which may be a system memory (e.g., DRAM) and/or a dedicated graphics memory (e.g., GDDR6 memory). The MMUs provide access to communication links as well.
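A minimal sketch of the copy operation just described: the source is read once, an optional operation is applied, and the result is written to all tiles or to a mask-selected subset. The buffer type, the mask convention, and the transform hook are assumptions made for this sketch rather than an actual copy engine interface.

    // Read-once, write-to-many copy: the source (e.g., machine-learning weights)
    // is read a single time, an optional per-element operation such as scaling or
    // shifting is applied, and the result is written to every destination whose
    // bit is set in the tile mask (all bits set broadcasts to every tile).
    #include <cstdint>
    #include <functional>
    #include <vector>

    using Tile = std::vector<uint32_t>;  // stand-in for a tile's local memory region

    void multicastCopy(const std::vector<uint32_t>& source,
                       std::vector<Tile*>& destinations, uint32_t tile_mask,
                       std::function<uint32_t(uint32_t)> transform = {}) {
      std::vector<uint32_t> staged = source;        // single read of the source
      if (transform)
        for (uint32_t& word : staged) word = transform(word);

      for (size_t t = 0; t < destinations.size(); ++t)
        if (tile_mask & (1u << t))                  // mask-selected subset of tiles
          *destinations[t] = staged;                // one write per selected tile
    }

    // Example: broadcast doubled weights to tiles 0-3.
    //   std::vector<Tile> tiles(4);
    //   std::vector<Tile*> dsts{&tiles[0], &tiles[1], &tiles[2], &tiles[3]};
    //   multicastCopy(weights, dsts, 0xF, [](uint32_t w) { return w * 2; });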
[0375] In one implementation, the graphics processor 2900 has non-uniform memory access (NUMA) with memory access to a shared system memory (e.g., 2970, 2998) that depends on memory location relative to a GPU. A cache coherent NUMA (ccNUMA) uses inter-processor or inter-GPU communication between cache controllers to keep a consistent memory image when more than one cache stores the same memory location.[0376] Alternatively, the graphics processor 2900 can be implemented with cache coherency protocols such as the MESI or MESIF protocol that attempt to reduce the communication required to maintain cache coherency. The MESI protocol is an invalidate-based cache coherence protocol that supports write-back caches to save bandwidth compared to write-through caches.[0377] As illustrated in FIG. 30, in one optional implementation, a unified memory, addressable via a common virtual memory address space used to access the physical processor memories 3001-3002 and GPU memories 3020-3023, is employed. In this implementation, operations executed on the GPUs 3010-3013 utilize the same virtual/effective memory address space to access the processor memories 3001-3002 and vice versa, thereby simplifying programmability. A first portion of the virtual/effective address space may be allocated to the processor memory 3001, a second portion to the second processor memory 3002, a third portion to the GPU memory 3020, and so on. The entire virtual/effective memory space (sometimes referred to as the effective address space) may thereby be distributed across each of the processor memories 3001-3002 and GPU memories 3020-3023, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory.[0378] Memory management circuitry 3094A-3094E within one or more of the MMUs 3039A-3039E may be provided that includes multicasting functionality to read data (e.g., weights for machine learning, constants for high performance computing, a dictionary lookup for natural language processing) once and write data to addresses within remote memory of all GPUs 3010-3013 or a subset of GPUs for a multi-GPU configuration as illustrated in FIG. 30.[0379] While multiple instances of management circuitry 3094A-3094E are illustrated in FIG. 30, the management circuitry may be implemented within the MMU of one or more host processors 405 and/or within the accelerator integration circuit 436 of FIG. 4B.[0380] FIG. 31 illustrates a fabric interconnect 3124 that can enable communication between graphics engine tiles 3100A-3100D and components such as the video codec 3106 and one or more copy engines 3104. The copy engines 3104 can be used to move data out of, into, and between the memory devices 3126A-3126D via communication links 3123A-3123H and memory that is external to the graphics processor 3120 (e.g., system memory). The fabric interconnect 3124 can also be used to interconnect the graphics engine tiles 3100A-3100D. The
copy engines 3104 are designed to read data once and then copy the data multiple times with multicast to new destinations such as different tiles. Different operations such as blocking, transposing, scaling, multiplying, shifting, or compression can occur for the read data prior to this data being copied via multicast to other tiles.[0381] In one example, during machine learning training, multiple passes for different tiles through an entire training dataset can be completed. The training can have different dropout regularization for reducing overfitting and improving generalization of deep neural networks. A copy initialize operation will fabricate data for initial weights. This data can then be copied to different destinations such as different tiles.[0382] FIG. 32 illustrates a computer-implemented method 3200 for reading data once and then copying the data multiple times with multicast to new destinations such as different tiles for a graphics processor in accordance with one embodiment. A graphics processing unit, graphics multiprocessor, or graphics processor having a memory controller (e.g., 367, 712A-712B, 1416, 1514, 1568, 2715, 2725, 2735, 2745, etc.), a MMU (e.g., 245, 364, 439, 3039B-3039E), a copy engine (e.g., 2999, 3104), and a driver (e.g., GPGPU driver 608, kernel mode graphics driver 2329, user mode graphics driver 2326) perform operations 3200 in accordance with one embodiment.[0383] At operation 3202, the computer-implemented method includes initializing a copy operation by reading data from a memory location. At operation 3204, at least one operation (e.g., blocking, transposing, scaling, multiplying, shifting, or compression) is optionally applied to the read data to generate output data. For example, the at least one operation such as blocking, transposing, scaling, multiplying, shifting, or compression can occur for the read data prior to this data being copied via multicast to other tiles.[0384] For compression of the read data, a compression ratio will be determined for the read data (e.g., control surface data, main surface data). The data can then be compressed prior to sending the data to other memory locations for different GPUs.[0385] At operation 3206, the computer-implemented method includes performing the copy operation by copying the read data (or output data) to one or more memory locations in a multi- GPU configuration via multicast. A graphics driver determines addresses of the memory locations for copying the read data (or output data). In computer networking, multicast is group communication where data transmission is addressed to a group of destination devices simultaneously. Multicast can be one-to-many or many-to-many distribution.OPTIMAL MIGRATION OF PAGES IN A MULTI-GPU SYSTEM[0386] In another example, optimal migration of configurable pages (e.g., 4 kilobyte (KB) page, 1 megabyte (MB) page, 1 terabyte (TB) page) in a multi-GPU system may be
implemented. In a shared-memory GPU system, memory pages are shared across GPUs. Numerous delays can occur, particularly for cross-GPU accesses, and these delays negatively impact latency, power consumption, and bandwidth of communication links. The present design optimizes the page migration in a multi-GPU system to minimize latency, power consumption, and bandwidth used for the cross-GPU communication.[0387] FIG. 33 illustrates a multi-GPU system having a distributed memory model in accordance with one embodiment. The system 3300 includes GPUs 3301-3306 and respective memory 3321-3326 (e.g., HBM, system memory) for each GPU. GPUs and memory communicate via memory interconnects (e.g., memory interconnects 450A-450D). The GPUs communicate with each other using cross-GPU links 3050-3055 (e.g., Ethernet, high-speed links 442A-442B). The cross-GPU links may have communication throughput of 4GB/s, 30GB/s, 80GB/s, or higher, depending on the implementation. Various interconnect protocols may be used including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0. However, the underlying principles described herein are not limited to any particular communication protocol or throughput.[0388] By way of example, and not limitation, the GPU memories may be volatile memories such as dynamic random-access memories (DRAMs) (including stacked DRAMs), Graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM) and/or may be non-volatile memories such as 3D XPoint or Nano-Ram. For example, some portion of the memories may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy).[0389] Alternatively, all communication between the various system components shown in FIG. 33 may be accomplished using the same protocols/links (e.g., over a common interconnection fabric). As mentioned, however, the underlying principles described herein are not limited to any particular type of interconnect technology.[0390] If a GPU has to modify a page in memory, then this GPU needs to have ownership of the page for this modification. Software or hardware can track modification of pages with bits to be stored in a page sharing table for each GPU as illustrated in FIG. 34. The table 3400 includes a page ID 3410 for each page in a memory, a valid field 350 to indicate whether data in a page is valid or not, and a dirty field 3460 to indicate whether data in the page has been modified or not. The pages stored in GPU memory could be in a dirty state (modified state) vs. a non-modified state. When a page is in the non-modified state, multiple GPUs could have a copy.[0391] FIG. 35 illustrates a network table for a number of hops between GPUs in accordance with one embodiment. The table 3500 includes GPU IDs 3510 to identify each GPU from FIG. 33 and a hops field 3550 to indicate a minimum number of hops between GPUs. In this example,
the table 3500 is designed for GPU 3302 and each GPU will have its own network table. A hop field 3551 having a value of 1 indicates 1 hop from GPU 3302 to GPU 3301. A hop field 3553 having a value of 1 indicates 1 hop from GPU 3302 to GPU 3303. A hop field 3554 having a value of 2 indicates 2 hops from GPU 3302 to GPU 3304. A hop field 3556 having a value of 2 indicates 2 hops from GPU 3302 to GPU 3306.[0392] In one example, a page is stored in memory 3321 and 3322, but not in memory 3323. When GPU 3303 requests this page and assuming a valid state and non-modified state (read only), it is optimal to move it from the GPU memory (e.g., memory 3321 or 3322) that is closest to the requesting GPU. In this case, GPU 3303 can obtain the page from memory 3322 in one hop or from memory 3321 in two hops. Thus, to minimize hops on the cross-GPU communication links, the present design will obtain the page from memory 3322. This minimizes the number of hops on the cross-GPU communication links. This is optimal both for performance and power.[0393] FIG. 36 illustrates a computer-implemented method 3600 for optimal page migration between GPUs for a multi-GPU configuration in accordance with one embodiment. A graphics processing unit, graphics multiprocessor, or graphics processor having a memory controller (e.g., 367, 712A-712B, 1416, 1514, 1568, 2715, 2725, 2735, 2745, etc.) or a MMU (e.g., 245, 364, 439, 3039B-3039E) performs operations 3600 in accordance with one embodiment.[0394] At operation 3602, the computer-implemented method includes receiving, with a memory controller or MMU, a memory request for a page from a requesting GPU of a multi-GPU configuration. At operation 3604, the memory controller or MMU performs a lookup of a page ID for the requested page from a page sharing table (e.g., table 3400). At operation 3606, the memory controller or MMU determines whether the page ID has valid state and also non-modified state.[0395] If the page ID has valid state and also non-modified state, then at operation 3608, the memory controller or MMU determines a memory location(s) for the requested page. At operation 3610, the memory controller or MMU performs a lookup of a GPU ID that has the requested page from a network table (e.g., table 3500). At operation 3611, the memory controller or MMU determines a GPU ID from the network table that is the least or fewest number of hops from the requesting GPU. At operation 3612, the memory controller or MMU obtains a copy of the requested page from memory of the GPU that is the least or fewest number of hops from the requesting GPU. If multiple GPUs have the least or fewest number of hops to the requesting GPU, then the GPU can be selected randomly.[0396] At operation 3620, if the page ID does not have both of a valid state and non-modified state, then the page cannot be obtained for the requesting GPU.
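A minimal sketch of the source-selection logic of method 3600, combining a page sharing table (cf. table 3400) with a per-requester hop table (cf. table 3500); the table layouts and the tie-breaking comment are illustrative assumptions.

    // Source selection for a shared-page request: among the GPUs that hold a
    // valid, unmodified copy of the requested page, choose the one with the
    // fewest cross-GPU hops from the requester.
    #include <cstdint>
    #include <limits>
    #include <optional>
    #include <unordered_map>
    #include <vector>

    struct PageState {                 // entry of a page sharing table, cf. table 3400
      bool valid = false;
      bool dirty = false;              // modified state
      std::vector<int> holders;        // GPU IDs that hold a copy of the page
    };

    using PageSharingTable = std::unordered_map<uint64_t, PageState>;  // page ID -> state
    using HopTable         = std::unordered_map<int, int>;             // GPU ID -> hops

    // Returns the GPU to fetch the page from, or std::nullopt if the page is not
    // in a valid, non-modified state (operation 3620). Among valid holders, the
    // GPU with the fewest hops from the requester is chosen (operations 3610-3612).
    std::optional<int> selectSourceGpu(uint64_t page_id,
                                       const PageSharingTable& pages,
                                       const HopTable& hops_from_requester) {
      auto it = pages.find(page_id);
      if (it == pages.end() || !it->second.valid || it->second.dirty)
        return std::nullopt;

      std::optional<int> best;
      int best_hops = std::numeric_limits<int>::max();
      for (int gpu : it->second.holders) {
        auto h = hops_from_requester.find(gpu);
        if (h != hops_from_requester.end() && h->second < best_hops) {
          best_hops = h->second;
          best = gpu;                  // ties could instead be broken randomly
        }
      }
      return best;
    }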
[0397] Many of the methods are described in their most basic form, but processes can be added to or deleted from any of the methods and information can be added or subtracted from any of the described messages without departing from the basic scope of the present embodiments. It will be apparent to those skilled in the art that many further modifications and adaptations can be made. The particular embodiments are not provided to limit the concept but to illustrate it. The scope of the embodiments is not to be determined by the specific examples provided above but only by the claims below.[0398] If it is said that an element "A" is coupled to or with element "B," element A may be directly coupled to element B or be indirectly coupled through, for example, element C. When the specification or claims state that a component, feature, structure, process, or characteristic A "causes" a component, feature, structure, process, or characteristic B, it means that "A" is at least a partial cause of "B" but that there may also be at least one other component, feature, structure, process, or characteristic that assists in causing "B." If the specification indicates that a component, feature, structure, process, or characteristic "may", "might", or "could" be included, that particular component, feature, structure, process, or characteristic is not required to be included. If the specification or claim refers to "a" or "an" element, this does not mean there is only one of the described elements.[0399] An embodiment is an implementation or example. Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. It should be appreciated that in the foregoing description of exemplary embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various novel aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, novel aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims are hereby expressly incorporated into this description, with each claim standing on its own as a separate embodiment.[0400] Some embodiments pertain to Example 1 that includes a graphics processor for a multi-tile architecture, comprising a first graphics processing unit (GPU) having a memory and a memory controller, a second graphics processing unit (GPU) having a memory, and a cross-GPU fabric to communicatively couple the first and second GPUs. The memory controller is
configured to determine whether frequent cross tile memory accesses occur between the first GPU and the second GPU in the multi-GPU configuration and to cause initiation of a data transfer mechanism when frequent cross tile memory accesses occur between the first GPU and the second GPU.[0401] Example 2 includes the subject matter of Example 1, further comprising a hardware counter to count cross tile memory accesses from the first GPU to the memory of the second GPU.[0402] Example 3 includes the subject matter of any of Examples 1-2, wherein the memory controller is configured to determine whether frequent cross tile memory accesses occur between the first GPU and the second GPU in the multi-GPU configuration using data from the hardware counter.[0403] Example 4 includes the subject matter of any of Examples 1-3, wherein the data transfer mechanism is to cause data that is being accessed frequently by the second GPU to be transferred or copied to the memory of the second GPU.[0404] Example 5 includes the subject matter of any of Examples 1-4, wherein the data transfer mechanism is to cause data that is being accessed frequently by the first GPU to be transferred or copied to the memory of the first GPU.[0405] Example 6 includes the subject matter of any of Examples 1-5, wherein the memory controller is configured to detect transfer patterns automatically including accesses between the first and second GPUs.[0406] Example 7 includes the subject matter of any of Examples 1-6, wherein the memory controller is configured to detect transfer patterns automatically including accesses to page N of the memory of the second GPU and to start transferring pages N+1 and N+2 prior to requests for pages N+1 and N+2.[0407] Some embodiments pertain to Example 8 that includes a graphics processing unit (GPU) of a multi-GPU architecture, comprising processing resources to perform graphics operations, a memory, and a memory controller. The memory controller is configured to determine whether frequent cross tile memory accesses occur between the GPU and a remote GPU in the multi-GPU configuration and to cause initiation of a data transfer mechanism when frequent cross tile memory accesses occur between the GPU and the remote GPU.[0408] Example 9 includes the subject matter of Example 8, further comprising a hardware counter to count cross tile memory accesses from the GPU to the remote memory of the remote GPU.
[0409] Example 10 includes the subject matter of any of Examples 8-9, wherein the memory controller is configured to determine whether frequent cross tile memory accesses occur between the GPU and the remote memory of the remote GPU in the multi-GPU configuration using data from the hardware counter.[0410] Example 11 includes the subject matter of any of Examples 8-10, wherein the data transfer mechanism is to cause data that is being accessed frequently by the remote GPU to be transferred or copied to the remote memory.[0411] Example 12 includes the subject matter of any of Examples 8-11, wherein the data transfer mechanism is to cause data that is being accessed frequently by the GPU to be transferred or copied to the memory of the GPU.[0412] Example 13 includes the subject matter of any of Examples 8-12, wherein the memory controller is configured to detect transfer patterns automatically between the GPU and the remote GPU.[0413] Example 14 includes the subject matter of any of Examples 8-13, wherein the memory controller is configured to detect transfer patterns automatically including accesses to page N of the remote memory and to start transferring pages N+1 and N+2 prior to requests for pages N+1 and N+2.[0414] Some embodiments pertain to Example 15 that includes a computer-implemented method to provide a data transfer mechanism for a multiple GPU configuration. The computer-implemented method comprises monitoring cross tile memory accesses from a local GPU to one or more remote GPUs in the multi-GPU configuration, determining, with a memory controller, whether frequent cross tile memory accesses occur from the local GPU to one or more remote GPUs in the multi-GPU configuration, and sending a message to initiate the data transfer mechanism when frequent cross tile memory accesses occur from the local GPU to one or more remote GPUs in the multi-GPU configuration.[0415] Example 16 includes the subject matter of Example 15, the method further comprises receiving, with a graphics driver, the message from the memory controller and providing the data transfer mechanism in response to receiving the message.[0416] Example 17 includes the subject matter of any of Examples 15-16, wherein the data transfer mechanism accesses a page table to provide a translation of virtual addresses to physical addresses.[0417] Example 18 includes the subject matter of any of Examples 15-17, wherein the data transfer mechanism is to transfer or copy the data that is being accessed frequently by the local GPU to the local memory of the local GPU and to local memory of at least one other GPU.
[0418] Example 19 includes the subject matter of any of Examples 15-18, wherein the data transfer mechanism is to transfer or copy the data that is being accessed frequently by the local GPU to multiple tiles or GPUs to enable split frame rendering with a first GPU handling rendering for a first portion of a display and a second GPU handling rendering for a second different portion of the display.[0419] Example 20 includes the subject matter of any of Examples 15-19, the method further comprises performing a page allocation to local memory of the local GPU when a first access to a page in a remote GPU memory occurs.[0420] Some embodiments pertain to Example 21 that includes a graphics processor comprising a first graphics processing unit (GPU) having a memory and a memory management unit (MMU). A second graphics processing unit (GPU) includes a memory, and a cross-GPU fabric communicatively couples the first and second GPUs. The MMU is configured to initiate or receive a copy operation to read data once from a memory location and write with multicasting to all GPUs or a subset of GPUs for a multi-GPU configuration including the second GPU.[0421] Example 22 includes the subject matter of Example 21, wherein the MMU is configured to apply a mask to the read data for implicit split rendering before performing the copy operation.[0422] Example 23 includes the subject matter of any of Examples 21-22, wherein writing with multicasting to all GPUs or a subset of GPUs comprises broadcasting a checkerboard pattern of data across tiles.[0423] Example 24 includes the subject matter of any of Examples 21-23, wherein the MMU is configured to apply a transpose operation to the read data prior to performing the copy operation with multicasting.[0424] Example 25 includes the subject matter of any of Examples 21-24, wherein the MMU is configured to apply an inference scaling operation to the read data including weights for inference prior to performing the copy operation with multicasting.[0425] Example 26 includes the subject matter of Example 21, wherein the MMU is configured to compress the read data prior to this compressed data being copied to other tiles.[0426] Some embodiments pertain to Example 27 that includes a graphics multiprocessor comprising a first graphics processing unit (GPU) having a memory, a second graphics processing unit (GPU) having a memory, a copy engine, and a cross-GPU fabric communicatively coupling the first and second GPUs. The copy engine is configured to perform a copy operation to read data once from a memory location and write with multicasting to all GPUs or a subset of GPUs for a multi-GPU configuration.
[0427] Example 28 includes the subject matter of Example 27, further comprising a memory management unit (MMU) that is configured to apply a mask to the read data for implicit split rendering before performing the copy operation.[0428] Example 29 includes the subject matter of any of Examples 27-28, wherein writing with multicasting to all GPUs or a subset of GPUs comprises broadcasting a checkerboard pattern of data across tiles.[0429] Example 30 includes the subject matter of any of Examples 27-29, wherein the MMU is configured to apply a transpose operation to the read data prior to performing the copy operation with multicasting.[0430] Example 31 includes the subject matter of any of Examples 27-30, wherein performing the copy operation further comprises initializing data for initial weights for multiple passes of training on different tiles and then copying the weights to different destinations such as different tiles.[0431] Example 32 includes the subject matter of any of Examples 27-31, further comprising a compression engine to compress the read data prior to this compressed data being copied to other tiles.[0432] Some embodiments pertain to Example 33 that includes a graphics processor having a multi-tile architecture comprising a first graphics processing unit (GPU) having a memory and a memory management unit (MMU), a second graphics processing unit (GPU) having a memory, and a cross-GPU fabric to communicatively couple the first and second GPUs. The MMU is configured to receive a memory request for a page, to perform a lookup of a page ID for the requested page from a page sharing table, and to determine whether the page ID has a valid state and a non-modified state.[0433] Example 34 includes the subject matter of Example 33, wherein the MMU is further configured to determine at least one memory location for the requested page.[0434] Example 35 includes the subject matter of any of Examples 33-34, wherein the MMU is further configured to perform a lookup of a GPU ID for the requested page from a network table when the page ID has a valid state and a non-modified state.[0435] Example 36 includes the subject matter of any of Examples 33-35, wherein the MMU is further configured to determine a GPU ID from the network table for a GPU that is a least or fewest number of hops from the requesting GPU.[0436] Example 37 includes the subject matter of any of Examples 33-36, wherein the MMU is further configured to obtain a copy of the requested page from memory of the GPU that is the least or fewest number of hops from the requesting GPU.
[0437] Some embodiments pertain to Example 38 that includes a computer-implemented method for optimal page migration between GPUs for a multi-GPU configuration. The method includes receiving, with an MMU, a memory request for a page, performing a lookup of a page ID for the requested page from a page sharing table, and determining whether the page ID has a valid state and a non-modified state.[0438] Example 39 includes the subject matter of Example 38, the method further comprises determining at least one memory location for the requested page.[0439] Example 40 includes the subject matter of any of Examples 38-39, the method further comprises performing a lookup of a GPU ID for the requested page from a network table when the page ID has a valid state and a non-modified state.[0440] Example 41 includes the subject matter of any of Examples 38-40, the method further comprises determining a GPU ID from the network table for a GPU that is a least or fewest number of hops from the requesting GPU.[0441] Example 42 includes the subject matter of any of Examples 38-41, the method further comprises obtaining a copy of the requested page from memory of the GPU that is the least or fewest number of hops from the requesting GPU.[0442] The foregoing description and drawings are to be regarded in an illustrative rather than a restrictive sense. Persons skilled in the art will understand that various modifications and changes may be made to the embodiments described herein without departing from the broader spirit and scope of the invention as set forth in the appended claims.
Various aspects of the present disclosure provide for detecting a condition indicating that a graphics processing unit (GPU) is in an unstable state while receiving GPU commands in a first wireless display mode, transmitting a GPU refresh request message and switching from the first wireless display mode to a second wireless display mode in response to detecting the condition, receiving data sufficient to reset the GPU from the unstable state to a stable state at a random access point (RAP) in a trace of the GPU commands, and switching from the second wireless display mode to the first wireless display mode after receiving the data. The GPU refresh request message may include information requesting the data sufficient to reset the GPU at an upcoming RAP in the trace of the GPU commands. Various other aspects are also provided throughout the present disclosure. |
CLAIMS1. An apparatus configured for wireless display, the apparatus comprising: a transceiver; a memory; a graphics processing unit (GPU); and at least one processor communicatively coupled to the transceiver, the memory, and the GPU, wherein the at least one processor and the memory are configured to: detect a condition indicating that the GPU is in an unstable state while in a first wireless display mode; in response to detecting the condition, transmit a GPU refresh request message and switch from the first wireless display mode to a second wireless display mode; after transmitting the GPU refresh request message, at a random access point (RAP) in a trace of GPU commands, receive data sufficient to reset the GPU from the unstable state to a stable state; and switch from the second wireless display mode to the first wireless display mode after receiving the data.2. The apparatus of claim 1, wherein the condition indicating that the GPU is in the unstable state exists when a number of lost packets of the GPU commands exceeds a threshold number of lost packets of the GPU commands.3. The apparatus of claim 1, wherein the GPU refresh request message comprises information requesting the data sufficient to reset the GPU at an upcoming RAP in the trace of the GPU commands.4. The apparatus of claim 1, wherein the data sufficient to reset the GPU is independent of other data associated with another RAP in the trace of the GPU commands.5. The apparatus of claim 1, wherein the at least one processor and the memory are further configured to switch from the second wireless display mode to the first wireless display mode when a timestamp of a packet associated with the second wireless display mode is the same as a timestamp of a packet associated with the first wireless display mode.6. The apparatus of claim 1, wherein a display quality of the second wireless display mode is less than a display quality of the first wireless display mode.7. The apparatus of claim 1, wherein the at least one processor and the memory are further configured to utilize a Graphics Engine Entity (GEE) in the first wireless display mode and Miracast in the second wireless display mode.8. The apparatus of claim 1, wherein the GPU commands comprise at least one of OpenGL commands or DirectX commands.9. A method for wireless display, the method comprising: detecting a condition indicating that a graphics processing unit (GPU) is in an unstable state while in a first wireless display mode; in response to detecting the condition, transmitting a GPU refresh request message and switching from the first wireless display mode to a second wireless display mode; after transmitting the GPU refresh request message, at a random access point (RAP) in a trace of GPU commands, receiving data sufficient to reset the GPU from the unstable state to a stable state; and switching from the second wireless display mode to the first wireless display mode after receiving the data.10. The method of claim 9, wherein the condition indicating that the GPU is in the unstable state exists when a number of lost packets of the GPU commands exceeds a threshold number of lost packets of the GPU commands.11. The method of claim 9, wherein the GPU refresh request message comprises information requesting the data sufficient to reset the GPU at an upcoming RAP in the trace of the GPU commands.12. The method of claim 9, wherein the data sufficient to reset the GPU is independent of other data associated with another RAP in the trace of the GPU commands.13. 
The method of claim 9, wherein the switch from the second wireless display mode to the first wireless display mode occurs when a timestamp of a packet associated with the second wireless display mode is the same as a timestamp of a packet associated with the first wireless display mode.14. The method of claim 9, wherein a display quality of the second wireless display mode is less than a display quality of the first wireless display mode.15. The method of claim 9, further comprising utilizing a Graphics Engine Entity (GEE) in the first wireless display mode and Miracast in the second wireless display mode.16. The method of claim 9, wherein the GPU commands comprise at least one of OpenGL commands or DirectX commands.17. A computer-readable medium configured for wireless display, the computer-readable medium comprising computer-executable instructions configured for: detecting a condition indicating that a graphics processing unit (GPU) is in an unstable state while in a first wireless display mode; in response to detecting the condition, transmitting a GPU refresh request message and switching from the first wireless display mode to a second wireless display mode; after transmitting the GPU refresh request message, at a random access point (RAP) in a trace of GPU commands, receiving data sufficient to reset the GPU from the unstable state to a stable state; and switching from the second wireless display mode to the first wireless display mode after receiving the data.18. The computer-readable medium of claim 17, wherein the condition indicating that the GPU is in the unstable state exists when a number of lost packets of the GPU commands exceeds a threshold number of lost packets of the GPU commands.19. The computer-readable medium of claim 17, wherein the GPU refresh request message comprises information requesting the data sufficient to reset the GPU at an upcoming RAP in the trace of the GPU commands.20. The computer-readable medium of claim 17, wherein the data sufficient to reset the GPU is independent of other data associated with another RAP in the trace of the GPU commands.21. The computer-readable medium of claim 17, wherein the switch from the second wireless display mode to the first wireless display mode occurs when a timestamp of a packet associated with the second wireless display mode is the same as a timestamp of a packet associated with the first wireless display mode.22. The computer-readable medium of claim 17, wherein a display quality of the second wireless display mode is less than a display quality of the first wireless display mode.23. The computer-readable medium of claim 17, wherein the computer-executable instructions are further configured for utilizing a Graphics Engine Entity (GEE) in the first wireless display mode and Miracast in the second wireless display mode.24. The computer-readable medium of claim 17, wherein the GPU commands comprise at least one of OpenGL commands or DirectX commands.25. 
An apparatus configured for wireless display, the apparatus comprising: means for detecting a condition indicating that a graphics processing unit (GPU) is in an unstable state while in a first wireless display mode; means for transmitting a GPU refresh request message and switching from the first wireless display mode to a second wireless display mode in response to detecting the condition; means for receiving data sufficient to reset the GPU from the unstable state to a stable state at a random access point (RAP) in a trace of GPU commands and after transmitting the GPU refresh request message; and means for switching from the second wireless display mode to the first wireless display mode after receiving the data.26. The apparatus of claim 25, wherein the condition indicating that the GPU is in the unstable state exists when a number of lost packets of the GPU commands exceeds a threshold number of lost packets of the GPU commands.27. The apparatus of claim 25, wherein: the GPU refresh request message comprises information requesting the data sufficient to reset the GPU at an upcoming RAP in the trace of the GPU commands; and the data sufficient to reset the GPU is independent of other data associated with another RAP in the trace of the GPU commands.28. The apparatus of claim 25, wherein the means for switching is configured to switch from the second wireless display mode to the first wireless display mode when a timestamp of a packet associated with the second wireless display mode is the same as a timestamp of a packet associated with the first wireless display mode.29. The apparatus of claim 25, wherein a display quality of the second wireless display mode is less than a display quality of the first wireless display mode.30. The apparatus of claim 25, further comprising means for utilizing a Graphics Engine Entity (GEE) in the first wireless display mode and Miracast in the second wireless display mode, wherein the GPU commands comprise at least one of OpenGL commands or DirectX commands.
SWITCHING A WIRELESS DISPLAY MODE AND REFRESHING A GRAPHICS PROCESSING UNIT IN AN UNSTABLE STATECROSS-REFERENCE TO RELATED APPLICATIONS[0001] This application claims priority to and the benefit of Non-ProvisionalApplication No. 14/863,853 filed in the U.S. Patent and Trademark Office on September 24, 2015.TECHNICAL FIELD[0002] Aspects of the present disclosure relate, generally, to wireless display and, more particularly, to switching a wireless display mode and refreshing a graphics processing unit (GPU) in an unstable state.INTRODUCTION[0003] A wireless display system may enable content to be displayed concurrently on multiple devices. Some wireless display systems may include an apparatus sometimes referred to as a source that transmits information to another apparatus sometimes referred to as a sink. Such information may include various types and forms of data. However, there exists a possibility that some of the information is lost during the wireless transmission. Information may be considered lost when such information does not reach the sink and/or reaches the sink in error. If too much information is lost during the wireless transmission, the graphics processing unit (GPU) of the sink may go into an unstable state. In existing systems, the GPU may remain in the unstable state for a considerable period of time and possibly malfunction, thereby resulting in an undesirable user experience. Overcoming such limitations may enhance the overall system and user experience.SUMMARY[0004] The following presents a simplified summary of one or more aspects of the present disclosure, in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated features of the disclosure, and is intended neither to identify key or critical elements of all aspects of the disclosure nor to delineate the scope of any or all aspects of the disclosure. Its sole purpose is to present some concepts of one or more aspects of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.[0005] In an aspect, the present disclosure provides an apparatus configured for wireless display. The apparatus includes a transceiver, a memory, a graphics processing unit (GPU), and at least one processor communicatively coupled to the transceiver, the memory, and the GPU. The at least one processor and the memory are configured to detect a condition indicating that the GPU is in an unstable state while in a first wireless display mode. The at least one processor and the memory are further configured to transmit a GPU refresh request message and switch from the first wireless display mode to a second wireless display mode in response to detecting the condition. The at least one processor and the memory are further configured to receive data sufficient to reset the GPU from the unstable state to a stable state at a random access point (RAP) in a trace of the GPU commands and after transmitting the GPU refresh request message. The at least one processor and the memory are further configured to switch from the second wireless display mode to the first wireless display mode after receiving the data.[0006] In another aspect, the present disclosure provides a method for wireless display.The method includes detecting a condition indicating that the GPU is in an unstable state while in a first wireless display mode. 
The method also includes transmitting a GPU refresh request message and switching from the first wireless display mode to a second wireless display mode in response to detecting the condition. The method also includes receiving data sufficient to reset the GPU from the unstable state to a stable state at a RAP in a trace of the GPU commands and after transmitting the GPU refresh request message. The method also includes switching from the second wireless display mode to the first wireless display mode after receiving the data.[0007] In yet another aspect, the present disclosure provides a computer-readable medium configured for wireless display. The computer-readable medium includes instructions configured for detecting a condition indicating that the GPU is in an unstable state while in a first wireless display mode. The instructions are further configured for transmitting a GPU refresh request message and switching from the first wireless display mode to a second wireless display mode in response to detecting the condition. The instructions are further configured for receiving data sufficient to reset the GPU from the unstable state to a stable state at a RAP in a trace of the GPU commands and after transmitting the GPU refresh request message. The instructions are further configured for switching from the second wireless display mode to the first wireless display mode after receiving the data.[0008] In a further aspect, the present disclosure provides another apparatus configured for wireless display. The apparatus includes means for detecting a condition indicating that the GPU is in an unstable state while in a first wireless display mode. The apparatus also includes means for transmitting a GPU refresh request message and switching from the first wireless display mode to a second wireless display mode in response to detecting the condition. The apparatus also includes means for receiving data sufficient to reset the GPU from the unstable state to a stable state at a RAP in a trace of the GPU commands and after transmitting the GPU refresh request message. The apparatus also includes means for switching from the second wireless display mode to the first wireless display mode after receiving the data.[0009] These and other aspects of the present disclosure will become more fully understood upon a review of the detailed description, which follows. Other aspects, features, and embodiments of the present disclosure will become apparent to those of ordinary skill in the art, upon reviewing the following description of specific, exemplary embodiments of the present disclosure in conjunction with the accompanying figures. While features of the present disclosure may be discussed relative to certain embodiments and figures below, all embodiments of the present disclosure can include one or more of the advantageous features discussed herein. In other words, while one or more embodiments may be discussed as having certain advantageous features, one or more of such features may also be used in accordance with the various embodiments of the disclosure discussed herein. In similar fashion, while exemplary embodiments may be discussed below as device, system, or method embodiments it should be understood that such exemplary embodiments can be implemented in various devices, systems, and methods.BRIEF DESCRIPTION OF THE DRAWINGS[0010] FIG. 1 is a diagram illustrating an example of a source in communication with a sink according to aspects of the present disclosure.[0011] FIG. 
2 is a diagram illustrating an example of functional blocks representing a data plane and a control plane according to aspects of the present disclosure. [0012] FIG. 3 is a diagram illustrating an example of the source and the sink displaying content in a home environment according to aspects of the present disclosure.[0013] FIG. 4 is a diagram illustrating an example of the source and the sink displaying content in a work environment according to aspects of the present disclosure.[0014] FIG. 5 is a diagram illustrating an example of various communications between the source and the sink according to aspects of the present disclosure.[0015] FIG. 6 is a diagram illustrating another example of various communications between the source and the sink according to aspects of the present disclosure.[0016] FIG. 7 is a diagram illustrating an example of various methods and/or processes operable at an apparatus according to aspects of the present disclosure.[0017] FIG. 8 is a diagram illustrating an example of a hardware implementation of an apparatus according to aspects of the present disclosure.DESCRIPTION OF SOME EXAMPLES[0018] The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts.[0019] FIG. 1 is a diagram 100 illustrating an example of a source 102 in communication with a sink 104. Generally, the sink 104 is any apparatus configured for wireless communication with another apparatus (e.g., the source 102). Generally, the source 102 is any apparatus configured for wireless communication with another apparatus (e.g., the sink 104). More specifically, the source 102 and the sink 104 may communicate with each other to enable wireless display, as described in greater detail herein. In some configurations, the source 102 may be referred to as a Wi-Fi Display (WFD) source, and the sink 104 may be referred to as a WFD sink. In such configurations, the source 102 and the sink 104 may communicate with each other in accordance with the Wi-Fi Display technical specifications of the Wi-Fi Alliance™. Although various examples of the source 102 and the sink 104 may be described herein, one of ordinary skill in the art will understand that the present disclosure is not limited to such examples and various other examples of the source 102 and the sink 104 are within the scope of the present disclosure.[0020] In some configurations, as illustrated in FIG. 1, the source 102 may include a central processing unit (CPU) 118, a graphics library 116, a local graphics processing unit (GPU) 114, and/or a local display 112. The CPU 118, graphics library 116, local GPU 114, and/or local display 112 may communicate with each other utilizing various technologies without deviating from the scope of the present disclosure. For example, the CPU 118 may generate content at an application layer and such content may be provided to the graphics library 116. The graphics library 116 may store such content. 
When such content is ready for display, the local GPU 114 may read the content from the graphics library 116. In some circumstances, such content may be displayed at the source 102. In such circumstances, the local GPU 114 may provide such content to the local display 112 of the source 102. In some circumstances, such content may be displayed at the sink 104. Display of such content at the sink 104 may be in addition or alternative to display of such content at the source 102. In other words, such content may sometimes be displayed concurrently at the local display 112 and the remote display 122, and such content may sometimes be displayed at either the local display 112 or the remote display 122.[0021] Various techniques may be implemented in order to display the content on the remote display 122 of the sink 104. In some configurations, which may sometimes be referred to as graphics offload, the source 102 transfers GPU commands (or other suitable instructions) to the sink 104. For example, the local GPU 114 of the source 102 may provide GPU commands (or other suitable instructions) to the remote GPU 124 of the sink 104. In such configurations, the GPU commands are processed at the sink 104, and the content corresponding to those GPU commands is displayed on the remote display 122. Generally, the GPU commands may provide the sink 104 with the instructions for executing processes that enable display of such content at the remote display 122 in a manner that is synchronized with display of such content at the local display 112. In some configurations, the GPU commands include OpenGL commands and/or DirectX commands. Graphics offload varies from other techniques, in part, because (i) the source 102 is not required to buffer, encode, and transmit the entire content to be displayed on the remote display 122, and (ii) the sink 104 is not required to receive, decode, and render that entire content.[0022] One of ordinary skill in the art will understand that the term content may generally refer to many types of data without deviating from the scope of the present disclosure. More specifically, with regard to a Graphics Engine Entity (GEE), content may refer to graphics content. Non-limiting examples of graphics content may include productivity applications and/or games. Additionally or alternatively, content may include various other types of data that can be characterized as content or graphics content by one of ordinary skill in the art.[0023] FIG. 2 is a diagram 200 illustrating an example of functional blocks representing a data plane and a control plane. The data plane may include Video Codec 208, Audio Codec 210, Packetized Elementary Stream (PES) Packetization 216, High-Bandwidth Digital Content Protection (HDCP) 218, Moving Pictures Experts Group 2 Transport Stream (MPEG2-TS) 220, Real-Time Transport Protocol (RTP) 224, User Datagram Protocol (UDP) 228, and Internet Protocol (IP) 230. The control plane may include Real Time Streaming Protocol (RTSP) 222, Transmission Control Protocol (TCP) 226, IP 230, Remote I2C Read/Write 212, User Input Back Channel (UIBC) Capsulation 214, Human Interface Device Class (HIDC) 204, Generic User Input 202, and HDCP 218 session key establishment. The Wi-Fi Peer-2-Peer (P2P) / Tunneled Direct Link Setup (TDLS) block forms Layer-2 connectivity using either Wi-Fi P2P or TDLS. 
One of ordinary skill in the art has knowledge of the Wi-Fi Display technical specifications of the Wi-Fi Alliance™, which provide a detailed description pertaining to the functional blocks illustrated in FIG. 2.[0024] Generally, the GEE 206 is a graphics subsystem that enables graphics offload capabilities. In the example illustrated in FIG. 2, the functional block representing the GEE 206 is shown as located above the PES Packetization 216. However, one of ordinary skill in the art will understand that the functional block representing the GEE 206 may be located in alternative configurations relative to other functional blocks illustrated in FIG. 2 without deviating from the scope of the present disclosure. For example, the functional block representing the GEE 206 may be located directly above the TCP 226. In such configurations, the system may benefit from reduced overhead and/or latency because data from the GEE 206 could pass directly to the IP 230 without passing through the PES packetization 216, HDCP 218, MPEG2-TS 220, RTP 224, and UDP 228.[0025] The source 102 and the sink 104 may communicate with each other utilizing various technologies without deviating from the scope of the present disclosure. In some configurations, the source 102 and the sink 104 may communicate with each other utilizing a TCP/IP connection. In such configurations, the source 102 transmits TCP packets to the sink 104. In some configurations, the source 102 and the sink 104 may communicate with each other utilizing a UDP/IP connection. In such configurations, the source 102 transmits UDP packets to the sink 104. In some configurations, the source 102 and the sink 104 may communicate with each other utilizing a non-IP connection, which may sometimes be referred to as a native Medium Access Control (MAC) connection. In such configurations, the source 102 transmits native MAC packets to the sink 104.[0026] FIG. 3 is a diagram 300 illustrating an example of the source 102 and the sink 104 displaying content in a home environment. FIG. 4 is a diagram 400 illustrating an example of the source 102 and the sink 104 displaying content in a work environment. As illustrated in the examples provided in FIGS. 3 and 4, the content displayed on the sink 104 may be synchronized with the content displayed on the source 102. Additional description pertaining to the source 102 and sink 104 is provided throughout the present disclosure and therefore will not be repeated here.[0027] FIG. 5 is a diagram 500 illustrating an example of various communications between the source 102 and the sink 104. As described above, the source 102 may transfer GPU commands (or other suitable instructions) to the sink 104. The GPU commands are processed at the sink 104, and the content corresponding to those GPU commands is displayed on the remote display 122. The GPU commands may provide the sink 104 with the instructions for executing processes that enable display of such content at the sink 104 in a manner that is synchronized with display of such content at the source 102. In some configurations, the GPU commands include OpenGL commands and/or DirectX commands. The GPU commands may be included in various types and forms of packets without deviating from the scope of the present disclosure. In some configurations, the GPU commands may be included as a portion of the payload in TCP/IP packets, UDP/IP packets, and/or native MAC packets, as described above. In the example illustrated in FIG. 5, some of the GPU commands are included in Packet1 502. 
The source 102 transmits Packet1 502, and Packet1 502 is received at the sink 104 without error. Accordingly, in some configurations, the sink 104 transmits an acknowledgement message (ACK1 504) to the source 102.[0028] In wireless communication, however, there is a possibility that some packets may be lost during transmission. More specifically, in some circumstances, one or more packets containing GPU commands may not be received at the sink 104 and/or may be received at the sink 104 with errors. When a packet containing GPU commands is not received at the sink 104 and/or is received at the sink 104 with errors, that packet may be referred to herein as a lost packet. For example, as illustrated in FIG. 5, the source 102 transmits Packet2 506, which contains GPU commands. However, Packet2 506 is not received at the sink 104 and/or is received at the sink 104 with errors. In some configurations, the sink 104 may transmit a negative acknowledgement message (NACK2 508) to the source 102. In some cases, the absence of an ACK during a period of time following transmission of Packet2 506 may be considered an implicit indication of a NACK. Accordingly, in such cases, the NACK2 508 is not required because it is implicitly inferred from the absence of the ACK. The source 102 may then retry the transmission of Packet2 506. Subsequent packets may also be unable to reach the sink 104 and/or reach the sink 104 with errors. For example, as illustrated in FIG. 5, the source 102 transmits Packet3 510, which contains GPU commands. However, Packet3 510 is not received at the sink 104 and/or is received at the sink 104 with errors. In some configurations, the sink 104 may transmit a negative acknowledgement message (NACK3 512) to the source 102. In some cases, the absence of an ACK during a period of time following transmission of Packet3 510 may be considered an implicit indication of a NACK. Accordingly, in such cases, the NACK3 512 is not required because it is implicitly inferred from the absence of the ACK. The source 102 may then retry the transmission of Packet3 510.[0029] As mentioned above, a lost packet refers to a packet that contains GPU commands and is not received at the sink 104 and/or is received at the sink 104 with errors. The remote GPU 124 of the sink 104 may be in an unstable state when the number of lost packets of GPU commands exceeds a threshold number of lost packets of GPU commands. The remote GPU 124 of the sink 104 may be in an unstable state for various other reasons without deviating from the scope of the present disclosure. As an example, the remote GPU 124 may be in an unstable state when the remote GPU 124 malfunctions as a result of not receiving one or more GPU commands. Malfunctions may result in inoperability and/or crash of the remote GPU 124. As another example, the remote GPU 124 of the sink 104 may be in an unstable state when the remote GPU 124 is unable to perform one or more critical functions of the sink 104. A critical function may refer to any function that is central to the operability of a particular component of the sink 104. As yet another example, the sink 104 may be in an unstable state when the remote GPU 124 is unable to operate in a manner that allows content to be displayed on the remote display 122 of the sink 104. 
As a further example, the sink 104 may be in an unstable state when the remote GPU 124 is unable to properly synchronize the display of content on the remote display 122 of the sink 104 with the content being displayed on the local display 112 of the source 102.[0030] In existing systems, the remote GPU 124 may be unable to recover from such an unstable state for a considerable period of time and, thus, may be unable to properly display the content corresponding to the GPU commands being sent from the source 102 to the sink 104. For example, the sink 104 may be unable to properly synchronize the display of content on the remote display 122 of the sink 104 with the content being displayed on the local display 112 of the source 102. In such circumstances, existing systems may remain in the unstable state for a considerable period of time and possibly malfunction, fail to perform critical functions, become inoperative, crash, hang, freeze, and/or otherwise result in an undesirable user experience. Overcoming such limitations may enhance the overall system and user experience.[0031] FIG. 6 is a diagram 600 illustrating an example of various communications between the source 102 and the sink 104. The source 102 and the sink 104 may be configured to operate utilizing at least two wireless display modes. An example of a first wireless display mode includes the utilization of the GEE 206. As described above, the GEE 206 is a graphics subsystem that enables graphics offload capabilities. In some configurations, the GPU commands may be referred to as GEE frames, GEE packets, and/or any other suitable term without deviating from the scope of the present disclosure. GEE 206 may sometimes be referred to as graphics domain wireless display without deviating from the scope of the present disclosure. In graphics domain wireless display, display data may be captured as OpenGL and/or DirectX calls to the GPU. In graphics domain wireless display, content capture is application-agnostic. In graphics domain wireless display, content capture may occur at the entry of the GPU. In graphics domain wireless display, the pixels may be regenerated at the sink 104 to achieve lossless graphics and text quality. In graphics domain wireless display, the transmitted data may be scaled to the required resolution (e.g., 2K, 1080p, 4K, etc.) with little to no addition to the transmission data rate.[0032] An example of a second wireless display mode includes the utilization of Miracast 604. Generally, Miracast refers to protocols that comply with the Wi-Fi Display technical specifications of the Wi-Fi Alliance™. Such protocols enable connectivity between the source 102 and the sink 104. Miracast may sometimes be referred to as High-Definition Multimedia Interface (HDMI) over Wi-Fi. Miracast enables synchronized display of contents from one device (e.g., the source 102) onto another device (e.g., the sink 104). Miracast may also be referred to as pixel domain wireless display without deviating from the scope of the present disclosure. In pixel domain wireless display, content capture may occur after entry of the GPU. In pixel domain wireless display, some content (e.g., images) may be captured from the display frame buffer in the pixel domain after GPU rendering, and some content (e.g., non-images) may be captured at a display processor of the source 102. 
In pixel domain wireless display, display data may be compressed, and up-sampling transmitted display data at the sink 104 may reduce the quality of text and graphics.[0033] Some packets described herein (e.g., packets 610, 612, 618) may sometimes be referred to as GEE packets, or another suitable term, without deviating from the scope of the present disclosure. While operating in the first wireless display mode (e.g., using GEE 206), the source 102 transmits one or more packets 610 containing GPU commands to the sink 104. Over time, one or more other packets 612 may be lost during the transmission. As mentioned above, a lost packet refers to a packet that contains GPU commands and is not received at the sink 104 and/or is received at the sink 104 with errors. The sink 104 is configured to detect one or more conditions indicating that the remote GPU 124 of the sink 104 is in an unstable state. In some configurations, the sink 104 may detect that the remote GPU 124 is in an unstable state upon detecting that the number of lost packets of GPU commands exceeds a threshold number of lost packets of GPU commands. One of ordinary skill in the art will understand that the threshold number of lost packets of GPU commands may be set by an administrator, preconfigured by the manufacturer, dynamically adjusted based on various factors, and/or determined utilizing many different techniques without deviating from the scope of the present disclosure. As an example, the threshold number of lost packets may be preset as two (2) packets. As illustrated in FIG. 6, the sink 104 may detect that the remote GPU 124 is in an unstable state upon detecting that three (3) packets 612 containing GPU commands were lost.[0034] In response to detecting that the remote GPU 124 is in an unstable state, the sink 104 may transmit a GPU refresh request message 614 (to the source 102) and switch from the first wireless display mode (e.g., using GEE 206) to a second wireless display mode (e.g., using Miracast 606). In some configurations, the GPU refresh request message 614 may be referred to as a GPU reset message, or any other suitable term, without deviating from the scope of the present disclosure. The switch from the first wireless display mode (e.g., using GEE 206) to the second wireless display mode (e.g., using Miracast 606) enables the sink 104 to fall back to a mode of wireless display that is not suffering from lost packets containing GPU commands.
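For illustration only, the sink-side detection and fallback behavior described in paragraphs [0033]-[0034] may be sketched in Python as follows. The LostPacketMonitor class, the callback names, and the threshold value are hypothetical assumptions introduced for clarity and do not represent a specific implementation of the sink 104.

GEE_MODE = "GEE"            # first wireless display mode (graphics domain)
MIRACAST_MODE = "Miracast"  # second wireless display mode (pixel domain)

class LostPacketMonitor:
    """Counts lost GPU-command packets at the sink and, when a threshold is
    exceeded, requests a GPU refresh and falls back to the second mode."""

    def __init__(self, threshold, send_refresh_request, switch_mode):
        self.threshold = threshold            # e.g., preset as 2 lost packets
        self.lost = 0
        self.mode = GEE_MODE
        self.send_refresh_request = send_refresh_request
        self.switch_mode = switch_mode

    def on_packet(self, received_ok):
        """Call once per expected GPU-command packet."""
        if received_ok:
            return
        self.lost += 1
        if self.mode == GEE_MODE and self.lost > self.threshold:
            # Condition detected: remote GPU assumed to be in an unstable state.
            self.send_refresh_request()       # GPU refresh request message 614
            self.switch_mode(MIRACAST_MODE)   # fall back to pixel-domain display
            self.mode = MIRACAST_MODE

# Example: with a threshold of two packets, losing a third packet triggers the fallback.
monitor = LostPacketMonitor(
    threshold=2,
    send_refresh_request=lambda: print("GPU refresh request sent"),
    switch_mode=lambda mode: print("switched to " + mode),
)
for ok in (True, False, False, False):
    monitor.on_packet(ok)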
One of ordinary skill in the art will understand that the severity of the impact may depend upon the priority or importance of the lost function, the application to which that lost GPU command is associated, the behavior of the graphics library to which that lost GPU command is associated, and/or various other factors.[0037] One of ordinary skill in the art will also understand that the specific techniques implemented to enable such a fallback can vary without deviating from the scope of the present disclosure. Although many examples of such techniques exist, one example may involve: (i) determining an identifier (ID) (e.g., Stream ID and/or Session ID) associated with content for display, (ii) using the ID, establishing a session using GEE 206 and a session using Miracast 606 (e.g., prior to receiving packets 610 containing the GPU commands), (iii) subsequently setting the session using Miracast 606 to 'pause' and setting the session using GEE 206 to 'play' (e.g., while receiving the packets 610 containing the GPU commands), and (iv) to initiate the fallback, setting the session using Miracast 606 to 'play' and setting the session using GEE 206 to 'pause. '[0038] This fallback may occur even though the display quality of the second wireless display mode (e.g., using Miracast 606) may be less than the display quality of the first wireless display mode (e.g., using GEE 206). Display quality may refer to the display resolution, refresh rate, picture quality, and/or any other metric associated with the display quality of the content on the remote display 122 of the sink 104. Even though the display quality of the second wireless display mode (e.g., using Miracast 606) may be less than the display quality of the first wireless display mode (e.g., using GEE 206), the user is provided with a viewing experience that is not interrupted as a result of lost packets containing GPU commands and/or a GPU in an instable state.[0039] The GPU refresh request message 614 may include information requesting data616 for resetting the remote GPU 124 of the sink 104. More specifically, in some configurations, the GPU refresh request message 614 may include information requesting data 616 that is sufficient to reset the remote GPU 124 of the sink 104 at an upcoming random access point (RAP) in a trace 602 of the GPU commands. The data 616 may sometimes be referred to as Miracast frames, Miracast packets, and/or other suitable terms without deviating from the scope of the present disclosure. Generally, resetting any GPU may include reinitializing settings, configurations, or conditions of the GPU such that the GPU changes from the unstable state to a stable state. Additional description pertaining to an unstable state is provided above and therefore will not be repeated. A stable state refers to any state that is different from the unstable state. For example, a GPU may be in a stable state when that GPU performs one or more critical functions that it was unable to perform while in the unstable state.[0040] Generally, the trace 602 of the GPU commands refers to a record or file containing all (or substantially all) of the GPU commands transmitted by the source 102 and/or received at the sink 104. The trace 602 of the GPU commands may be stored or maintained in a graphics library of the sink 104. The trace 602 of the GPU commands may be stored in various formats and/or file configurations without deviating from the scope of the present disclosure. 
Generally, a RAP refers to a mark, marker, and/or any other suitable indicator at one or more portions, positions, and/or locations of the trace 602. In some configurations, the trace 602 may be transmitted from the source 102 to the sink 104. The trace 602 may include a plurality of RAPs (e.g., RAP 0, RAP 10, RAP 20, RAP 30, and RAP 40, as illustrated in FIG. 6).[0041] At each RAP, the sink 104 may receive data 616 that is sufficient to reset the remote GPU 124. In some configurations, such data 616 may sometimes be referred to as GEE information, or any other suitable term, without deviating from the scope of the present disclosure. In some configurations, the data 616 is sufficient to reset the remote GPU 124 without requiring other data associated with another RAP in the trace 602 of the GPU commands. In other words, the data 616 is independent of other data associated with a preceding RAP (e.g., RAP 0, RAP 10, RAP 20) in the trace 602 of the GPU commands. For example, RAPNis independent of RAPN-i. In this example, the remote GPU 124 may be reset at RAPNwithout requiring any information available at RAPN-I. AS such, each RAP may be thought of as being 'memory-less' (e.g., not requiring information from, nor dependent on, any preceding RAP). However, because the data 616 may be considerable in quantity, thereby introducing peaks in the over-the- air transmission thereof, such data 616 may not be transmitted unless required for error recovery.[0042] Referring to the example illustrated in FIG. 6, the upcoming RAP (after the sink104 transmits the GPU refresh request message 614 to the source 102) is RAP 30. At RAP 30, the source 102 transmits data 616 that is sufficient to reset the remote GPU 124 of the sink 104 from the unstable state to the stable state. As described above, resetting the remote GPU 124 of the sink 104 may include altering settings, configurations, or conditions of the remote GPU 124 such that the remote GPU 124 changes from the unstable state to the stable state. In some configurations, the data for resetting the remote GPU 124 of the sink 104 may include information associated with the textures, shaders, vertices, and/or any other suitable display attributes. [0043] After receiving the data 616 that is sufficient to reset the remote GPU 124 of the sink 104 from the unstable state to the stable state, the sink 104 may switch from the second wireless display mode (e.g., using Miracast 606) to the first wireless display mode (e.g., using GEE 206). Because the remote GPU 124 of the sink 104 is reset from the unstable state to the stable state, the remote GPU 124 of the sink 104 is capable of receiving additional packets 618 containing GPU commands while operating in the first wireless display mode (e.g., using GEE 206). One of ordinary skill in the art will understand that the specific techniques implemented to enable such a switch can vary without deviating from the scope of the present disclosure. Although many examples of such techniques exist, one example may involve setting the session using Miracast 606 to 'pause' and setting the session using GEE 206 to 'play. '[0044] Although the switch from the second wireless display mode (e.g., using Miracast606) to the first wireless display mode (e.g., using GEE 206) can occur any time after receiving the data 616 sufficient to reset the remote GPU 124 of the sink 104, the exact time at which the switch occurs can vary based on various implementations without deviating from the scope of the present disclosure. 
As described above, a considerable amount of time may be utilized for transmitting such data 616 prior to performing the switch from the second wireless display mode (e.g., using Miracast 606) to the first wireless display mode (e.g., using GEE 206). Accordingly, the switch from the second wireless display mode (e.g., using Miracast 606) to the first wireless display mode (e.g., using GEE 206) may occur after a period of time following the beginning of the transmission of such data 616. A reason for performing this switch after a period of time (e.g., not instantaneously upon transmitting the data 616) is that an earlier portion of that data 616 contains information for resetting the remote GPU 124 of the sink 104 and a later portion of that data 616 contains the GPU commands corresponding to the content that will be rendered (nearly immediately) on the remote display 122 of the sink 104. That later portion of the data 616 may sometimes be referred to as screen content commands, screen content GPU commands, or any other suitable term without deviating from the scope of the present disclosure. Accordingly, in some configurations, the switch from the second wireless display mode (e.g., using Miracast 606) to the first wireless display mode (e.g., using GEE 206) occurs when the timestamp of a packet (e.g., data 616, which may sometimes be referred to as a Miracast frame/packet) associated with the second wireless display mode (e.g., using Miracast 606) is the same as a timestamp of a packet (e.g., packet 618, which may sometimes be referred to as a GEE frame/packet) associated with the first wireless display mode (e.g., using GEE 206).[0045] FIG. 7 is a diagram 700 illustrating an example of various processes that may occur at an apparatus. In some configurations, such an apparatus is the sink 104 described in greater detail herein. At block 702, the apparatus may detect a condition indicating that a GPU is in an unstable state while in a first wireless display mode. For example, referring to FIG. 6, the sink 104 may detect one or more conditions indicating that the remote GPU 124 of the sink 104 is in an unstable state while receiving GPU commands in a first wireless display mode (e.g., using GEE 206). In some configurations, the sink 104 may detect that the remote GPU 124 is in an unstable state upon detecting that the number of lost packets of GPU commands exceeds a threshold number of lost packets of GPU commands. For example, the threshold number of lost packets may be preset as two (2) packets. As illustrated in FIG. 6, the sink 104 may detect that the remote GPU 124 is in an unstable state upon detecting that three (3) packets 612 containing GPU commands were lost.[0046] At block 704, in response to detecting the condition, the apparatus may transmit a GPU refresh request message and switch from the first wireless display mode to a second wireless display mode. For example, referring to FIG. 6, in response to detecting that the remote GPU 124 is in an unstable state, the sink 104 may transmit a GPU refresh request message 614 and switch from the first wireless display mode (e.g., using GEE 206) to a second wireless display mode (e.g., using Miracast 606). The switch from the first wireless display mode (e.g., using GEE 206) to the second wireless display mode (e.g., using Miracast 606) enables the sink 104 to fallback to a mode of wireless display that is not suffering from lost packets containing GPU commands. 
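One way to picture the timestamp-matching switch-back condition of paragraph [0044] is the Python sketch below. The Packet record and the field names are assumptions made for illustration only; the actual packet structures used for Miracast and GEE transport are implementation-specific.

from collections import namedtuple

# Hypothetical packet records: 'mode' distinguishes Miracast packets carrying
# the reset data 616 from GEE packets 618; 'timestamp' is a presentation timestamp.
Packet = namedtuple("Packet", ["mode", "timestamp"])

def ready_to_switch_back(latest_miracast, latest_gee):
    """Switch from the second mode back to the first mode once a Miracast packet
    and a GEE packet carry the same timestamp, i.e., the reset data has caught up
    with the live GPU-command stream."""
    return (
        latest_miracast is not None
        and latest_gee is not None
        and latest_miracast.timestamp == latest_gee.timestamp
    )

# Example: the reset data lags the GEE stream until the timestamps align.
print(ready_to_switch_back(Packet("Miracast", 118), Packet("GEE", 120)))  # False
print(ready_to_switch_back(Packet("Miracast", 120), Packet("GEE", 120)))  # True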
Without such a fallback, the remote GPU 124 of the sink 104 may remain in the unstable state and possibly malfunction, fail to perform critical functions, become inoperative, crash, hang, freeze, and/or otherwise result in an undesirable user experience. [0047] At block 706, the apparatus may receive data sufficient to reset the GPU from the unstable state to a stable state at a RAP in a trace of the GPU commands and after transmitting the GPU refresh request message. For example, referring to FIG. 6, the RAP after the sink 104 transmits the GPU refresh request message 614 to the source 102 is RAP 30. At RAP 30, the source 102 transmits data 616 that is sufficient to reset the remote GPU 124 of the sink 104 from the unstable state to the stable state. As described above, resetting the remote GPU 124 of the sink 104 may include altering settings, configurations, or conditions of the remote GPU 124 such that the remote GPU 124 changes from the unstable state to the stable state. Additional description pertaining to the unstable state is provided above and therefore will not be repeated. A stable state refers to any state that is different from the unstable state. For example, a GPU may be in a stable state when that GPU performs one or more critical functions that it was unable to perform while in the unstable state. [0048] At block 708, after receiving the aforementioned data, the apparatus may switch from the second wireless display mode to the first wireless display mode. In some configurations, the apparatus is configured to utilize GEE 206 in the first wireless display mode and Miracast 606 in the second wireless display mode. Referring to FIG. 6, after receiving the data 616 sufficient to reset the remote GPU 124 of the sink 104 from the unstable state to the stable state, the sink 104 may switch from the second wireless display mode (e.g., using Miracast 606) to the first wireless display mode (e.g., using GEE 206). Because the remote GPU 124 of the sink 104 is reset from the unstable state to the stable state, the remote GPU 124 of the sink 104 is capable of receiving additional packets 618 containing GPU commands while operating in the first wireless display mode (e.g., using GEE 206). [0049] The methods and/or processes described with reference to FIG. 7 are provided for illustrative purposes and are not intended to limit the scope of the present disclosure. The methods and/or processes described with reference to FIG. 7 may be performed in sequences different from those illustrated therein without deviating from the scope of the present disclosure. Additionally, some or all of the methods and/or processes described with reference to FIG. 7 may be performed individually and/or together without deviating from the scope of the present disclosure. It is to be understood that the specific order or hierarchy of steps in the methods disclosed is an illustration of exemplary processes. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the methods may be rearranged. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented unless specifically recited therein. [0050] FIG. 8 is a diagram 800 illustrating an example of a hardware implementation of an apparatus 802 according to various aspects of the present disclosure. Generally, the apparatus 802 may be any device configured for enabling wireless display capabilities. 
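The switch back to the first wireless display mode described in block 708, using the timestamp-matching condition from paragraph [0044] above, can be sketched as follows. The session-control helpers are hypothetical stand-ins for whatever 'pause'/'play' mechanism a particular implementation provides.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of block 708: remain in the fallback mode until a
 * packet of the first mode carries the same timestamp as the fallback
 * frame currently being presented, then swap the two sessions. */

typedef struct { uint64_t timestamp_us; } frame_t;

static void pause_second_mode_session(void) { /* e.g., pause Miracast */ }
static void play_first_mode_session(void)   { /* e.g., resume GEE     */ }

static bool maybe_switch_back(bool gpu_reset_complete,
                              const frame_t *second_mode_frame,
                              const frame_t *first_mode_packet)
{
    if (gpu_reset_complete &&
        second_mode_frame->timestamp_us == first_mode_packet->timestamp_us) {
        pause_second_mode_session();
        play_first_mode_session();
        return true;   /* now operating in the first wireless display mode */
    }
    return false;
}
```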
In some configurations, the apparatus 802 may be the sink 104 described above. The apparatus 802 may include a user interface 812. The user interface 812 may be configured to receive one or more inputs from a user of the apparatus 802. The user interface 812 may also be configured to display information (e.g., text and/or images) to the user of the apparatus 802. The user interface 812 may exchange data via the bus interface 808.[0051] The apparatus 802 may also include a transceiver 810. The transceiver 810 may be configured to receive data and/or transmit data in communication with another apparatus. The transceiver 810 provides a means for communicating with another apparatus via a wired or wireless transmission medium. For example, the transceiver 810 may provide the means for communicating with the source 102, as described in greater detail above. The transceiver 810 may be configured to perform such communications using various types of technologies, as described in greater detail above. One of ordinary skill in the art will understand that many types of technologies may perform such communication without deviating from the scope of the present disclosure.[0052] The apparatus 802 may also include a memory 814, one or more processors 804, a computer-readable medium 806, and a bus interface 808. The bus interface 808 may provide an interface between a bus 816 and the transceiver 810. The memory 814, the one or more processors 804, the computer-readable medium 806, and the bus interface 808 may be connected together via the bus 816. The processor 804 may be communicatively coupled to the transceiver 810 and/or the memory 814.[0053] The processor 804 may include a detection circuit 820. The detection circuit 820 may include various hardware components and/or may perform various algorithms that provide the means for detecting a condition indicating that a GPU is in an unstable state while in a first wireless display mode. The processor 804 may also include a transmission circuit 821. The transmission circuit 821 may include various hardware components and/or may perform various algorithms that provide the means for transmitting a GPU refresh request message and switching from the first wireless display mode to a second wireless display mode in response to detecting the condition. The processor 804 may also include a reception circuit 822. The reception circuit 822 may include various hardware components and/or may perform various algorithms that provide the means for receiving data sufficient to reset the GPU from the unstable state to a stable state at a RAP in a trace of the GPU commands and after transmitting the GPU refresh request message. The processor 804 may also include a control circuit 823. The control circuit 823 may include various hardware components and/or may perform various algorithms that provide the means for switching from the second wireless display mode to the first wireless display mode after receiving the data. The control circuit 823 may include various hardware components and/or may perform various algorithms that provide the means for utilizing a GEE in the first wireless display mode and Miracast in the second wireless display mode.[0054] The foregoing description provides a non-limiting example of the processor 804 of the apparatus 802. 
Although various circuits have been described above, one of ordinary skill in the art will understand that the processor 804 may also include various other circuits (not shown) that are in addition and/or altemative(s) to circuits 820, 821, 822, 823. Such other circuits (not shown) may provide the means for performing any one or more of the functions, methods, processes, features and/or aspects described herein.[0055] The computer-readable medium 806 may include various computer-executable instructions. The computer-executable instructions may include computer-executable code configured to perform various functions and/or enable various aspects described herein. The computer-executable instructions may be executed by various hardware components (e.g., the processor 804 and/or any of its circuits 820, 821, 822, 823) of the apparatus 802. The computer-executable instructions may be a part of various software programs and/or software modules.[0056] The computer-readable medium 806 may include detection instructions 840. The detection instructions 840 may include computer-executable instructions configured for detecting a condition indicating that a GPU is in an unstable state while in a first wireless display mode. The computer-readable medium 806 may also include transmission instructions 841. The transmission instructions 841 may include computer- executable instructions configured for transmitting a GPU refresh request message and switching from the first wireless display mode to a second wireless display mode in response to detecting the condition. The computer-readable medium 806 may include reception instructions 842. The reception instructions 842 may include computer- executable instructions configured for receiving data sufficient to reset the GPU from the unstable state to a stable state at a RAP in a trace of the GPU commands and after transmitting the GPU refresh request message. The computer-readable medium 806 may include control instructions 843. The control instructions 843 may include computer- executable instructions configured for switching from the second wireless display mode to the first wireless display mode after receiving the data. The control instructions 843 may also include computer-executable instructions configured for utilizing a GEE in the first wireless display mode and Miracast in the second wireless display mode.[0057] The foregoing description provides a non-limiting example of the computer- readable medium 806 of the apparatus 802. Although various computer-executable instructions (e.g., computer-executable code) have been described above, one of ordinary skill in the art will understand that the computer-readable medium 806 may also include various other computer-executable instructions (not shown) that are in addition and/or alternative(s) to instructions 840, 841, 842, 843. Such other computer- executable instructions (not shown) may be configured for performing any one or more of the functions, methods, processes, features and/or aspects described herein.[0058] The memory 814 may include various memory modules. The memory modules may be configured to store, and have read therefrom, various values and/or information by the processor 804, or any of its circuits 820, 821, 822, 823. The memory modules may also be configured to store, and have read therefrom, various values and/or information upon execution of the computer-executable code included in the computer- readable medium 806, or any of its instructions 840, 841, 842, 843. 
The memory 814 may include trace information 830. For example, the trace information 830 may be stored or maintained in a graphics library of the sink 104. The trace information 830 may exist in various formats and/or file configurations without deviating from the scope of the present disclosure. The memory 814 may also include RAP information 831. As described above, a RAP refers to a mark, marker, and/or any other suitable indicator at one or more portions, positions, or locations of the trace 602. At each RAP, the sink 104 may receive data 616 that is sufficient to reset a GPU (e.g., the remote GPU 124 of the sink 104). In some configurations, the data 616 for resetting the remote GPU 124 of the sink 104 may include information associated with the textures, shaders, vertices, and/or any other suitable display attributes. [0059] One of ordinary skill in the art will also understand that the apparatus 802 may include alternative and/or additional features without deviating from the scope of the present disclosure. In accordance with various aspects of the present disclosure, an element, or any portion of an element, or any combination of elements may be implemented with a processing system that includes one or more processors 804. Examples of the one or more processors 804 include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. The processing system may be implemented with a bus architecture, represented generally by the bus 816 and bus interface 808. The bus 816 may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus 816 may link together various circuits including the one or more processors 804, the memory 814, and the computer-readable medium 806. The bus 816 may also link various other circuits such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art. [0060] The one or more processors 804 may be responsible for managing the bus 816 and general processing, including the execution of software stored on the computer-readable medium 806. The software, when executed by the one or more processors 804, causes the processing system to perform the various functions described below for any one or more apparatuses. The computer-readable medium 806 may also be used for storing data that is manipulated by the one or more processors 804 when executing software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The software may reside on the computer-readable medium 806. The computer-readable medium 806 may be a non-transitory computer-readable medium. 
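For illustration only, the trace information 830 and RAP information 831 of paragraph [0058] can be pictured as simple records in the sink's memory. The field names and the layout below are assumptions made for this sketch, not a structure defined by the disclosure.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative layout of the stored trace and per-RAP records; all field
 * names and sizes are assumptions for this sketch. */

typedef struct {
    uint32_t rap_index;          /* e.g., 0, 10, 20, 30, 40              */
    size_t   reset_data_bytes;   /* size of the data 616 for this RAP    */
    /* references to textures, shaders, vertices, and other display
     * attributes would hang off this record in a real implementation    */
} rap_info_t;

typedef struct {
    const uint8_t    *trace_bytes;  /* trace 602 of GPU commands         */
    size_t            trace_length;
    const rap_info_t *raps;         /* RAP information 831               */
    size_t            rap_count;
} trace_info_t;
```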
A non-transitory computer-readable medium includes, by way of example, a magnetic storage device (e.g., hard disk, floppy disk, magnetic strip), an optical disk (e.g., a compact disc (CD) or a digital versatile disc (DVD)), a smart card, a flash memory device (e.g., a card, a stick, or a key drive), a random access memory (RAM), a read only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), a register, a removable disk, and any other suitable medium for storing software and/or instructions that may be accessed and read by a computer. The computer-readable medium 806 may also include, by way of example, a carrier wave, a transmission line, and any other suitable medium for transmitting software and/or instructions that may be accessed and read by a computer. The computer-readable medium 806 may reside in the processing system, external to the processing system, or distributed across multiple entities including the processing system. The computer-readable medium 806 may be embodied in a computer program product. By way of example and not limitation, a computer program product may include a computer-readable medium in packaging materials. Those skilled in the art will recognize how best to implement the described functionality presented throughout this disclosure depending on the particular application and the overall design constraints imposed on the overall system.The above description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more. A phrase referring to "at least one of a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover: a; b; c; a and b; a and c; b and c; and a, b and c. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. §112(f), unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for." |
While taking X-Y coordinate measurements to determine the location of a point of contact on a touch screen, a controller circuit drives the touch screen with a selectable voltage. Voltages output from the touch screen are converted by an ADC into the X-coordinate and Y-coordinate values. The ADC has a convertible input voltage range. If only a low touch screen detection resolution is required, then the voltage with which the touch screen is driven is made to be substantially less than the convertible input voltage range. Only a portion of the convertible input range is usable, but this is adequate for the application and power consumption is reduced. If a higher touch screen detection resolution is required, then the touch screen is driven with a higher voltage. Power consumption is increased, but more or all of the convertible input voltage range of the ADC is then usable. |
CLAIMS What is claimed is: 1. An integrated circuit comprising: a first pair of terminals; a second pair of terminals; a control circuit that during a first time period supplies one of a plurality of selectable voltages onto a first of the first pair of terminals and that couples a second of the first pair of terminals to a ground potential, and that during a second time period supplies said one of the plurality of selectable voltages onto a first of the second pair of terminals and that couples a second of the second pair of terminals to the ground potential; and an analog-to-digital converter (ADC) that measures a first voltage present on one of the terminals of the second pair of terminals during the first time period and generates therefrom a first measurement value, and that measures a second voltage present on one of the terminals of the first pair of terminals during the second time period and generates therefrom a second measurement value. 2. The integrated circuit of Claim 1, wherein the integrated circuit is adapted to be coupled to a resistive touch screen such that a current flows, during the first time period, from the control circuit, out of the integrated circuit from first terminal of the first pair, through the touch screen, and back into the integrated circuit via the second terminal of the first pair. 3. The integrated circuit of Claim 1, wherein the control circuit stores touch screen control information (TSCI), wherein which one of the plurality of selectable voltages is supplied during the first time period and the second time period is determined by a value of the TSCI stored in the control circuit. 4. The integrated circuit of Claim 1, wherein the first and second measurement values are not used for calibration purposes. 5. The integrated circuit of Claim 1, wherein the ADC has a convertible input voltage range over which it can generate an output measurement corresponding to a voltage on an input of the ADC, and wherein a voltage difference between said one of the selectable voltages and ground potential is substantially smaller than the convertible input voltage range. 6. The integrated circuit of Claim 1, wherein the ADC is powered from a voltage that is substantially larger than said one of the selectable voltages. 7. The integrated circuit of Claim 1, wherein the control circuit stores touch screen control information (TSCI), wherein if the TSCI has a first value then the control circuit supplies a first amount of current out of the first terminal of the first pair of terminals during the first and second time periods, whereas if the TSCI has a second value then the control circuit supplies a second amount of current out of the first terminal of the first pair of terminals during the first and second time periods. 8. The integrated circuit of Claim 1, wherein the control circuit comprises a programmable voltage source that outputs said one of the plurality of selectable voltages. 9. 
A system for controlling a touch screen, the system comprising: an analog-to-digital converter (ADC) that measures a voltage received from the touch screen during a time period, wherein the ADC has a convertible input voltage range over which it can generate an output measurement corresponding to a voltage on an input of the ADC; and a control circuit that supplies one of a plurality of selectable voltages to the touch screen during the time period, wherein the voltage received by the ADC is due to a current supplied by the control circuit to the touch screen, and wherein said one of the selectable voltages is substantially smaller than the convertible input voltage range. 10. The system of Claim 9, wherein a second of the selectable voltages is substantially of the same magnitude as the convertible input voltage range. 11. A system for controlling a touch screen comprising: an analog-to-digital converter (ADC) that measures a magnitude of a signal received from the touch screen during a time period; and a control circuit that supplies one of a plurality of selectable currents of different magnitudes to the touch screen during the time period, wherein the signal received by the ADC is due to said one current supplied by the control circuit to the touch screen during the time period. 12. The system of Claim 11, wherein the ADC has a convertible input voltage range over which it can generate an output measurement corresponding to a voltage on an input of the ADC, wherein the touch screen has a first terminal and a second terminal, wherein said one selectable current flows into the first terminal of the touch screen and out of the second terminal of the touch screen such that a voltage is present between the first and second terminals, wherein the voltage present between the first and second terminals is substantially smaller than the convertible input voltage range of the ADC. 13. A method comprising: (a) driving a first current into a first terminal of a touch screen, through the touch screen, and out of a second terminal of the touch screen such that a first voltage is present between the first and second terminals; (b) converting a second voltage on a third terminal of the touch screen into a first digital value, wherein the second voltage is present on the third terminal during (a); (c) driving a second current into the first terminal of the touch screen, through the touch screen, and out of the second terminal of the touch screen such that a third voltage is present between the first and second terminals, wherein the third voltage is substantially smaller than the first voltage; and (d) converting a fourth voltage on the third terminal of the touch screen into a second digital value, wherein the fourth voltage is present on the third terminal during (C). 14. The method of Claim 13, wherein the converting of (b) and (c) is performed by an analog-to-digital converter (ADC), wherein the ADC has a convertible inputvoltage range over which it can generate an output measurement corresponding to a voltage supplied to the ADC, and wherein the convertible input voltage range is substantially larger than the third voltage. 15. The method of Claim 13, further comprising: (e) receiving a first multi-bit digital value, wherein the first multi-bit digital value determines the first voltage, wherein (e) occurs before (a); and (f) receiving a second multi-bit digital value, wherein the second multi-bit digital value determines the third voltage, wherein (f) occurs before (c). 16. 
The method of Claim 13, wherein the first current in (a) is a fixed current that is supplied from a current source, and wherein the first voltage is a voltage that results when the first current flows through the touch screen from the first terminal to the second terminal. 17. The method of Claim 13, wherein the first voltage in (a) is a fixed voltage that is supplied from a voltage regulator onto the first terminal of the touch screen when a ground potential is present on the second terminal of the touch screen, and wherein the first current is a current that results when the first voltage is present on the first terminal of the touch screen and when ground potential is on the second terminal of the touch screen. 18. A method comprising: (a) providing a control circuit adapted to drive a selectable current into a first terminal of a touch screen, through the touch screen, and out of a second terminal of the touch screen, wherein a first voltage is present between the first and second terminals when the selectable current is flowing; and (b) providing an analog-to-digital converter (ADC) adapted to measure a second voltage present on a third terminal of the touch screen when the selectable current is flowing, wherein the ADC has a convertible input voltage range over which it can generate an output measurement corresponding to a voltage on an input of the ADC, and wherein the convertible input voltage range is substantially larger than the first voltage. 19. The method of Claim 18, wherein the selectable current is one of a plurality of currents of different magnitudes. 20. A circuit comprising: a first pair of terminals; a second pair of terminals; and means for, during a first measurement time period, driving a first selectable current out of a first terminal of the first pair, through a touch screen, and back into a second terminal of the first pair, and for measuring a first voltage present on a first terminal of the second pair during the flow of the first selectable current, wherein the means is also for, during a second measurement time period, driving a second selectable current out of the first terminal of the first pair, through the touch screen, and back into the second terminal of the first pair, and for measuring a second voltage present on the first terminal of the second pair during the flow of the second selectable current, wherein the second selectable current is substantially smaller than the first selectable current. 21. The circuit of Claim 20, wherein the means is also for receiving a multi-bit digital value, and for using the multi-bit digital value to set a magnitude of a current driven out of the first terminal of the first pair during a measuring of a voltage present on the first terminal of the second pair. 22. The circuit of Claim 20, wherein the first selectable current is driven by supplying a fixed voltage onto the first terminal of the first pair, and wherein the first selectable current is a current that results when the fixed voltage is supplied onto the first terminal of the first pair when the second terminal of the first pair is grounded. 23. The circuit of Claim 20, wherein the first selectable current is a fixed current that is supplied by the means and that is output by the means through the first terminal of the first pair. 24. 
A set of instructions stored on a computer-readable medium, wherein execution of the set of instructions is for: changing a magnitude of a drive current to be driven through a touch screen during a touch screen point of contact location measurement. 25. The set of instructions of Claim 24, wherein the set of instructions is stored in semiconductor memory within a cellular telephone, wherein execution of the instructions by a processor of the cellular telephone causes control information to be communicated across a bus within the cellular telephone to a control circuit, and wherein the control circuit controls the magnitude of the drive current. 26. The set of instructions of Claim 25, wherein the magnitude of the drive current is changed such that a first touch screen point of contact location measurement is made using a first drive current, such that a second touch screen point of contact location measurement is made using a second drive current, wherein the first drive current is substantially greater than the second drive current, and wherein the control information communicated across the bus causes the control circuit to change the magnitude of drive current from the first drive current to the second drive current. |
LOW-POWER TOUCH SCREEN CONTROLLER BACKGROUND INFORMATION Field [0001] The disclosed embodiments relate to touch screens. Background [0002] Many electronic devices such as, for example, cellular telephones have touch screens (sometimes referred to as "touch panels"). By using a touch screen, the display area of the electronic device serves both as a display and also as a user input interface to enable a user to interact with and control the electronic device. [0003] Figure 1 (Prior Art) is a conceptual diagram of one type of touch screen 1. Touch screen 1 involves a first sheet 2 of transparent resistive material and a second sheet 3 of transparent resistive material. These two sheets are disposed over the display of the electronic device so that the display can be seen by the user through the touch screen. A first conductive bus bar 4 is attached to the upper left edge of sheet 2 and a second conductive bus bar 5 is attached to the lower right of sheet 2. Similarly, a third conductive bus bar 6 is attached to the upper right edge of sheet 3 and a fourth conductive bus bar 7 is attached to the lower left edge of sheet 3. When the touch screen is not being touched, the two sheets 2 and 3 do not touch one another. When the touch screen is pressed at a point of contact, the pressure of the touching causes the two sheets 2 and 3 to make electrical contact with one another at the point of contact. Electronics coupled to the touch screen determines an X-coordinate and a Y-coordinate on the touch screen that indicates the point of contact. [0004] Figures 2 and 3 (Prior Art) are conceptual schematic diagrams that illustrate how the touch screen and its associated electronics determine the X-coordinate and the Y-coordinate of the point of contact. Figure 2 is a cross-sectional side view of the touch screen. The upper row of resistors represents the upper sheet 2. The lower row of resistors represents the lower sheet 3. Figure 2 illustrates the touch screen when the user is not touching the screen and the two sheets 2 and 3 are not touching each other. At a first time, a voltage is impressed between YP_UL and XM_LR. The YM_LL end of sheet 3 is made to be an open, and a high input impedance voltage sensor 8 is used to detect a voltage on sheet 3. In the case of Figure 2, the lower sheet 3 does not receive a voltage from upper sheet 2 and this condition is sensed by sensor 8. At a second time, a voltage is impressed between YP_UR and YM_LL. The XM_LR end of sheet 2 is made to be an open, and a high input impedance voltage sensor 9 is used to detect a voltage on upper sheet 2. In the case of Figure 2, upper sheet 2 does not receive a voltage from lower sheet 3 and this condition is sensed by sensor 9. From the voltages detected by sensors 8 and 9 at the first time and second time, the electronics of the touch screen determines that the two sheets 2 and 3 are not touching each other. [0005] Figure 3 (Prior Art) illustrates the touch screen when the user is touching the screen. The two sheets 2 and 3 are therefore touching each other at a point of contact as illustrated. At a first time, a voltage is impressed between YP_UL and XM_LR. The YM_LL end of sheet 3 is made to be an open, and sensor 8 is used to detect a voltage on lower sheet 3. The upper sheet 2 forms a resistive voltage divider with the point of contact being a tap on the voltage divider. There is no current flow through lower sheet 3 due to YM_LL being open and due to sensor 8 being a high input impedance sensor. 
The voltage sensed by sensor 8 is therefore the voltage on the tap of the voltage divider. The magnitude of the sensed voltage therefore indicates the location of the touching between YP_UL and XM_LR. The voltage may be converted into a digital value and this digital value may be considered to be the X-coordinate of the point of contact. Then, at a second time, a voltage is impressed between YP_UR and YM_LL of the lower sheet 3. The XM_LR end of upper sheet 2 is made to be an open, and high input impedance voltage sensor 9 is used to detect a voltage on sheet 2. The lower sheet 3 forms a voltage divider with the point of contact being a tap on the voltage divider. There is no current flow through upper sheet 2, so the voltage sensed by sensor 9 is the voltage on the tap of the voltage divider, and therefore indicates the location of the touching between YP_UR and YM_LL. This voltage may be converted into a digital value and this digital value may be considered to be the Y-coordinate of the point of contact. [0006] Figure 4 (Prior Art) is a simplified diagram of one type of conventional touch screen controller integrated circuit 10. At a first time, control portion 11 causes switches 12 and 13 to close such that a regulated analog supply voltage AVDD is supplied onto terminal 14 and such that terminal 15 is grounded. The voltage AVDD is therefore supplied across sheet 2. Analog multiplexer 16 is controlled such that the voltage on terminal 17 is supplied onto an input of an analog-to-digital converter (ADC) 18. ADC 18 converts the voltage on terminal 17 into a multi-bit digital value usable as the X-coordinate. At a second time, control portion 11 causes switches 19 and 20 to close such that voltage AVDD is supplied onto terminal 17 and such that terminal 21 is grounded. The voltage AVDD is therefore supplied across sheet 3. Analog multiplexer 16 is controlled such that the voltage on terminal 14 is supplied onto the input of ADC 18. ADC 18 converts the voltage on terminal 14 into a multi-bit digital value usable as the Y-coordinate. Battery voltage VBATT between terminals 22 and 23 is regulated to generate the analog supply voltage AVDD. Rather than there being two sensors 8 and 9 as illustrated in the conceptual diagrams of Figures 2 and 3, the functions of sensors 8 and 9 are performed by multiplexer 16 and ADC 18 in Figure 4. [0007] The touch screen is usable in different situations where different amounts of precision of detecting the point of contact are required. For example, large selectable icons may be displayed on the screen. If this is the case, then the detection of the point of contact need not be very precise in order for the electronics of the cellular telephone to determine that a particular large icon is being pressed. In such a situation, ADC 18 can be controlled via bus 24 and register 25 to operate as a lower resolution ADC that outputs multi-bit digital values of a smaller number of bits. If, however, the screen is to be used to detect the selection of very small icons or to detect a user writing on the screen (a user may, for example, use a fine tip stylus to write on the screen), then the detection of the point of contact should be more precise. In this situation, ADC 18 may be controlled to operate as a higher resolution ADC that outputs multi-bit digital values of a larger number of bits. Touch screen control circuitry such as that illustrated in Figure 4 is sometimes embodied in digital baseband integrated circuits within cellular telephones. 
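Before turning to the summary, the ratio arithmetic behind the divider measurement may be easier to see in code. The short sketch below is illustrative only; it assumes a 12-bit converter whose reference equals the voltage driving the sheet, which is the Figure 4 arrangement in which AVDD both drives the sheet and powers ADC 18.

```c
#include <stdint.h>

/* Illustrative ratio arithmetic: the tap voltage is (position along the
 * driven sheet / sheet length) times the drive voltage, so the coordinate
 * is simply the ADC code rescaled to the axis size.  A 12-bit converter
 * referenced to the drive voltage is assumed. */

#define ADC_FULL_SCALE 4095u

static uint32_t code_to_coordinate(uint32_t adc_code, uint32_t axis_pixels)
{
    return (adc_code * axis_pixels) / ADC_FULL_SCALE;
}

/* Example: a code of 1024 on a 320-pixel axis maps to pixel 80, i.e.
 * one quarter of the way along the driven sheet. */
```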
SUMMARY [0008] During the taking of X-coordinate and Y-coordinate measurements to determine the location of a point of contact on a touch screen, a novel touch screen controller circuit drives the touch screen with a selectable one of a plurality of voltages. In one example, voltages output from the touch screen are converted by an analog-to-digital converter (ADC) into multi-bit digital values that are the X-coordinate and Y-coordinate measurement values. The ADC has a convertible input voltage range over which it can generate an output measurement corresponding to a voltage on the ADC input.[0009] If only a low touch screen detection resolution is required, then the voltage with which the touch screen is driven is made to be substantially less than the convertible input voltage range of the ADC. The voltage measured can only range over a part of the convertible input voltage range of the ADC, but using only part of the ADC convertible input voltage range is acceptable due to the low touch screen detection resolution required. Driving the touch screen with the reduced voltage advantageously reduces power consumption in such low touch screen detection resolution situations. [0010] If a higher touch screen detection resolution is required, then the novel touch screen controller circuit drives the touch screen with one of the selectable voltages that is a higher voltage. An example of a situation in which higher touch screen detection resolution might be required is a situation in which the user is writing on the screen using a fine-tipped stylus and in which electronics of a mobile communication device of which the touch screen is a part is attempting to decipher the user's handwriting. In such a situation where one of the high selectable voltages is used to drive the touch screen, more or all of the convertible input voltage range of the ADC is usable to detect and convert a voltage output from the touch screen into a measurement value. More power is consumed, but increased touch screen detection resolution is realized. [0011] Whereas in some embodiments the novel touch screen controller circuit drives the touch screen with a selected one of a plurality of selectable fixed voltages, in other embodiments the novel touch screen controller circuit drives the touch screen with a selected one of a plurality of selectable fixed currents. In some embodiments, the resolution of the ADC is programmable across a bus, as is the voltage/current with which the touch screen is driven during a point of contact measurement. The novel touch screen controller circuit is controlled to drive the touch screen with the lowest voltage/current that still results in adequate touch screen detection resolution for the particular measurement being taken. [0012] The foregoing is a summary and thus contains, by necessity, simplifications, generalizations and omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and does not purport to be limiting in any way. Other aspects, inventive features, and advantages of the devices and/or processes described herein, as defined solely by the claims, will become apparent in the non-limiting detailed description set forth herein.BRIEF DESCRIPTION OF THE DRAWINGS [0013] Figure 1 (Prior Art) is a conceptual diagram of a conventional resistive touch screen. [0014] Figure 2 (Prior Art) is a conceptual diagram of a cross-section of a conventional resistive touch screen in a "no-touch condition". 
[0015] Figure 3 (Prior Art) is a conceptual diagram of a cross-section of a conventional resistive touch screen in a "touch condition". [0016] Figure 4 (Prior Art) is a diagram of a conventional touch screen controller circuit. [0017] Figure 5 is a diagram of a mobile communication device in accordance with one novel aspect. [0018] Figure 6 is a simplified block diagram of the mobile communication device of Figure 5. [0019] Figure 7 is a conceptual diagram of the resistive touch screen of Figure 6. [0020] Figure 8 is a conceptual diagram of a cross-section of the touch screen of Figure 6 in a "no-touch condition". [0021] Figure 9 is a conceptual diagram of a cross-section of the touch screen of Figure 6 in a "touch condition". [0022] Figure 10 is a block diagram of the touch screen controller circuit of Figure 6. [0023] Figure 11 is a table that sets forth multiple different operating modes of the touch screen controller circuit of Figure 10. [0024] Figure 12A is a block diagram of a first example of the power supply portion of Figure 10. [0025] Figure 12B is a block diagram of a second example of the power supply portion of Figure 10. [0026] Figure 13 is a flowchart of a method in accordance with one novel aspect. DETAILED DESCRIPTION [0027] Figure 5 is a view of an electronic device 100 that includes a resistive touch screen 101. Touch screen 101 is disposed over a display 102 of the device such that the display is viewable through the touch screen. In the example of Figure 5, the electronic device is a cellular telephone. A user of electronic device 100 can use a finger to select one of a plurality of selectable icons that appear on the screen. Icon 103 is one such icon. When the user's finger presses on the portion of the touch screen disposed over icon 103, a touch screen controller circuit 113 within electronic device 100 detects the pressing andoutputs information indicative of the X-coordinate and the Y-coordinate of the point of contact on the touch screen. [0028] Figure 6 is a simplified diagram of cellular telephone 100 of Figure 5. Cellular telephone 100 includes (among several other parts not illustrated) an antenna 104, two integrated circuits 105 and 106, and touch screen 101. A display such as an LCD display (not shown) is disposed behind touch screen 101. Integrated circuit 106 is called a "digital baseband integrated circuit" or a "baseband processor integrated circuit". Integrated circuit 105 is an RF transceiver integrated circuit. RF transceiver integrated circuit 105 is called a "transceiver" because it includes a transmitter as well as a receiver. When the cellular telephone is receiving, a high frequency RF signal 108 is received on antenna 104. Information from signal 108 passes through a receive chain in transceiver integrated circuit 105 and is digitized by an analog-to-digital converter (ADC) 109 in the digital baseband integrated circuit 106. The resulting digital information is processed by a digital processor 110 in the digital baseband integrated circuit 106. If the cellular telephone is transmitting, then information to be transmitted is converted into analog form by a digital-to-analog converter 111 in the digital baseband integrated circuit 106. The analog information passes through a transmit chain in transceiver integrated circuit 105, is amplified by a power amplifier, and is supplied onto antenna 104 so that it is transmitted from antenna 104 as a high frequency RF signal 112. 
Processor 110 fetches and executes a set of processor-executable instructions 114 stored in or on a processor-readable medium 115 across bus 116. In this case, the processor-readable medium is a semiconductor memory. [0029] Figure 7 is a more detailed diagram of touch screen 101. Touch screen 101 is a conventional resistive touch screen as set forth in the background section of this patent document. Touch screen 101 involves a first sheet 117 of transparent resistive material and a second sheet 118 of transparent resistive material. These two sheets are disposed over display 102 of the electronic device so that display 102 can be seen by the user through the touch screen. A first conductive bus bar 119 is attached to the upper left edge of sheet 117 and a second conductive bus bar 120 is attached to the lower right of sheet 117. Similarly, a third conductive bus bar 121 is attached to the upper right edge of sheet 118 and a fourth conductive bus bar 122 is attached to the lower left edge of sheet 118. When the touch screen is not being touched, the two sheets 117 and 118 do not touch one another. When the touch screen is pressed at a point of contact, the pressure of the touching causes the two sheets 117 and 118 to make electrical contact with one another at the point of contact. Electronics coupled to the bus bars of the touch screen determines an X-coordinate and a Y-coordinate on the touch screen that indicates the point of contact. [0030] Figures 8 and 9 are conceptual schematic diagrams that illustrate how the touch screen and its associated electronics determine the X-coordinate and the Y-coordinate of the point of contact. Figure 8 is a cross-sectional side view of touch screen 101. The upper row of resistors represents the upper sheet 117. The lower row of resistors represents the lower sheet 118. Figure 8 illustrates the touch screen when the user is not touching the screen and the two sheets 117 and 118 are not touching each other. At a first time, a voltage is impressed between YP_UL and XM_LR. The YM_LL end of sheet 118 is made to be an open, and the voltage at YP_UR is detected. In the case of Figure 8, the lower sheet 118 does not receive a voltage from upper sheet 117 and the voltage detected is indicative of this non-touching condition. At a second time, a voltage is impressed between YP_UR and YM_LL. The XM_LR end of upper sheet 117 is made to be an open, and the voltage at YP_UL is detected. In the case of Figure 8, upper sheet 117 does not receive a voltage from lower sheet 118 and the voltage detected is indicative of this non-touching condition. From the voltages detected at the first time and second time, the electronics of the touch screen determines that the two sheets 117 and 118 are not touching each other. [0031] Figure 9 illustrates touch screen 101 when the user is touching the screen. The two sheets 117 and 118 are therefore touching each other at a point of contact as illustrated. At a first time, a voltage is impressed between YP_UL and XM_LR. The YM_LL end of sheet 118 is made to be an open, and the voltage at YP_UR is used to detect a voltage on lower sheet 118. The upper sheet 117 forms a resistive voltage divider with the point of contact being a tap on the voltage divider. There is no current flow through lower sheet 118 due to YM_LL being open and due to the circuit that detects the voltage having a high input impedance. The voltage detected is therefore the voltage on the tap of the voltage divider. 
The magnitude of the detected voltage therefore indicates the location of the touching between YP_UL and XM_LR. The voltage may be converted into a digital value and this digital value may be considered to be the X-coordinate of the point of contact. Then, at a second time, a voltage is impressed between YP_UR and YM_LL of the lower sheet 118. The XM_LR end of upper sheet 117 is made to be an open, and the voltage on YP_UL is detected using a detection circuit that has a high input impedance. The lower sheet 118 forms a voltage divider with the point of contact being a tap on the voltage divider. There is no current flow through upper sheet 117, so the voltage detected at YP_UL is the voltage on the tap of the voltage divider, and therefore indicates the location of the touching between YP_UR and YM_LL. This voltage may be converted into a digital value and this digital value may be considered to be the Y-coordinate of the point of contact. [0032] Figure 10 is a simplified diagram of touch screen controller circuit 113 and touch screen 101 of Figure 6. Touch screen controller circuit 113 includes a control circuit 123 and an analog-to-digital converter 124. Touch screen controller circuit 113 and ADC 124 are coupled to processor 110 of Figure 6 via bus 116. Touch screen controller circuit 113 is coupled to touch screen 101 via a first pair of terminals 125 and 126 and a second pair of terminals 127 and 128. Control circuit 123 includes control logic 129, four switches 130-133, a power supply portion 134, and an analog multiplexer 135. The four switches may, for example, be realized as field effect transistors (FETs). Analog multiplexer 135 may, for example, be realized as a multiplexer of transmission gates of field effect transistors. Touch screen 101 has four terminals 136-139. Circular symbol 140 represents a point of contact between sheets 117 and 118. The point of contact 140 is established, for example, when the finger of a user of cellular telephone 100 presses on touch screen 101. When the touch screen is not being touched, the resistance across each of the sheets is generally in the range of 200 ohms to 2k ohms. It is fixed for a given individual touch screen, but the resistance varies from touch screen to touch screen due to manufacturing variations. [0033] In one embodiment, control logic 129 sets a regulated voltage on node 141 by supplying an appropriate multi-bit digital value (voltage set value) to power supply portion 134 via lines 142. Power supply portion 134 is coupled to a battery via terminals 146 and 147 and acts as a programmable voltage source. Depending on the value of the multi-bit voltage set value, the voltage on node 141 is set to a selected one of a number of selectable voltages (for example, 2.6 volts, 1.3 volts, 0.65 volts, and 0.1625 volts). In another embodiment, power supply portion 134 acts as a programmable current source. The magnitude of the current sourced onto node 141 is set by control logic 129. Depending on the value of the multi-bit voltage set value on lines 142, the current supplied onto node 141 is set to have a magnitude of a selected one of a number of selectable currents. Regardless of whether power supply portion 134 is acting as a programmable voltage source or a programmable current source for powering node 141, power supply portion 134 always operates to output regulated analog supply voltage AVDD. This analog supply voltage AVDD is used to power other portions of the circuit. 
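The voltage set value on lines 142 can be illustrated with a small lookup sketch. The 2-bit encoding below is an assumption made for the example; the disclosure only lists the selectable voltages themselves, not how they are encoded.

```c
#include <stdint.h>

/* Illustrative sketch: a 2-bit voltage set value selecting one of the
 * four example voltages for node 141.  The encoding is assumed. */

static const double node141_volts[4] = { 2.6, 1.3, 0.65, 0.1625 };

static double node141_voltage_from_set_value(uint8_t set_value)
{
    return node141_volts[set_value & 0x3u];
}
```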
[0034] In the presently described operational example, power supply portion 134 drives node 141 as a programmable voltage source. VBATT received from an external battery (not shown) is approximately 2.6 volts. Analog supply voltage AVDD is 2.6 volts. Execution of the set of processor-executable instructions 114 causes processor 110 to communicate touch screen control information (TSCI) across bus 116 and into register 143 in control logic 129. TSCI determines which one of the numerous different selectable voltages is supplied by power supply portion 134 onto node 141. Initially, processor 110 determines that the required touch screen detection resolution is relatively relaxed because the user-selectable icons displayed on touch screen 101 are relatively large. The TSCI value therefore causes a first relatively low regulated voltage to be supplied onto node 141. This voltage is less than the analog supply voltage AVDD of 2.6 volts that is used to power ADC 124. In the present example, the first relatively low regulated voltage is 1.3 volts. ADC 124, however, has a convertible input voltage range over which it can generate an output measurement of substantially the entire zero to 2.6 volt range of AVDD. The terminology "can generate" used here means that if a voltage anywhere in the range of from zero volts to 2.6 volts were to be present on the input lead IN of the ADC portion of ADC 124 of Figure 10, then the ADC portion would convert that input voltage into a corresponding multi-bit digital output value. The terminology "can generate" does not necessarily mean that a voltage anywhere in the range of from zero volts to 2.6 volts can be present on the ADC input lead IN given the way the remainder of the touch screen controller circuit 113 is configured or used. [0035] During a first time period, control circuit 123 causes a first pair of switches 130 and 131 to be closed. Second pair of switches 132 and 133 are open. The 1.3 volt voltage on node 141 is therefore coupled via switch 130 and terminal 125 to terminal 136 of the touch screen 101. Similarly, ground potential is coupled through switch 131 and terminal 126 onto terminal 137 of touch screen 101. Analog multiplexer 135 is controlled such that the voltage on terminal 127 is coupled onto the high input impedance input of ADC 124. Switch 133 is open. If the touch screen is being pressed, then the voltage at the point of contact 140 is coupled through terminal 138, terminal 127 and multiplexer 135 to ADC 124. ADC 124 converts the voltage into a corresponding first multi-bit digital value. This first value may, for example, be considered to be an X-coordinate value indicative of the location of the point of contact. The value is loaded into register 144 and is read by processor 110 across bus 116. Because the magnitude of the voltage impressed across YP_UL and XM_LR is 1.3 volts, the maximum voltage that can be detected by ADC 124 is 1.3 volts. The entire upper half of ADC steps is not used, despite the fact that ADC 124 is powered by 2.6 volt AVDD. If, for example, ADC 124 is set to have a 12-bit resolution, then the upper half of ADC step values of 2048 to 4095 are not used. The ADC 124 can only output measurement values in the range of from zero to 2047. [0036] Next, during a second time period, control circuit 123 causes second pair of switches 132 and 133 to be closed. First pair of switches 130 and 131 is open. The 1.3 volt voltage on node 141 is therefore coupled via switch 132 and terminal 127 to terminal 138 of the touch screen 101. 
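The code-range numbers given in paragraph [0035] follow from simple ratio arithmetic, sketched below. The helper name and the millivolt units are choices made for the example and assume a drive voltage that is nonzero and no greater than AVDD.

```c
#include <stdint.h>

/* Worked numbers for paragraph [0035]: with a 12-bit converter referenced
 * to the 2.6 volt AVDD, a 1.3 volt drive uses only the lower half of the
 * code range.  This is plain ratio arithmetic, not an interface defined
 * by the disclosure. */

#define ADC_BITS  12u
#define ADC_CODES (1u << ADC_BITS)   /* 4096 steps          */
#define AVDD_MV   2600u              /* convertible range   */

static uint32_t max_usable_code(uint32_t drive_mv)
{
    /* 1300 mV drive -> 2047; 2600 mV drive -> 4095 */
    return (drive_mv * ADC_CODES) / AVDD_MV - 1u;
}
```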
Similarly, ground potential is coupled through switch 133 and terminal 128 onto terminal 139 of touch screen 101. Analog multiplexer 135 is controlled such that the voltage on terminal 125 is coupled onto the high input impedance input of ADC 124. Switch 131 is open. If the touch screen is being pressed, then the voltage at the point of contact 140 is coupled through terminal 136, terminal 125 and multiplexer 135 to ADC 124. ADC 124 converts the voltage into a corresponding second multi-bit digital value. This second value may, for example, be considered to be a Y-coordinate value indicative of the location of the point of contact. The value is loaded into register 144 and is read by processor 110 across bus 116. The X-coordinate and the Y-coordinate values are indicative of the location of point of contact 140. Rather than supplying touch screen 101 with the full 2.6 volt AVDD voltage, touch screen 101 is advantageously supplied with the substantially smaller 1.3 volt voltage. The corresponding amount of current driven through the touch screen 101 during the measurements is therefore substantially reduced, thereby reducing power consumption of the overall circuit during low touch screen resolution measurements. [0037] Next, in accordance with the presently described operational example, processor 110 determines that the required touch screen detection resolution is relatively high because the user-selectable icons displayed on touch screen 101 are relatively small or because the user is to be using a stylus with a fine point to write on the touch screen. This may,for example, be a situation in which handwriting recognition is occurring. Processor 110 therefore writes a TSCI value across bus 116 into register 143 that causes a second relatively large regulated voltage to be supplied onto node 141. In the present example, this voltage is 2.6 volts. ADC 124 continues to be powered from the 2.6 volt AVDD and has the same convertible input voltage range of from zero to 2.6 volts. [0038] During a first time period, control circuit 123 causes a first pair of switches 130 and 131 to be closed. Second pair of switches 132 and 133 are open. The 2.6 volt voltage on node 141 is therefore coupled via switch 130 and terminal 125 to terminal 136 of the touch screen 101. Similarly, ground potential is coupled through switch 131 and terminal 126 onto terminal 137 of touch screen 101. Analog multiplexer 135 is controlled such that the voltage on terminal 127 is coupled onto the high input impedance input of ADC 124. Switch 133 is open. If the touch screen is being pressed, then the voltage at the point of contact 140 is coupled through terminal 138, terminal 127 and multiplexer 135 to ADC 124. ADC 124 converts the voltage into a corresponding first multi-bit digital value. This first value may, for example, be considered to be an X-coordinate value indicative of the location of the point of contact. The value is loaded into register 144 and is read by processor 110 across bus 116. The magnitude of the voltage impressed across YP_UL and XM_LR is 2.6 volts so that the full 2.6 volt ADC convertible input voltage range of ADC 124 is usable. If, for example, ADC 124 is set to have a 12-bit resolution, then all of the ADC step values of zero to 4095 are used. [0039] Next, during a second time period, control circuit 123 causes second pair of switches 132 and 133 to be closed. First pair of switches 130 and 131 are open. 
The 2.6 volt voltage on node 141 is therefore coupled via switch 132 and terminal 127 to terminal 138 of the touch screen 101. Similarly, ground potential is coupled through switch 133 and terminal 128 onto terminal 139 of touch screen 101. Analog multiplexer 135 is controlled such that the voltage on terminal 125 is coupled onto the high input impedance input of ADC 124. Switch 131 is open. If the touch screen is being pressed, then the voltage at the point of contact 140 is coupled through terminal 136, terminal 125 and multiplexer 135 to ADC 124. ADC 124 converts the voltage into a corresponding second multi-bit digital value. This second value may, for example, be considered to be a Y-coordinate value indicative of the location of the point of contact. The value is loaded into register 144 and is read by processor 110 across bus 116. TheX-coordinate and the Y-coordinate values are indicative of the location of point of contact 140. Touch screen 101 is supplied with the full 2.6 volt AVDD voltage to support the required high touch screen measurement resolution. More power is consumed than in the low touch screen resolution mode described above, but a higher touch screen resolution measurements can be made. [0040] Figure 11 is a table that sets forth various examples of the operation of the touch screen controller circuit 113 of Figure 10. Processor 110 controls the touch screen current by writing into register 143 via bus 116. Processor 110 controls ADC resolution by writing into register 145 via bus 116. [0041] Figure 12A is a simplified diagram of power supply portion 134 in the embodiment in which control logic 129 controls power supply portion 134 to drive node 141 with a selectable one of a plurality of different regulated voltages. [0042] Figure 12B is a simplified diagram of power supply portion 134 in the embodiment in which control logic 129 controls power supply portion 134 to supply a selectable one of a plurality of different fixed currents onto node 141. [0043] Figure 13 is a simplified flowchart of a novel method 200 in accordance with one operational example. In a first step (step 201), a first current is driven through the touch screen from a first terminal to a second terminal such that a first voltage is present between the first and second terminals. In one example, the first terminal is terminal 125 and the second terminal is terminal 126 and the first voltage is 2.6 volts. [0044] Next (step 202), an ADC converts a second voltage on a third terminal into a first digital value. In one example, the third terminal is terminal 127. Steps 201 and 202 carry out a first touch screen point of contact measurement (an X-coordinate measurement) conducted with a relatively high power. Although not illustrated in steps in the flowchart of Figure 13, a similar Y-coordinate measurement is carried out with the relatively high power. [0045] Next (step 203), touch screen control information (TSCI) is received. This TSCI indicates that less touch screen detection resolution is required. In one example, this TSCI is written by processor 110 into register 143. [0046] Next (step 204), a second current is driven through the touch screen from the first terminal to the second terminal such that a third voltage is present between the first and second terminals. In one example, the first terminal is terminal 125 and the second terminal is terminal 126 and the third voltage is 1.3 volts. 
The third voltage (for example, 1.3 volts) is substantially smaller than the first voltage (for example, 2.6 volts). [0047] Next (step 205), the ADC converts a fourth voltage on the third terminal into a second digital value. In one example, the third terminal is terminal 127. Steps 204 and 205 carry out a second touch screen point of contact measurement (an X-coordinate measurement) conducted with a relatively low power. Although not illustrated as steps in the flowchart of Figure 13, a similar Y-coordinate measurement is carried out with the relatively low power. The ADC has a convertible input voltage range (for example, zero to 2.6 volts) that is substantially larger than the third voltage (for example, 1.3 volts). The lower power measurement of steps 204 and 205 has a lower touch screen detection resolution, but this lower touch screen detection resolution is adequate in certain situations, and in these situations power consumption is reduced by using the lowest acceptable power consumption setting for the point of contact measurements to be performed. The first and second touch screen measurements are not just measurements made during a calibration process, but rather are touch screen measurements made during normal touch screen operation to receive user input into the mobile communication device 100. [0048] In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium (sometimes referred to as a processor-readable medium). Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Although certain specific embodiments are described above for instructional purposes, the teachings of this patent document have general applicability and are not limited to the specific embodiments described above. In other embodiments, an amplifier having a controllable gain is disposed between multiplexer 135 and ADC 124. 
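Before turning to that variant, the resolution/power tradeoff can be made concrete with a short sketch. The numbers and the helper name below are illustrative only (they assume an ideally resistive panel driven from the selected voltage and a 12-bit ADC referenced to the 2.6 volt AVDD); they are not taken from any particular part.

```python
# Sketch only: illustrative numbers, not taken from any datasheet.
ADC_BITS = 12
AVDD = 2.6                            # ADC reference and high drive voltage, in volts
FULL_SCALE = (1 << ADC_BITS) - 1      # 4095 codes above zero

def usable_codes(drive_voltage, amp_gain=1.0):
    """Highest ADC code reachable when the panel is driven from drive_voltage
    and the tapped contact voltage is amplified by amp_gain before the ADC."""
    max_input = min(drive_voltage * amp_gain, AVDD)
    return int(FULL_SCALE * max_input / AVDD)

print(usable_codes(2.6))                 # 4095 -> full 12-bit resolution, higher power
print(usable_codes(1.3))                 # 2047 -> roughly one bit of resolution lost
print(usable_codes(1.3, amp_gain=2.0))   # 4095 -> low drive power, gain restores the range
```

Driving the panel at 1.3 volts halves the panel current and, for an ideally resistive panel, reduces the drive power to roughly a quarter, while leaving about eleven effective bits, which matches the tradeoff described above; the controllable-gain embodiment discussed next recovers the full code range at the cost of the added gain stage.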
A relatively low voltage/current is used to drive the touch screen, but the relatively low voltage output to be measured is amplified by the controllable gain amplifier prior to the signal being supplied to ADC 124. The gain of the amplifier is determined by the TSCI value written into control logic 129. Although the ADC is always powered by AVDD in the embodiments set forth above, in other embodiments the supply voltage that powers the ADC is reduced along with the voltage that powers the touch screen. In some examples, the processor 110 determines that battery power is undesirably low and the processor 110 in response reduces touch screen detection resolution in order to reduce battery power consumption and to extend battery life. Using a reduced touch screen resolution may prevent the touch screen from being usable in certain ways such as, for example, for handwriting recognition, but this is acceptable if using the reduced touch screen detection resolution will extend battery life and allow the mobile communication device to perform other more essential tasks for a longer period of time before the battery is totally discharged. Accordingly, various modifications, adaptations, and combinations of the various features of the described specific embodiments can be practiced without departing from the scope of the claims that are set forth below. |
Aspects of the disclosure are directed to sequencing. In accordance with one aspect, sequencing includes creating a one-hot list (220); selecting a current word of the one-hot list as a one-hot list output (231); comparing the one-hot list output with a current accumulation register value of an accumulation register (240) to produce a logical comparison; inputting the logical comparison to the accumulation register (240) to generate an updated accumulation register value; and outputting the updated accumulation register state to a client unit (280) to enable or disable the client unit. |
1. A permutation sequencer, comprising: a one-hot list, the one-hot list including a one-hot list output; XOR logic coupled to the one-hot list, the XOR logic including a first XOR input and a second XOR input; and an accumulation register coupled to the XOR logic, the accumulation register including an accumulation register output; wherein the one-hot list output is coupled to the first XOR input, and the accumulation register output is coupled to the second XOR input.
2. The permutation sequencer of claim 1, wherein the accumulation register output is coupled to one or more client units.
3. The permutation sequencer of claim 1, further comprising a read pointer for addressing the one-hot list.
4. The permutation sequencer of claim 3, further comprising a power good register coupled to the accumulation register, the power good register used to implement a single-bit interface to the accumulation register.
5. The permutation sequencer of claim 4, wherein the power good register stores an abstract representation of the actual client unit activation state of one or more client units.
6. The permutation sequencer of claim 4, wherein the contents of the accumulation register and the power good register are compared to generate an acknowledgement or confirmation of the actual sequence state of one or more client units.
7. The permutation sequencer of claim 6, wherein the power good register is an emulation register.
8. The permutation sequencer of claim 6, further comprising a logic module coupled to the power good register.
9. The permutation sequencer of claim 8, wherein the logic module generates a sequence of bits for input to the power good register.
10. The permutation sequencer of claim 9, wherein the bit sequence represents the actual sequence state of the one or more client units.
11. The permutation sequencer of claim 2, wherein the number of the one or more client units is N, and the one-hot list includes N words, each of the N words having a word length equal to N bits.
12. The permutation sequencer of claim 2, wherein the one-hot list includes a one-hot encoding list and a shift register decoder, the shift register decoder being coupled to the one-hot encoding list.
13. The permutation sequencer of claim 12, wherein the number of the one or more client units is N, and the one-hot encoding list includes N code words, wherein each of the N code words has a word length less than N bits.
14. The permutation sequencer of claim 13, wherein each of the N code words is encoded using binary encoding to reduce the number of bits per code word.
15. A method for sequencing, comprising: selecting a current word of a one-hot list as a one-hot list output; comparing the one-hot list output with a current accumulation register value of an accumulation register to produce a first logical comparison; and inputting the first logical comparison to the accumulation register to generate an updated accumulation register value.
16. The method of claim 15, further comprising outputting the updated accumulation register state to one client unit of a plurality of client units to enable or disable the one client unit.
17. The method of claim 16, wherein the number of the plurality of client units is N.
18. The method of claim 17, further comprising creating the one-hot list, wherein the one-hot list includes N words.
19. The method of claim 18, wherein each of the N words has an N-bit word length.
20. The method of claim 18, wherein each of the N words has a word length less than N bits.
21. The method of claim 20, further comprising encoding each of the N words.
22. The method of claim 21, wherein the encoding is binary encoding.
23. The method of claim 16, further comprising generating a second logical comparison between the contents of the accumulation register and the contents of a power good register.
24. The method of claim 23, wherein outputting the updated accumulation register state to the one client unit is based on the second logical comparison.
25. An apparatus for sequencing, the apparatus comprising: means for creating a one-hot list; means for selecting a current word of the one-hot list as a one-hot list output; means for comparing the one-hot list output with a current accumulation register value of an accumulation register to produce a first logical comparison; means for inputting the first logical comparison to the accumulation register to generate an updated accumulation register value; and means for outputting the updated accumulation register state to one client unit of a plurality of client units to enable or disable the one client unit.
26. The apparatus of claim 25, wherein the number of the plurality of client units is N, and wherein the one-hot list includes N words.
27. The apparatus of claim 26, wherein each of the N words has an N-bit word length.
28. The apparatus of claim 26, wherein each of the N words has a word length less than N bits, and each word is binary encoded.
29. A computer-readable medium storing computer-executable code operable on a device comprising at least one processor and at least one memory coupled to the at least one processor, wherein the at least one processor is configured to implement sequencing, the computer-executable code comprising: instructions for causing a computer to select a current word of a one-hot list as a one-hot list output; instructions for causing the computer to compare the one-hot list output with a current accumulation register value of an accumulation register to produce a first logical comparison; and instructions for causing the computer to input the first logical comparison to the accumulation register to generate an updated accumulation register value.
30. The computer-readable medium of claim 29, further comprising instructions for causing the computer to output the updated accumulation register state to one client unit of a plurality of client units to enable or disable the one client unit. |
Apparatus and Method for a Permutation Sequencer
Cross-Reference to Related Applications
This patent application claims priority to U.S. Provisional Application No. 15/707,689, entitled "APPARATUS AND METHOD FOR A PERMUTATION SEQUENCER", filed on September 18, 2017, assigned to the assignee hereof, and expressly incorporated herein by reference as if fully set forth below and for all applicable purposes.
Technical Field
The present disclosure relates generally to the field of sequencers, and in particular to permutation sequencers.
Background
A sequencer is a controller for sequentially enabling or disabling multiple client units. That is, each client unit is powered on one at a time in a specific order. For example, the client units may be power supplies in a system in which each power supply is turned on in a power-on sequence, that is, in the specific order in which the client units are activated (i.e., in a permutation). Generally, the power-off sequence is the inverse of the power-on sequence. The power-on sequence may include the transmission of an enable signal from the sequencer to each client unit and the receipt of an acknowledgement (e.g., a power good signal) from each client unit by the sequencer. In addition, the power-on sequence should maintain state, that is, when other client units are sequentially activated, the previously activated client units should remain activated.
In conventional designs, the sequencer can be hard-coded with specific logic to achieve a power-up sequence for a defined number of client units. Hard coding (for example, in a programmable logic device, PLD) means that it may be difficult to change the power-on sequence when the architecture changes over time. However, a design may require increased flexibility in the operation of the sequencer by changing the permutation when client units are added, subtracted, or swapped into the system. Therefore, what is desired is a more general and flexible sequencer architecture that is independent of the specific permutation required.
Summary
The following presents a simplified summary of one or more aspects of the present disclosure in order to provide a basic understanding of these aspects. This summary is not an extensive overview of all the anticipated features of this disclosure, and is neither intended to identify key or important elements of all aspects of this disclosure nor to define the scope of any or all aspects of this disclosure. Its sole purpose is to present some concepts of one or more aspects of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.
In one aspect, the present disclosure provides a permutation sequencer. Accordingly, the permutation sequencer may include: a one-hot list, where the one-hot list includes a one-hot list output; XOR logic coupled to the one-hot list, where the XOR logic includes a first XOR input and a second XOR input; and an accumulation register coupled to the XOR logic, where the accumulation register includes an accumulation register output; and wherein the one-hot list output is coupled to the first XOR input, and the accumulation register output is coupled to the second XOR input. In one example, the accumulation register output is coupled to one or more client units. In one example, the permutation sequencer also includes a read pointer for addressing the one-hot list.
In one example, the permutation sequencer also includes a power good register coupled to the accumulation register. 
The power good register is used to implement a single-bit interface to the accumulation register. The power good register stores an abstract representation of the actual client unit activation state of one or more client units. In one example, the contents of the accumulation register and the power good register are compared to generate an acknowledgement or confirmation of the actual sequence state of one or more client units. In one example, the power good register is an emulation register. In one example, the permutation sequencer includes a logic module coupled to the power good register. The logic module can generate a sequence of bits for input to the power good register. In addition, the bit sequence may represent the actual sequence state of one or more client units. In one example, the number of the one or more client units is N, and the one-hot list includes N words, where each of the N words has a word length equal to N bits. In one example, the one-hot list includes a one-hot encoding list and a shift register decoder, and the shift register decoder is coupled to the one-hot encoding list. The number of the one or more client units is N, and the one-hot encoding list includes N code words, where each of the N code words has a word length less than N bits. In addition, for example, each of the N code words is encoded using binary encoding to reduce the number of bits per code word. Another aspect of the present disclosure provides a method for sequencing, including: selecting the current word of the one-hot list as the one-hot list output; comparing the one-hot list output with the current accumulation register value of the accumulation register to generate a first logical comparison; and inputting the first logical comparison to the accumulation register to generate an updated accumulation register value. The method for sequencing may further include outputting the updated accumulation register state to one client unit of the plurality of client units to enable or disable the one client unit. In one example, the number of the plurality of client units is N. The method for sequencing may further include creating the one-hot list, where the one-hot list includes N words. In one example, each of the N words has an N-bit word length. In another example, each of the N words has a word length less than N bits. The method for sequencing may also include encoding each of the N words. Furthermore, in one example, the encoding is binary encoding. In one example, the method for sequencing may further include generating a second logical comparison between the contents of the accumulation register and the contents of the power good register. 
In addition, in one example, outputting the updated accumulation register state to a client unit is based on a second logical comparison.Another aspect of the present disclosure provides an apparatus for sequencing, the apparatus includes: a component for creating a unique hot list; a component for selecting the current word of the unique hot list as a unique hot list output; The hot list output is compared with the current accumulation register value of the accumulation register to produce a first logical comparison; the means for inputting the first logical comparison into the accumulation register to generate an updated accumulation register value; and the means for updating the accumulated The register status is output to one client unit of the multiple client units to enable or disable components of one client unit. In one example, the number of multiple client units is N number, and wherein the one-hot list includes N number of words. In one example, each of the N number of words has an N-bit word length. In another example, each of the N number of words has a word length less than N bits, and each word is binary coded.Another aspect of the present disclosure provides a computer-readable medium storing computer-executable code operable on a device including at least one processor and at least one memory, the at least one memory coupled to the at least one processor , Where at least one processor is configured to implement sequencing, the computer executable code includes: an instruction for causing the computer to select the current word of the unique hot list as the unique hot list output; Instructions for comparing the current accumulation register value of the register to produce a first logical comparison; and instructions for causing the computer to input the first logical comparison into the accumulation register to generate an updated accumulation register value. In one example, the computer-readable medium also includes instructions for causing the computer to output the updated accumulation register state to one of the client units to enable or disable the one client unit.These and other aspects of the invention will be more fully understood after reviewing the detailed description that follows. Other aspects, features, and embodiments of the present invention will become apparent to those skilled in the art upon reviewing the following description of specific exemplary embodiments of the present invention in conjunction with the accompanying drawings. Although features of the invention may be discussed with respect to certain embodiments below and the drawings, all embodiments of the invention may include one or more of the advantageous features discussed herein. In other words, although one or more embodiments may be discussed as having certain advantageous features, one or more of such features may also be used in accordance with the various embodiments of the invention discussed herein. In a similar manner, although exemplary embodiments may be discussed below as device, system, or method embodiments, it should be understood that such exemplary embodiments may be implemented in various devices, systems, and methods.BRIEF DESCRIPTIONFIG. 
1 illustrates an example system architecture with multiple client units and server units.Figure 2 illustrates an example arrangement sequencer for enabling or disabling a client unit.Figure 3 illustrates an example system that includes an arrangement sequencer coupled to validation comparison logic.4 illustrates an example finite state machine (FSM) for arranging sequencers with M = 9 states.FIG. 5 illustrates an example of a simplified finite state machine (FSM) for arranging sequencers with M = 6 states, N client units, and P permutation list.FIG. 6 illustrates a first example of successful operation of the arrangement sequencer.7 illustrates a second example of successful operation of the arrangement sequencer.FIG. 8 illustrates a first example of the error detection operation of the arrangement sequencer.FIG. 9 illustrates a second example of the error detection operation of the arrangement sequencer.FIG. 10 illustrates an example flow chart for the operation of the alignment sequencer.detailed descriptionThe detailed description set forth below in conjunction with the drawings is intended as a description of various configurations, and is not intended to represent the only configurations that can practice the concepts described herein. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts can be practiced without these specific details. In some cases, well-known structures and components are shown in block diagram form in order to avoid obscuring this concept.In a system architecture that includes multiple units, there may be two general categories of units. The first unit category may be a server unit, and the second unit category may be a client unit. For example, a hierarchy within the system architecture can be established, where the server unit provides services or tasks to the client unit. One type of service may be enabling or disabling services, that is, services for enabling or disabling client units.FIG. 1 illustrates an example system architecture 100 having multiple client units 180 and server units 110. As shown in the example of FIG. 1, there are N number of client units. Although one server unit is shown in the example of FIG. 1, in other examples, there may be more than one server unit. In one example, each of the plurality of client units 180 may be enabled or disabled by the server unit 110 in chronological order (ie, sequentially). Between each client unit 180 and the server unit 110, there is an enable / disable (E / D) channel 112 and an acknowledgement channel 114. In one example, the server unit 110 enables or disables the client unit 180 by sending an enable command or a disable command to the client unit 180 via the enable / disable channel. In one example, after receiving the enable command or the disable command, the client unit 180 confirms by sending a confirmation to the server unit 110 via the confirmation channel 114.In one example, the arrangement sequencer may be a server unit or controller to sequentially enable or disable multiple client units. In the example shown in FIG. 1, the server unit 110 is an example of an arrangement sequencer. For example, each client unit can be powered on one at a time in a specific time sequence. 
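As a minimal illustration of that enable/acknowledge exchange over the enable/disable channel 112 and the acknowledgement channel 114, the sketch below models the client units as Python objects with hypothetical enable and disable methods that return a power-good style acknowledgement; the interface is illustrative only and is not defined by the disclosure.

```python
# Sketch only: a toy model of the server/client exchange of FIG. 1.
class ClientUnit:
    """Stands in for one client unit (e.g., a power supply)."""
    def __init__(self, name):
        self.name = name
        self.enabled = False

    def enable(self):
        self.enabled = True
        return True      # acknowledgement, e.g. a power-good signal

    def disable(self):
        self.enabled = False
        return True      # acknowledgement of the disable command

def power_on_sequence(clients):
    """Server-side loop: enable clients one at a time, in order,
    waiting for each acknowledgement before moving on."""
    for client in clients:
        ack = client.enable()        # command sent over the E/D channel
        if not ack:                  # acknowledgement channel reports a fault
            raise RuntimeError(f"{client.name} failed to acknowledge")

clients = [ClientUnit(f"supply{i}") for i in range(4)]
power_on_sequence(clients)
print([c.enabled for c in clients])  # [True, True, True, True]
```

The permutation sequencer described next replaces the hard-coded loop order with a one-hot list, so the activation order can be changed without modifying the loop itself.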
For example, the client unit may be a power supply in a system in which each power supply is turned on in a power-on sequence, that is, in the specific chronological order in which the client units are activated (i.e., in a permutation). In one example, a permutation means a specific order. Generally, the power-off sequence of the client units (e.g., power supplies) is the reverse of the power-on sequence. The power-on sequence may include the transmission of an enable signal from the permutation sequencer to each client unit and the receipt by the permutation sequencer of an acknowledgement (e.g., a power good signal) from each client unit. In one example, the enable signal may be a specific bit from the accumulation register in the permutation sequencer. In addition, the power-on sequence maintains state; maintaining state means that when the other client units are sequentially activated, the previously activated client units remain activated. According to the present disclosure, the permutation sequencer can provide a general and flexible architecture that allows simple permutation of client unit activations using simple bit-level processing. The key elements of the permutation sequencer may include one or more of the following: a one-hot list, exclusive OR (XOR) logic, and/or a finite state machine (FSM) with an accumulation register. The FSM is a sequential logic function with a limited number of states, which sequentially transitions to an updated state based on the current state and the current input. The permutation sequencer can use built-in acknowledgement and accumulation to implement the permutation for the ON sequence of the client units. FIG. 2 illustrates an example permutation sequencer 200 for enabling or disabling the client unit 280. The client unit 280 is not part of the permutation sequencer 200 and is therefore shown with a dotted line. Although only one client unit 280 is shown, in some examples more than one client unit 280 is coupled to the permutation sequencer 200. As shown in the example of FIG. 2, the permutation sequencer 200 includes a read pointer 210, a one-hot list 220, XOR logic 230 and an accumulation register 240. In one example, the one-hot list 220 includes one or more one-hot list words 221. In one example, the read pointer 210 may be an address register used to address memory locations. In one example, the one-hot list 220 may be a memory or register for storing permutation words. In one example, a permutation word may have a word length of N bits. For example, the read pointer 210 may address the one-hot list 220 to generate the selected one-hot list word 221 at the output of the one-hot list 220. The output of the one-hot list may be referred to as the one-hot list output. In one example, the one-hot list 220 may use one-hot bit encoding to encode the permutation options. One-hot bit encoding is a form of state encoding in which the word length N is the same as the number of states. In one example, N is also the number of client units 280. The one-hot list 220 may be a register array with multiple N-bit words, where a single bit of each N-bit word is set to a logic level HIGH ("1") and the remaining bits are set to a logic level LOW ("0"), hence the term "one-hot". The one-hot bit encoding can be used to uniquely identify each client unit 280 in the one-hot list 220, where each register word has only one HIGH bit, as in the short sketch below. 
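A one-hot list for N client units can be built in a few lines; the helper below is hypothetical, but it produces exactly the kind of register array described above, with one HIGH bit per word and the words ordered according to the desired permutation.

```python
# Sketch only: build a one-hot list for N client units.
def one_hot_list(permutation):
    """permutation is the activation order, e.g. [0, 1, 2, 3] or [0, 3, 2, 1].
    Returns one word per step, each with exactly one HIGH bit."""
    return [1 << client_id for client_id in permutation]

words = one_hot_list([0, 1, 2, 3])
print([format(w, "04b") for w in words])   # ['0001', '0010', '0100', '1000']

# No two words share a HIGH bit: every pairwise AND is zero.
assert all(words[i] & words[j] == 0
           for i in range(len(words)) for j in range(len(words)) if i != j)
```

Because no two such words share a HIGH bit, XOR-accumulating them sets exactly one additional enable bit per step, which is the orthogonality property the description turns to next.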
That is, each register word is orthogonal to other register words. The HIGH bit may also indicate confirmation and / or masking for power-on verification. The one-hot list can be created by writing commands to the memory space, writing commands to the registers, initialization values in the registers, or initialization constants at the time of construction. In one example, if the number of one or more client units 280 is N number, the one-hot list 220 includes N number of words, where each of the N number of words has a word length equal to N bits.In another example, the one-hot list 220 is implemented by a one-hot encoding list coupled to the shift register decoder. In the example where the number of client units is N, the list of one-hot encoding includes N number of code words, where each code word has less than N bits per code word. In the one-hot encoding list, binary encoding is used to encode each coded word to reduce the number of bits in each coded word. Although binary coding is disclosed, those skilled in the art will understand that other forms of coding (such as, but not limited to, ternary coding, complementary binary coding, complementary ternary coding, etc.) may be used within the scope and spirit of the present disclosure ). In one example, the shift register decoder decodes the coded words of the one-hot encoding list to generate a list of words that match the words of the one-hot encoding list 220 without encoding.The XOR logic 230 may be followed by the one-hot list 220 by two inputs, a first XOR input 231 and a second XOR input 232. For example, the first XOR input 231 and the second XOR input 232 each have a word length of N bits. In one example, the first XOR input is connected to the selected unique hot list word 221 at the output of the unique hot list 220. The output of the XOR logic 230 is a logical XOR combination of the first XOR input 231 and the second XOR input 232. In one example, if the first XOR input 231 and the second XOR input 232 are set to different logic states (ie, one XOR input is HIGH and the other XOR input is low), then the two XOR inputs 231, 232 The logical XOR combination produces a logical HIGH output. Furthermore, if both the first XOR input 231 and the second XOR input 232 are set to the same logic state (ie, both XOR inputs are HIGH or both XOR inputs are LOW), then the two XOR inputs 231 , 232 logical XOR combination produces a logic LOW output. For example, the output of XOR logic 230 has a word length of N bits. In one example, XOR logic 230 implements a logical XOR combination of two N-bit XOR inputs to produce an N-bit XOR output.In one example, the output of XOR logic 230 is connected to the input of accumulation register 240. The accumulation register 240 stores the current state of the output of the XOR logic 230 as the current accumulation register value. For example, the output of the accumulation register 241 is connected to the second XOR input 232 of the XOR logic 230. In one example, the output of the accumulation register 241 is called the accumulation register output. In one example, the accumulation register 240 is part of a finite state machine (FSM). In one example, the accumulation register 240 updates the current accumulation register value to produce an updated accumulation register value. The accumulation register 240 may operate as a higher-level manager of the arrangement sequencer 200. 
For example, the accumulation register 240 may operate independently of the specific content of the one-hot list. The operation of the permutation sequencer 200 may be represented by a repeating sequence of register value transitions. In one example, a register value transition is a logical progression from one value to another value. For example, the current accumulation register value can be converted to an updated accumulation register value. The accumulation register 240 can realize a recursive relationship between the current accumulation register value and the updated accumulation register value, which may be expressed mathematically as:
r(k+1) = XOR{r(k), p(k)}
where
k = permutation index,
r(k) = current accumulation register value at the current permutation index k,
r(k+1) = updated accumulation register value at the updated permutation index k+1, and
p(k) = current state of the selected one-hot list word at the current permutation index k.
In one example, XOR logic 230 implements the permutation step by step and automatically reverses the permutation. Note that, in one example, XOR logic is the only logic operation needed to implement automatic permutation inversion. In addition, XOR logic 230 maintains an enable state for all client units 280. That is, the N-bit enable state is incrementally updated with one bit transition per event. The permutation is realized solely by the one-hot list 220 and the XOR logic 230, without hardware modification. The current accumulation register value is used to execute the power on/off sequence. In addition, an acknowledgement or confirmation of the actual sequence state can be performed by the accumulation register 240 using a logical comparison of the accumulation register 240 with the power good register. In another embodiment, the confirmation may be optional. In one example, the actual sequence state is the enabled or disabled state of one or more client units. FIG. 3 illustrates an example system 300 that includes the permutation sequencer 200 coupled to confirmation comparison logic 310. In one example, the confirmation comparison logic 310 includes a power good register 350 and a logic module 360. Although only one logic module is shown in FIG. 3, more than one logic module can be implemented for a specific design and/or for specific applications as needed. In one example, the permutation sequencer 200 may be implemented with the confirmation comparison logic 310 incorporated as shown in FIG. 3. In addition to the read pointer 210, the one-hot list 220, the XOR logic 230, and the accumulation register 240 that have been described with reference to FIG. 2, a power good register 350 may also be included in the system 300. The power good register 350 can be used to implement an N-bit interface to the accumulation register 240, so that the implementation details of other logic modules can be abstracted from the system 300. In one example, the confirmation of each client unit can be reduced to a single-bit representation. An example logic module 360 is shown schematically in FIG. 3. The logic module 360 in FIG. 3 may include combinational logic, such as but not limited to one or more of the following: an inverter, an AND gate, an OR gate, a NAND gate, an XOR gate, and the like. Those skilled in the art will understand that the example components of the logic module 360 shown in FIG. 3 may be replaced by other components and still be within the scope and spirit of the present disclosure. 
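Whatever gates are chosen, the role of the power good register can be sketched in a few lines. The helper names below are hypothetical; the idea, reducing each client unit's acknowledgement to one bit and comparing the resulting word with the accumulation register, follows the description above.

```python
# Sketch only: per-client acknowledgements reduced to single bits and
# compared against the expected enable state held in the accumulation register.
def pack_power_good(acks):
    """acks[i] is True when client unit i reports power-good.
    Returns the N-bit word presented to the comparison logic."""
    word = 0
    for i, ok in enumerate(acks):
        if ok:
            word |= 1 << i
    return word

def verify(accumulation_register, power_good_register):
    """True when the actual sequence state matches the expected state."""
    return accumulation_register == power_good_register

expected = 0b0111                           # three clients expected to be enabled
actual = pack_power_good([True, True, True, False])
print(verify(expected, actual))             # True: sequencing may proceed
print(verify(expected, actual | 0b1000))    # False: a mismatch triggers the fault path
```

In hardware the comparison would be performed by the confirmation comparison logic rather than in software; the sketch only illustrates the data flow.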
The specific components of the logic module 360 are governed by specific designs or specific application requirements. In one example, the logic module 360 generates a sequence of bits to input to the power good register 350, where the sequence of bits represents the actual sequence state of one or more client units 280. In one example, the contents of the accumulation register 240 and the power good register 350 may be compared to provide (ie, generate) confirmation or confirmation of the actual sequence status.In one example, if explicit confirmation is not required, the power-good register 350 can be replaced with an “analog register” without requiring changes to the accumulation register 240. That is, in one example, the FSM can operate the same with or without analog registers. In one example, the confirmation comparison logic 310 includes an analog register (not shown) without the logic module 360. In one example, the analog register is a register having the following content, which reflects the value of the accumulation register, that is, the desired state. When an analog register is provided, the comparison between the accumulation register 240 and the analog register will always match. In the example of implementing the analog register, since the comparison between the accumulation register 240 and the analog register always matches, the verification always succeeds. In one example, the FSM and the one-hot list 220 (which includes arrangement words) are modular.FIG. 4 illustrates an example finite state machine (FSM) 400 for arranging sequencers with M = 9 states. In one example, the FSM remains in M = 9 states, independent of the value N of the client unit in the one-hot list.FIG. 5 illustrates an example of a simplified finite state machine (FSM) 500 for arranging sequencers with M = 6 states, N client units, and P permutation lists. In one example, the first state is the initial state INIT, which proceeds to the second state after reset, which is the OFF state. Next, the third state is the PON state after receiving the ON command. In one example, as long as the accumulation register is consistent with the power good register, and until the end of the permutation list is reached, the FSM cycles through N client units while remaining in the PON state. After the FSM cycle is completed, if the accumulation register is consistent with the power good register, it proceeds to the fourth state, which is the ON state. In one example, the ON state represents the state where all client units are enabled. Next, after receiving the OFF command, proceed to the fifth state which may be the POFF state. In one example, as long as the accumulation register is consistent with the power good register, and until the end of the permutation list is reached, the FSM cycles through the N client units while remaining in the POFF state. After the FSM cycle is completed, if the accumulation register is consistent with the power good register, it proceeds to the second state in the OFF state. In one example, the OFF state indicates a state where all client units are disabled.In one example, when in the third state (eg, PON state), after the FSM cycle is completed, if the accumulation register and the power good register are not consistent, then proceed to the fifth state that is the POFF state. Next, when in the fifth state (for example, POFF state), if the accumulation register does not coincide with the power good register, then proceed to the sixth state. 
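The state flow of FIG. 5 described above can be read as a small table-driven state machine. The sketch below is a simplification under stated assumptions: the state names follow the figure (the sixth, fast-off state is detailed in the passage that follows), and the transition guards are reduced to two booleans, whether the power good register matches the accumulation register and whether the end of the permutation list has been reached.

```python
# Sketch only: a simplified transition function for the FSM of FIG. 5.
# Guards: pg_ok  = accumulation register matches the power good register
#         at_end = read pointer has reached the end of the one-hot list
def next_state(state, command=None, pg_ok=True, at_end=False):
    if state == "INIT":
        return "OFF"                       # leave INIT after reset
    if state == "OFF" and command == "ON":
        return "PON"
    if state == "PON":
        if not pg_ok:
            return "POFF"                  # mismatch: unwind what was enabled
        return "ON" if at_end else "PON"   # keep cycling through the client units
    if state == "ON" and command == "OFF":
        return "POFF"
    if state == "POFF":
        if not pg_ok:
            return "OFF_FAST"              # forced, fast shutdown
        return "OFF" if at_end else "POFF"
    if state == "OFF_FAST":
        return "OFF"
    return state

# A successful power-on cycle for four client units ends in the ON state:
state = "INIT"
for step in ["reset", "ON", "k0", "k1", "k2", "k3(end)"]:
    state = next_state(state, command="ON" if step == "ON" else None,
                       pg_ok=True, at_end=step.endswith("(end)"))
print(state)   # 'ON'
```

Keeping the FSM this small is what allows it to stay at a fixed number of states regardless of how many client units the one-hot list describes, which is the point made with FIG. 4.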
In one example, the sixth state is the OFF FAST state, which enables a forced close command. In one example, after receiving the forced close command, it proceeds to the second state which is the OFF state.In one example, PON is the "power-on" state and POFF is the "power-off" state, where these states occur while the FSM is reading values from the one-hot list. In addition, ON is the "on" state, and OFF is the "off" state, where these states occur while the FSM is not reading values from the one-hot list. In one example, PON is the abbreviation of "Power On", and POFF is the abbreviation of "Power Off", which is the server status, where FSM is reading out the value and updating from the list XOR accumulates registers and waits for PG to propagate back to test for equality. For all other states, FSM does not read the contents of the one-hot list, and the XOR accumulation register does not change the value. For example, PON starts with the read pointer at 0 and increments the read pointer until it reaches the last value or there is a fault; while POFF starts with the read pointer wherever PON leaves and decrements the read pointer until it reaches 0 or Until there is a fault. In the successful sequence, PON reads the arrangement from 0 to N, and POFF reads the arrangement from N to 0. In an unsuccessful arrangement, PON reads from 0 to N-x, where N-x fails, and POFF reads from N-x to 0, where x is an integer <N. In the unsuccessful fast-off arrangement, PON reads from 0 to N-x, and POFF reads from N-x, and eventually jumps to 0.FIG. 6 illustrates a first example 600 of successful operation of the arrangement sequencer. In the example 600, the table 610 is a unique hot list with four words, and each word has 4 bits in the first arrangement for four client units. Table 620 shows the content of the first enable register. In one example, the first enable register is an accumulation register 240 (shown in FIG. 2). The accumulation register 240 may be part of a finite state machine (FSM).In one example, the first enable register exhibits a power-on sequence (PON) that varies with time. For example, the first entry of the first enable register (shown in table 620) at time t1 (represented in line 621) consists of the first word 611 (shown in table 610) of the unique hot list The XOR logic of the initial entry (eg, XOR logic 230) is determined to produce a first state value of "0001".The second entry (represented in line 622) of the first enable register (shown in table 620) at time t2 consists of the second word 612 (shown in table 610) of the unique hot list and the first entry The XOR logic of 621 (eg, XOR logic 230) is determined to produce a second state value of "0011". The third entry (represented in line 623) of the first enable register (shown in table 620) at time t3 consists of the third word 613 (shown in table 610) of the one-hot list and the second entry The XOR logic (represented in line 622) (eg, XOR logic 230) is determined to produce a third state value of "0111". The fourth entry 624 of the first enable register (shown in table 620) at time t4 is composed of the fourth word 614 of the unique hot list (shown in table 610) and the third entry (represented in line 623) ) 'S XOR logic (eg, XOR logic 230) is determined to produce a fourth state value of "1111". The power-on sequence continues in this manner until a fifth state value of "1111" of the fifth entry of the first enable register (represented in line 625) is generated at time t5. 
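The power-on progression just tabulated follows directly from the recursion r(k+1) = XOR{r(k), p(k)} given earlier, and the power-off sequence falls out of the same XOR by replaying the list in reverse. The few lines below reproduce the values of tables 610, 620 and 630; the helper names are illustrative, not from the disclosure.

```python
# Sketch only: reproduce the accumulation register values of example 600.
one_hot = [0b0001, 0b0010, 0b0100, 0b1000]   # table 610, first permutation

r = 0
pon = []
for p in one_hot:                  # power-on: r(k+1) = XOR{r(k), p(k)}
    new_r = r ^ p
    assert bin(r ^ new_r).count("1") == 1    # every valid step flips exactly one bit
    r = new_r
    pon.append(format(r, "04b"))
print(pon)    # ['0001', '0011', '0111', '1111']  -- the t1-t4 entries of table 620

poff = []
for p in reversed(one_hot):        # power-off: the same XOR, list read in reverse
    r ^= p
    poff.append(format(r, "04b"))
print(poff)   # ['0111', '0011', '0001', '0000']  -- the t7-t10 entries of table 630
```

The single-bit-change assertion is also what makes the error cases easy to detect: a transition that flips no bits, or more than one, cannot come from a well-formed one-hot word, so the sequencer can fall back to the power-off path, as examples 800 and 900 later illustrate.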
The fifth state value is the same as the fourth state value. In one example, the fifth state indicates that all four client units are enabled or powered on.Table 630 shows the content of the second enable register. In one example, the second enable register is an accumulation register 240 (as shown in FIG. 2). The accumulation register 240 may be part of a finite state machine (FSM). In one example, the first enable register and the second enable register are the same accumulation register 240. In one example, the second enable register exhibits a power-off sequence (POFF) that varies with time. For example, the sixth entry (represented in line 636) of the second enable register at time t6 has a fifth state value of "1111". The seventh entry of the second enable register at time t7 (represented in line 637) is composed of the XOR logic of the fourth word 614 of the unique hot list shown in table 610 and the sixth entry (represented in line 636) (For example, XOR logic 230) determine to produce a seventh state value of "0111".The eighth entry (represented in line 638) of the second enable register at time t8 is composed of the third word 613 of the unique hot list shown in Table 610 and the XOR logic of the seventh entry (represented in line 637) ( For example, XOR logic 230) determines to produce an eighth state value of "0011". The ninth entry of the second enable register at time t9 (represented in line 639) is the XOR logic of the second word 612 of the unique hot list shown in table 610 and the eighth entry (represented in line 638) (For example, XOR logic 230) determine to produce a ninth state value of "0001". The power-off sequence continues in this manner until the tenth state value of "0000" of the tenth entry 640 of the second enable register is generated at time t10. In one example, the tenth state indicates that all four client units are disabled or powered off.FIG. 7 illustrates a second example 700 of successful operation of the arrangement sequencer. In the example 700, the table 710 is a unique hot list with four words, and each word has 4 bits in the second arrangement for four client units. In one example, by exchanging the client ID2 with the client ID4, the second arrangement is different from the first arrangement illustrated in FIG. 6. In one example, only the one-hot list is modified, and all other elements are unchanged from FIG. 6. The content of the first enable register is shown in table 720. In one example, the first enable register is an accumulation register 240 (shown in FIG. 2). The accumulation register 240 may be part of a finite state machine (FSM). The first enable register exhibits a power-on sequence (PON) that varies with time. For example, the first entry (represented in line 721) of the first enable register at time t1 consists of the XOR logic of the first entry 711 (shown in table 710) of the one-hot list and the initial entry of zero (eg, XOR logic 230) is determined to produce a first state value of "0001".The second entry (represented in line 722) of the first enable register at time t2 consists of the second word 712 (shown in table 710) of the unique hot list and the first entry 721 (represented in line 721) XOR logic (eg, XOR logic 230) is determined to produce a second state value of "1001". 
The third entry (represented in line 723) of the first enable register at time t3 consists of the third word 713 (shown in table 710) of the unique hot list and the second entry (represented in line 722) XOR logic (eg, XOR logic 230) determines to produce a third state value of "1101". The fourth entry (represented in line 724) of the first enable register at time t4 is composed of the fourth word 714 (shown in table 710) of the unique hot list and the third entry (represented in line 723) XOR logic (eg, XOR logic 230) determines to produce a fourth state value of "1111". The power-on sequence continues in this manner until a fifth state value of "1111" of the fifth entry 725 of the first enable register is generated at time t5. The fifth state value is the same as the fourth state value. In one example, the fifth state indicates that all four client units are enabled or powered on.The content of the second enable register is shown in table 730. In one example, the second enable register is an accumulation register 240 (as shown in FIGS. 2 and 3). The accumulation register 240 may be part of a finite state machine (FSM). In one example, the first enable register and the second enable register are the same accumulation register 240. In one example, the second enable register exhibits a power-off sequence (POFF) that varies with time. For example, the sixth entry (represented in line 736) of the second enable register at time t6 has a fifth state value of "1111".The seventh entry (represented in line 737) of the second enable register at time t7 is the XOR of the fourth word 714 (shown in table 710) of the unique hot list and the sixth entry (represented in line 736) Logic (eg, XOR logic 230) determines to produce a seventh state value of "1101". The eighth entry (represented in line 738) of the second enable register at time t8 consists of the XOR of the third word 713 (shown in table 710) of the one-hot list and the seventh entry (represented in line 737) Logic (eg, XOR logic 230) determines to produce an eighth state value that is "1001". The ninth entry (represented in line 739) of the second enable register at time t9 consists of the XOR of the second word 712 (shown in table 710) of the unique hot list and the eighth entry (represented in line 738) Logic (eg, XOR logic 230) determines to produce a ninth state value of "0001". The power-off sequence continues in this manner until the tenth state value of "0000" of the tenth entry of the second enable register (represented in line 740) is generated at time t10. In one example, the tenth state indicates that all four client units are disabled or powered off.FIG. 8 illustrates a first example 800 of the error detection operation of the arrangement sequencer. In the example 800, the table 810 is a unique hot list with four words, and each word has 4 bits in the third arrangement for four client units. Table 820 shows the contents of the first enable register. In one example, the first enable register is the accumulation register 240 (shown in FIG. 2). The accumulation register 240 may be part of a finite state machine (FSM). In one example, the first enable register exhibits a power-on sequence (PON) that varies with time.In one example, the first entry of the first enable register at time t1 (represented in line 821) consists of the first word 811 of the one-hot list (shown in table 810) and the XOR logic of the initial entry of zero ( For example, XOR logic 230) determines to produce a first state value of "0001". 
The second entry (represented in line 822) of the first enable register at time t2 is the XOR of the second word 812 (shown in table 810) of the one-hot list and the first entry (represented in line 821) Logic (eg, XOR logic 230) determines to produce a second state value of "1001".However, in Example 800, the third entry (represented in line 823) of the first enable register at time t3 should be represented by the third word 813 (shown in Table 810) of the one-hot list with the value "0100" Determine with the XOR logic (for example, XOR logic 230) of the second entry (represented in line 822) with the value "1001" to produce the correct third state value of "1101". Instead, due to the error condition, an erroneous third state value of "1001" is generated (shown in line 823). In one example, the fourth entry (represented in line 824) of the first enable register at time t4 has an erroneous fourth state value equal to the erroneous third state value of "1001" (on line 824) Shown). Since the state transition of the first enable register should contain and contain only one bit change, the arrangement sequencer can easily detect an error condition and transition to a power-off sequence.Table 830 shows the content of the second enable register. In one example, the second enable register is an accumulation register 240 (shown in FIGS. 2 and 3). The accumulation register 240 may be part of a finite state machine (FSM). In one example, the first enable register and the second enable register are the same accumulation register 240.The second enable register 830 exhibits a power-off sequence (POFF) that varies with time, the power-off sequence (POFF) starting from the fifth entry (represented in line 835) of the second enable register, the fifth entry has The value equal to the wrong fourth state value is "1001". In one example, the power-off sequence proceeds sequentially until the eighth state value of "0000" of the eighth entry (shown in line 838) of the second enable register is generated at time t8. In one example, the eighth state indicates that all four client units are disabled or powered off.FIG. 9 illustrates a second example 900 of the error detection operation of the arrangement sequencer. In the example 900, the table 910 is a unique hot list with four words, and each word has 4 bits in the third arrangement for four client units. The content of the first enable register is shown in Table 920. In one example, the first enable register is the accumulation register 240 (shown in FIG. 2). The accumulation register 240 may be part of a finite state machine (FSM).In one example, the first enable register exhibits a power-on sequence (PON) that varies with time. For example, the first entry of the first enable register at time t1 (represented in line 921) consists of the first word 911 (shown in table 910) of the solo list and the XOR logic of the initial entry of zero (eg, XOR logic 230) is determined to produce a first state value of "0001".The second entry (represented in line 922) of the first enable register at time t2 is the XOR of the second word 912 (shown in table 910) of the one-hot list and the first entry (represented in line 921) Logic (eg, XOR logic 230) determines to produce a second state value of "1001". 
However, in this example, the third entry (shown in line 923) of the first enable register at time t3 should be represented by the third word 913 (shown in table 910) of the one-hot list with the value "0100" Out) is determined with the XOR logic (eg, XOR logic 230) of the second entry (represented in line 922) with the value "1001" to produce the correct third state value of "1101". Instead, due to the error condition, an erroneous third state value of "1100" is generated (shown in line 923).In example 900, the fourth entry (shown in line 924) of the first enable register at time t4 has an incorrect fourth state value that is equal to the incorrect first Four state values. Since the state transition of the first enable register should contain and contain only one bit change, the arrangement sequencer can easily detect an error condition and transition to a power-off sequence.The second enable register 930 exhibits a power-off sequence (POFF) that changes according to time, the power-off sequence (POFF) starts from the fifth entry 935 of the second enable register, the fifth entry 935 has a value of "1101" The expected value and the actual value of "1100" equal to the incorrect fourth state value. In one example, the XOR operation in the accumulation register produced a value of "1001", but in the case of a value of "1100", the client unit responded incorrectly. Due to this failed comparison, the FSM may time out and perform a fast forced shutdown. In one example, the power-off sequence proceeds quickly until a seventh state value of "0000" of the seventh entry (shown in line 937) of the second enable register is generated at time t7. In one example, the seventh state indicates that all four client units are disabled or powered off. In one example, fast progress means jumping to all zero values immediately after reaching the timeout.FIG. 10 illustrates an example flow chart 1000 for the operation of the arrangement sequencer. In block 1010, a unique hot list is created. In one example, the words in the one-hot list correspond to the permutation sequence. The arrangement sequence is the chronological order in which the client units are activated, that is, the chronological order in which the client units are powered on. In one example, the permutation sequencer may have a timeout feature, where a limited amount of time may be allocated for the propagation time to the client unit and the confirmation time from the client unit. In one example, the client unit may be a power source. Those skilled in the art will understand that the client unit may include any device or any component of the device that can be powered on or off. In one example, the one-hot list uses one-hot bit coding for state coding. For example, the one-hot list has a word length equal to the number N of client units. That is, the word length is N, and the number of client units is also N. Wherein, if the number of one or more client units 280 is N number, the one-hot list 220 includes N number of words, wherein each word of the N number of words has a word length equal to N bits. The unique hot bit coding can be used to uniquely identify each client unit in the unique hot list, where each arrangement word has only one HIGH bit. In one example, the one-hot list is created by a processor, where the processor may be coupled to a memory for storing information related to the one-hot list. The processor may be programmable.In block 1020, the current word of the unique hot list is selected and output as the unique hot list. 
In one example, the one-hot list output has N bits. This selection may be performed by a read pointer (eg, read pointer 210 shown in FIGS. 2 and 3).In block 1030, the one-hot list output is compared with the current accumulation register value of the accumulation register to produce a first logical comparison. In one example, the first logical comparison is performed using XOR logic (eg, XOR logic 230 shown in FIGS. 2 and 3). In one example, the first logical comparison compares N bits corresponding to the number of client units. In one example, the processor is used to compare the one-hot list output with the current accumulation register value to produce a first logical comparison. The processor may or may not be the same processor that created the one-hot list in block 1010.In block 1040, a first logical comparison is input to an accumulation register (eg, accumulation register 240 shown in FIGS. 2 and 3) to generate an updated accumulation register value. In one example, the updated accumulation register has N bits corresponding to the N number of client units. In one example, the updated accumulation register state is generated by XOR logic (eg, XOR logic 230) outputting the current accumulation register value and the one-hot list.In block 1050, the updated accumulation register state is output to the client unit to enable the client unit. In one example, the client unit is a power source. In one example, the accumulation register outputs the updated accumulation register state to the client unit. In one example, enabling may depend on confirmation. This confirmation may be based on a second logical comparison between the contents of the accumulation register (eg, accumulation register 240) and the contents of the power good register shown in FIG. In the first example, the power good register 350 stores the actual client unit enable state. In the second example, the power good register 350 stores an abstract representation of the actual client unit enabled state.In one example, the actual client unit activation state is a list indicating whether one or more client units are enabled or disabled. In one example, the abstract representation of the actual client unit activation state is a generalization of the list, which indicates whether one or more client units are enabled or disabled. In the second example, the power good register 350 presents an abstract representation of the actual client unit enabled state to the accumulation register 240 without being exposed to client unit interface details. In the second example, the power good register 350 is an emulation register that stores the emulated client unit enable state.In one example, the permutation sequence is the chronological order in which the client unit is disabled (ie, the client unit is powered off). The chronological order in which the client unit is disabled is the reverse order of the chronological order in which the client unit is enabled. In one example, the time sequence in which the client units are disabled follows the same sequence as described in FIG. 10 except block 1050, which can be modified to disable (rather than enable) the client unit based on the updated accumulation register state . Therefore, to disable, the step in block 1050 may be to output the updated accumulation register state to the client unit to disable the client unit.In one aspect, one or more of the steps in FIG. 
In one aspect, one or more of the steps in FIG. 10 for providing a permutation sequencer may be performed by one or more processors, which may include hardware, software, firmware, and so on. For example, one or more processors may be used to execute the software or firmware needed to perform the steps in the flowchart of FIG. 10. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The software may reside on a computer-readable medium. The computer-readable medium may be a non-transitory computer-readable medium. By way of example, non-transitory computer-readable media include magnetic storage devices (e.g., hard disks, floppy disks, magnetic strips), optical disks (e.g., compact discs (CD) or digital versatile discs (DVD)), smart cards, flash memory devices (e.g., a card, a stick, or a key drive), random access memory (RAM), read only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), registers, a removable disk, and any other suitable medium for storing software and/or instructions that may be accessed and read by a computer. By way of example, a computer-readable medium may also include carrier waves, transmission lines, and any other suitable medium for transmitting software and/or instructions that may be accessed and read by a computer. The computer-readable medium may reside in a processing system, external to the processing system, or distributed across multiple entities including the processing system. The computer-readable medium may be embodied in a computer program product. By way of example, a computer program product may include a computer-readable medium in packaging materials. The computer-readable medium may include software or firmware for the permutation sequencer. Those skilled in the art will recognize how best to implement the described functionality presented throughout this disclosure depending on the particular application and the overall design constraints imposed on the overall system. Any circuitry included in the processor(s) is provided merely as an example, and other means for carrying out the described functions may be included within various aspects of the present disclosure, including but not limited to instructions stored on a computer-readable medium, or any other suitable apparatus or means described herein, utilizing, for example, the processes and/or algorithms described herein in relation to the example flowcharts. Within this disclosure, the word "exemplary" is used to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the present disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage, or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two objects.
For example, if object A physically contacts object B, and object B contacts object C, then objects A and C may still be considered coupled to one another, even if they do not directly physically touch each other. For example, the first die may be coupled to the second die in a package even though the first die is never in direct physical contact with the second die. The terms "circuit" and "circuitry" are used broadly and are intended to include both hardware implementations of electrical devices and conductors that, when connected and configured, enable the performance of the functions described in this disclosure, without limitation as to the type of electronic circuits, as well as software implementations of information and instructions that, when executed by a processor, enable the performance of the functions described in this disclosure. One or more of the components, steps, features, and/or functions illustrated in the figures may be rearranged and/or combined into a single component, step, feature, or function, or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from the novel features disclosed herein. The apparatus, devices, and/or components illustrated in the figures may be configured to perform one or more of the methods, features, or steps described herein. The novel algorithms described herein may also be efficiently implemented in software and/or embedded in hardware. It is to be understood that the specific order or hierarchy of steps in the methods disclosed is an illustration of exemplary processes. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the methods may be rearranged. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented unless specifically recited therein. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more. A phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of a, b, or c" is intended to cover: a; b; c; a and b; a and c; b and c; and a, b, and c. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. |
A biometric-based security circuit in which the user database, processor, and biometric map generation functions are all located on the same integrated circuit whose secure contents are inaccessible from external to the integrated circuit. Biometric data, such as a fingerprint, retina scan, or voiceprint, is taken from a user requesting access to restricted resources. The biometric data is transferred into the integrated circuit, where it is converted to a biometric map and compared with a database of biometric maps stored in a non-volatile memory in the integrated circuit. The stored maps represent pre-authorized users, and a match triggers the security circuit to send a signal to a host processor authorizing the host processor to permit the requesting user access to the restricted resources. The integrated circuit essentially serves as a write-only memory for the secure data, because the secure data and security functions in the integrated circuit are not directly accessible through any pin or port, and therefore cannot be read or monitored through a dedicated security attack. A second non-volatile memory, accessible from external to the integrated circuit, can also be provided in the integrated circuit for holding non-secure data. This second memory has its own interface port, and is isolated from the security-related functions and memory so that secure and non-secure functions are physically isolated from each other and cannot be modified to overcome that isolation. |
We claim: 1. An apparatus, comprising: an integrated circuit including: a first processor; a first interface coupled to the first processor to communicate with a second processor external to the integrated circuit; a first non-volatile memory decoupled from the first interface and coupled to the first processor to store first biometric data identifying at least one authorized user, and having contents that are unreadable external to the integrated circuit; and a second interface coupled to the first processor to input second biometric data from a biometric reader.2. The apparatus of claim 1, wherein the integrated circuit further includes a second non-volatile memory coupled to a third interface and decoupled from the first processor, first interface, second interface, and first non-volatile memory, and having contents that are accessible external to the apparatus through the third interface 3. The apparatus of claim 1, wherein the first non-volatile memory is a flash memory.4. The apparatus of claim 1, wherein the second non-volatile memory is a flash memory. 5. The apparatus of claim 1, wherein the biometric reader is a fingerprint reader.6. The apparatus of claim 1, wherein: the first biometric data includes a first biometric map; and the integrated circuit contains code to cause the first processor to convert the second biometric data to a second biometric map.7. The apparatus of claim 6, wherein the integrated circuit contains code to cause the first processor to perform a comparison between the second biometric map and the first biometric map.8. The apparatus of claim 7, wherein: the integrated circuit contains code to cause the first processor to send a verification signal through the first interface if a match is found in the comparison; and the integrated circuit contains code to cause the first processor to send a non verification signal through the first interface if a match is not found in the comparison.9. The apparatus of claim 1, wherein the integrated circuit contains code to cause the first processor to authenticate a program downloaded into the integrated circuit. 10. A system, comprising: a host processor; a biometric reader; an integrated circuit coupled to the biometric reader and host processor and including: a first processor; a first interface coupled to the first processor and the host processor; a first non-volatile memory decoupled from the first interface and coupled to the first processor to store first biometric data identifying at least one authorized user, and having contents that are unreadable external to the integrated circuit; and a second interface coupled to the first processor and the biometric reader to input second biometric data.11. The system of claim 10, wherein the integrated circuit further includes a second non-volatile memory coupled to the host processor through a third interface and decoupled from the first processor, first interface, second interface, and first non-volatile memory, and having contents that are accessible external to the apparatus through the third interface.12. The system of claim 10, wherein: the first biometric data includes a first biometric map; and the integrated circuit contains code to cause the first processor to convert the second biometric data to a second biometric map. 13. The system of claim 12, wherein the integrated circuit contains code to cause the first processor to perform a comparison between the second biometric map and the first biometric map.14. 
The system of claim 13, wherein: the integrated circuit contains code to cause the first processor to send a verification signal through the first interface if a match is found in the comparison; and the integrated circuit contains code to cause the first processor to send a non verification signal through the first interface if a match is not found in the comparison.15. The system of claim 10, wherein the integrated circuit contains code to cause the first processor to authenticate a program downloaded into the integrated circuit.16. A method, comprising: inputting a user's biometric data into an integrated circuit; reading a database of previously stored biometric data from a non-volatile memory in the integrated circuit, wherein contents of the non-volatile memory are non-readable external to the integrated circuit; comparing the user's biometric data with at least a portion of the database, using a processor disposed on the integrated circuit; sending a verification signal to an external device if comparing produces a match; and sending a non-verification signal to the external device if comparing does not produce a match.17. The method of claim 16, wherein: the stored biometric data includes a stored biometric map; and comparing includes converting the user's biometric data into a user's biometric map and comparing the user's biometric map with the stored biometric map.18. The method of claim 16, wherein the non-volatile memory is a flash memory.19. The method of claim 16, wherein sending a verification signal includes sending an indication of resources the user is authorized to access.20. A machine-readable medium having stored thereon instructions, which when executed by at least one processor cause said at least one processor to perform: inputting a user's biometric data into an integrated circuit; reading a database of previously stored biometric data from a non-volatile memory in the integrated circuit, wherein contents of the non-volatile memory are non-readable external to the integrated circuit; comparing the user's biometric data with at least a portion of the database, using a processor disposed on the integrated circuit; sending a verification signal to an external device if comparing produces a match; and sending a non-verification signal to the external device if comparing does not produce a match.21. The medium of claim 20, wherein: the stored biometric data includes a stored biometric map; and comparing includes converting the user's biometric data into a user's biometric map and comparing the user's biometric map with the stored biometric map.22. The medium of claim 20, wherein the non-volatile memory is a flash memory. |
BIOMETRIC-BASED AUTHENTICATION IN A NONVOLATILEMEMORY DEVICEBACKGROUND OF THE INVENTION 1. Field of the InventionThe invention pertains generally to security systems. In particular, it pertains to an improved security device based on biometric characteristics of the user.2. Description of the Related ArtImprovements in circuit miniaturization, radio technology, and battery power have led to widespread use of portable devices that access the resources of much larger distributed systems. An example is the use of cellular telephones, which allow subscribers to access the resources of national and global telephone systems with a device they can carry on their person. The typical cell phone allows access to these resources to anyone possessing the cell phone. With larger devices, such as desktop computers that are located in secure areas, basing security on possession is not an issue. But with small, portable devices that are easily lost or stolen, this level of security is inadequate. A conventional way to address this problem is through the use of passwords.However, password-based security is based entirely on protecting the password.Passwords can be illicitly obtained by unauthorized persons in various ways, such as by observing a person entering the password, electronic monitoring of password entry, or intercepting a new password as it is being delivered to the intended user. Since the user still has the password, the security breach may not be detected until some time after it has been improperly used by the unauthorized person. Another problem is that passwords are sometimes forgotten by the legitimate user, leading to frustration, inconvenience, and taking steps to avoid this problem in ways that may compromise the security of the password. Another approach is the subscriber interface module (SIM), which combines a password with an artifact such as a machine-readable plastic card containing both secure data and processing capability. Since both the card and the password are necessary for access, this provides an improved level of security over a password-only approach, but it still suffers from many of the same problems. Problems with these conventional approaches are that passwords can be stolen or forgotten, while artifacts can be lost, stolen, copied, or forged. An improved approach to access control uses biometric data to identify a specific user without the need for passwords or artifacts. Biometric data is data that describes a unique physical characteristic of the user, and which is read directly from the user's person at the time access is requested. Some of the known biometric approaches identify users through fingerprints, retina scans, and voice prints. Each has its own strengths and weaknesses, but all are based on unique physical characteristics of the user that are difficult to duplicate and do not require the user to memorize anything. However, biometric-based security systems also have a weakness. If the biometric data can be obtained, the fingerprint, retina image, voice, etc. can be forged or duplicated and used illicitly to obtain access to the system. Fig. 1 shows a conventional biometric security system 1. A host system 11 contains a host processor 12, a memory 13, a reader interface 14 to a biometric reader 16, and a general purpose interface 18 to other parts of the system. Memory 13 can include various types of memory, such as random access memory (RAM), read-only memory (ROM), and flash memory. 
The flash memory is typically used to store valid biometric data on approved users, and can be updated as users are added, removed, or need to have their data modified. This biometric data might be in raw form, such as a digitized image of a fingerprint, but is more likely in a reduced form, representing a coded'map'of the image that defines the pertinent points of the image in a redefined digital format. At the time access is requested, biometric reader 16 takes the appropriate biometric inputs from the user. For example, reader 16 might be a fingerprint reader, a retina scanner, or a voice print identification device. Biometric reader 16 converts the raw biometric data into a digitized map and sends the map through reader interface 14 to host processor 12, which compares it with the reference map in flash memory. If there is a match, processor 12 will initiate access to the requested resources, typically through general purpose interface 18.This design has at least three major weaknesses. 1) The link between reader 16 and interface 14 can expose the biometric map to monitoring and copying. The illicitly copied map can later be presented to reader interface 14 directly, without the need to duplicate the actual biometric image or data, thereby tricking system 11 into believing it is reading valid data from an authorized user. 2) Host processor 12 typically handles non-secure functions, such as the operational functions of a cell phone. Host processor 12 is therefore subject to hacking and other invasive tampering. It can be falsely directed to provide secure user data through general purpose interface 18, or to store false user data in the flash memory. Either act can permit an unauthorized person to later use the system in the normal manner through reader 16.3) Flash memory (and therefore secure data) is accessible from outside system 11 through a common bus 15 tying together processor 12, memory 13 and interfaces 14,18. These weaknesses also expose the system to destructive tampering, whose goal is to disrupt normal operations rather than obtain unauthorized use of those operations. BRIEF DESCRIPTION OF THE DRAWINGSFig. 1 shows a device of the prior art. Fig. 2 shows a device of the invention. Fig. 3 shows a more detailed view of the device of Fig. 2. Fig. 4 shows a system of the invention. DETAILED DESCRIPTION OF THE INVENTIONThe invention provides a self-contained security circuit that maintains secure data in a memory that is inaccessible from outside the security circuit, but which can be used to verify data provided from outside the security circuit. Fig. 2 shows one embodiment of a system 2 of the invention. Host processor 20 can be a non-secure processor, such as the processor in a cell phone that controls overall cell phone operations. Secure circuit 21 is a single integrated circuit that provides a self-contained security environment within system 2, and which cannot be accessed externally without its permission. Any transfer of data into or out of circuit 21 can be controlled by circuit 21. Circuit 21 includes its own embedded processor 22, so called because it is embedded within the perimeters of secure circuit 21. Processor 22 can also control a host interface 28 to host processor 20, and a reader interface 24 to biometric reader 23. Embedded processor 22 can operate with memories 25,26 and 27 over internal bus 29. Program memory 26 can be programmable read-only memory (PROM) or other non-volatile memory that contains the instructions for operating processor 22. 
RAM 25 can be used as working space while the processor is in operation, but should not be used to store permanent data, since RAM 25 will lose its contents if device 2's battery becomes discharged or disconnected. Flash memory 27 can be used for data that will change periodically, but must survive a power loss. Flash memory 27 is where the user-specific data can be stored, such as reference biometric data for each user authorized to use the system. Although RAM 25, program memory 26, and flash memory 27 are shown as three separate types of memory, two or more of them can be consolidated into a single memory type. For example, flash memory can be used in place of RAM 25 and/or program memory 26. Although this disclosure uniformly describes the use of flash memory, other types of writeable non-volatile memory may also be used without departing from the scope of the invention. Main flash array 29 can provide a separate writeable non-volatile memory that can be used for non-secure data, and is accessible by host processor 20 through flash host interface 30. Although host interface 28 and flash host interface 30 are shown as sharing a common bus, they can also be implemented with completely separate connections. In one embodiment, main flash array 29 can be functionally separate from the security functions in integrated circuit 21. In another embodiment, embedded processor 22 may be able to enable all or part of main flash array 29 when a user is authenticated, and disable all or part of main flash array 29 under other conditions. Secure circuit 21 is a single integrated circuit that provides a secure boundary surrounding the security functions because the operation of those functions is not accessible from outside circuit 21, and the secure data contained therein cannot be read or written except under specific, limited conditions that it controls. However, for the system to be useful, some type of initial user information must be written into circuit 21. To provide a starting point for entering user information, in one embodiment relevant user data can be initially stored in flash memory 27 under controlled conditions, before device 2 has been placed into operation. For example, this initial setup can establish the biometric map and functionality for a system administrator, who would then be the only one who could subsequently authorize the entry of new user data. Alternately, the first user to input biometric information could automatically be established as the system administrator. Methods of entering initial user information in a security system are well known in the art. Once user data has been entered into the system, when a potential user tries to use the system by inputting his or her biometric data through reader interface 24, secure circuit 21 can simply give a verified/not verified indication (and possibly an indication of approved privileges) for that user to host 20 through interface 28. The stored reference data for the user is therefore not exposed, and cannot be read from circuit 21 by any device external to it. This has significant advantages over the prior art system of Fig. 1. In Fig. 1, some form of secret data, such as a fingerprint map, is stored in flash memory, which may be accessible to other devices through interface 18. In addition, host processor 12 is not secure, and can be tampered with. It can be directed to expose the secret data to external devices through interface 18, and can also be directed to store a forged user file in flash memory. 
If the control circuits of the flash memory are accessible over the shared bus, forged data can be written directly into the flash memory without the knowledge or participation of host processor 12. By comparison, in the system of Fig. 2, secure data is stored in hidden flash memory 27, which does not share a bus with any external interface and therefore cannot be read by any external device. In addition, embedded processor 22 can be devoted entirely to providing the security functions performed by security circuit 21. Embedded processor 22 can therefore be controlled by non-modifiable code, which is not susceptible to hacking or other tampering with the security functions. All non-secure functions can be performed by host processor 20, which has no access to any security functions or secure data in security circuit 21. Among its other functions, circuit 21 essentially provides a write-only storage device for security information. After the initial data is written into circuit 21 under controlled conditions, circuit 21 does not permit any of the security data to be read out by external devices, and does not permit further entry of security data except under the control of circuit 21. Since all of circuit 21 is contained in a single integrated circuit, there are no accessible pins or interface connections that would expose the secure data or enable it to be read or modified by an external device. This makes device 2 virtually impervious to security attacks. Not only is the secure data protected, but proper checks on input data can prevent destructive data from being entered into circuit 21. Fig. 3 shows a more detailed view of security circuit 21. Embedded processor 22 interfaces with hidden flash memory 27, program memory 26, RAM 25, random number generator (RNG) 38, multiplier/accumulator 39, algorithm accelerator 37, biometric accelerator 41, monotonic counter 40, and watchdog timer 36 over a common internal bus that is not accessible to external devices. The first three devices are the same as those shown in Fig. 2; the remainder are used to perform security-related functions and are described in more detail below. Also as shown in Fig. 2, processor 22 is coupled to reader interface 24 and host interface 28. Base clock 31 provides a clock source for circuit 21. One embodiment provides a 70 megahertz (MHz) clock to processor 22. Clock divide circuit 33 can divide the base clock down to a slower rate, to be used as a source clock for watchdog timer 36 and other functions, such as alarm logic 34. Clock detector 32 can determine if base clock 31 is active and within predetermined frequency limits, while undervoltage/overvoltage (UV/OV) detector 35 can monitor the voltage levels in circuit 21. Alarm logic 34 can receive various types of alarm signals from other parts of circuit 21 and provide a consolidated alarm indication to processor 22 and to other circuits. The functions of circuit 21 are described in more detail below. Processor: Embedded processor 22 can process commands and perform flash memory management. In one embodiment, processor 22 processes standard SIM commands so that existing legacy software can be used in the system. Processor 22 may also perform some of the cryptography-related processing, such as a hashing algorithm or a crypto algorithm. The processor can have enough performance to execute these algorithms in real time without impacting performance. Processor 22 can also incorporate a Memory Management Unit (MMU). 
The MMU is a highly desirable component in security designs. It can enforce separation of code from data, and can separate the data for one processing context from that of another processing context. This separation can be used to assure that no private data inadvertently becomes mixed with non-private data that is subsequently transmitted out of secure circuit 21. Host InterfaceHost interface 28 can provide an interface to host processor 20 of Fig. 2. This interface can be of various types, such as parallel or serial, high or low speed, etc. To preserve compatibility with existing host devices, host interface 28 can duplicate the interface currently used in existing host systems. In one embodiment, transfers between host processor 20 and embedded processor 22 can be performed one byte (or other unit of data) at a time with appropriate handshaking signals. In another embodiment, a first-in first-out buffer (FIFO) can be used in interface 28 to buffer multiple bytes, thus allowing either or both processors to operate efficiently in a burst mode. Host interface 28 can also include other signals, such as one or more pins to transfer alarm information from alarm logic 34, and to receive an external clock signal (not shown) into circuit 21. The operation of host interface 28 can be under the control of embedded processor 22, which may be able to enable or disable all or part of host interface 28 to control the flow of data and other signals being transferred to or from host processor 20.Program MemoryProgram memory 26 contains the instructions for performing the functions that processor 22 performs. To protect the security of the system, program memory 26 can be made non-modifiable while in the system. It can be permanent memory such as PROM, or semi-permanent such as EPROM or flash memory. Flash MemoryFlash memory 27 is used to store data that may change from time to time, but must survive a power loss. Flash memory is well suited for this purpose in portable devices, since it operates at voltages that are commonly available in portable devices. Flash memory can only be erased in blocks, so sufficient amounts of flash memory are used to assure that when data is changed, the entire block containing the change can be copied into a blank block. The old block is then erased to provide a blank block for the next change. Although uniformly described as flash memory in this disclosure, other types of non-volatile memory that are programmable in-circuit can also be used and are included within the scope of the invention. Main flash array 29 can be used for non-secure information, and can be accessible by host processor 20 through flash host interface 30. Although main flash array 29 and its interface 30 are functionally separated from the remainder of circuit 21, placing it on the same integrated circuit as hidden flash 27 can make efficient use of integrated circuit real estate, as well as reduce overall chip count and improve manufacturing efficiencies.Interface 30 may be the same type of interface as host interface 28, and may even connect to a common bus, as shown in Fig. 2. Interfaces 28 and 30 may also be of different types, and/or may have no common connections in the system.RAM MemoryRandom access memory 25 is used as workspace memory while the system is operating. Since the contents of RAM memory are lost when power is removed from theRAM circuits, the data placed in RAM should not include anything that cannot be lost, or that cannot be recovered upon resumption of power. 
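The block-oriented flash update described above can be sketched as follows; the block size and the flash_read/flash_program/flash_erase helpers are assumptions standing in for whatever low-level flash driver the device actually provides, not routines defined by the disclosure.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 4096u   /* illustrative erase-block size */

/* Placeholder low-level flash driver routines (assumed, not from the patent). */
extern void flash_read(uint32_t addr, uint8_t *buf, size_t len);
extern void flash_program(uint32_t addr, const uint8_t *buf, size_t len);
extern void flash_erase(uint32_t block_addr);

/* Copy the block containing the change into a blank block, applying the change
 * on the way, then erase the old block so it becomes the blank block for the
 * next update. */
void flash_update(uint32_t old_block, uint32_t blank_block,
                  uint32_t offset, const uint8_t *data, size_t len) {
    static uint8_t shadow[BLOCK_SIZE];

    flash_read(old_block, shadow, BLOCK_SIZE);       /* copy existing block to RAM       */
    memcpy(&shadow[offset], data, len);              /* apply the change                 */
    flash_program(blank_block, shadow, BLOCK_SIZE);  /* write into the blank block       */
    flash_erase(old_block);                          /* old block becomes the next blank */
}
```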
Random Number GeneratorEncryption may be used for communications between secure circuit 21 and other devices. Many types of encryption require the generation of truly random numbers. A hardware generator such as RNG 38 can provide greatly superior performance over software RNG's. Hardware RNG's are known in the art. Some standards require the randomness of the RNG results to be tested in-circuit. This can require approximately 2500 bits of RAM (or alternatively, flash) memory be devoted to the analysis function.Multiplier/AccumulatorTo perform encryption functions, multiplier/accumulator (M/A) 39 can support fast exponentiation and modulo reduction, and can be optimized for those functions. It need not be used for general purpose arithmetic operations, which can be performed in processor 22. Design of the M/A function is closely related to the design of the embedded processor. If processor 22 is a digital signal processor (DSP), then the M/A of the DSP can be used and a separate M/A 39 on the bus may not be necessary.Algorithm AcceleratorAlgorithm accelerator 37 is specific to the cryptographic algorithm being used.This dedicated hardware requires much less processing time to perform the algorithm than will a processor. Algorithm accelerator 37 is separate in function and implementation from M/A 39. The M/A can be used to accelerate multiplication and exponentiation operations that are used in asymmetrical algorithms such as public key encryption. The algorithm accelerator speeds up symmetrical algorithms that are frequently employed to provide message privacy. Both the need for, and the specific design of, M/A 39 and accelerator 37 will depend on the particular cryptographic algorithm (s) to be employed in the circuit. RNG 38, M/A 39, and algorithm accelerator 37 can also be used to authenticate and encrypt data traveling between circuit 21 and biometric reader 23 in either direction.Biometric AcceleratorBiometric accelerator 41 can be similar in function to algorithm accelerator 37, except its purpose is to accelerate processing of the biometric data. Conversion of raw biometric data into a biometric map may involve intensive, repetitive processing, which can best be performed by a hardware accelerator specifically designed for the particular processing required. Undervoltage/Overvoltage Detection Undervoltage/Overvoltage (UV/OV) detector 35 can protect the system from a class of cryptographic attacks based on varying the voltage inputs. These attacks drive the supply voltage outside the specified operating range for the device in an attempt to force the subject under attack to mis-operate so that plain text or keys are exposed. UV/OV 35 can detect these out-of-range voltage conditions and alert processor 22, which can take action to stop operating before the secret information can be exposed. This also protects the system against an uncontrolled crash in the event the power supplies degrade or fail.In one embodiment, comparators are used to monitor the input voltage against reference voltages. The reference voltages are set using precision resistors as a voltage divider to bias an op amp. ClockBase clock 31 can provide a clock source for circuit 21. In one embodiment, base clock 31 is an internal clock operating at 70 MHz. It can be fed directly to processor 22 as a processor clock. It can also be divided down to lower frequencies by clock divide circuit 33 to operate such things as watchdog timer 36 and alarm logic 34. 
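As a point of reference for the fast exponentiation and modulo reduction that M/A 39 is said to accelerate, the following is a plain square-and-multiply implementation in C; real public-key operands are multi-word integers, so the 64-bit types here are only to keep the sketch short, and this is a software reference rather than the hardware design.

```c
#include <stdint.h>

/* Modular multiplication; relies on a compiler-provided 128-bit type (GCC/Clang). */
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (uint64_t)((__uint128_t)a * b % m);
}

/* Square-and-multiply modular exponentiation: computes base^exp mod mod. */
uint64_t modexp(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1 % mod;
    base %= mod;
    while (exp) {
        if (exp & 1)
            result = mulmod(result, base, mod);  /* multiply step for set exponent bits */
        base = mulmod(base, base, mod);          /* squaring step                       */
        exp >>= 1;
    }
    return result;
}
```

A dedicated multiplier/accumulator performs these inner multiply-and-reduce steps in hardware, which is why it matters for asymmetric algorithms while the symmetric ciphers are left to the algorithm accelerator.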
The use of an internal clock rather than an external clock prevents a dedicated attacker from manipulating the circuit by controlling the clock.Clock DetectorClock detector 32 can monitor the frequency of the clock signal. If the clock frequency is outside a preset range, an alarm can be generated so that the processor can take appropriate action to shut down or otherwise protect private information. This detector is useful primarily when an external clock source is used.Watchdog TimerWatchdog timer 36 can monitor program execution and data transfers. The program can be designed to pre-load the timer with predetermined values, either at periodic intervals or at the start of a particular routine. If the program operates as expected, the timer will always be reloaded or stopped before time expires. If the timer expires, it indicates that an unexpected change has occurred in program execution and an alarm can be generated. Watchdog timer 36 can also be used to monitor events that depend on external operations, such as data transfers between circuit 21 and another device. Because watchdog timers normally measure time in milliseconds rather than microseconds or nanoseconds, base clock 31 can be reduced to a lower frequency clock to provide a more useful time base for the watchdog timer.Alarm LogicAn alarm system is critical to any security design because it protects against failures or malicious attacks by alerting the system to take additional protective measures.Alarm logic 34 provides a consolidation point for the various alarms that can be generated, and sends appropriate signals to processor 22 so that it can take action to prevent loss of private information or other data. As shown in Fig. 3, alarm signals can also be sent to host interface 28, and from there to the host system, and can also be provided directly to external devices. In addition to the alarms described in the previous paragraphs, alarm logic 34 can also process the following alarms:1) Bad key alarm-This monitors cryptographic keys and generates an alarm when a bad key is encountered. The specific identification of bad keys is unique for each algorithm. 2) Manual key entry alarm-The monitors the veracity of keys that are manually loaded. Manually loaded keys should have an error detection code, such as a parity code, or should use duplicate entries in order to verify the accuracy of the entered keys. 3) Randomizer alarm-This tests the output of RNG 38 and verifies that the output is statistically random. Various known tests can be used to perform this verification, both at power up and at various points during operation. 4) Software/firmware alarm-On power up, the program can be tested to verify that it has not been corrupted. This can be done by an Error Detection Code (EDC) or by a digital signature applied to the program contents. 5) Self Tests-Various system self tests can be performed on power up, after a reset, or when commanded by the host. Self tests can include an instruction set test, a flash memory test, a RAM test, and known-answer test with M/A 39.Monotonic CounterMonotonic counter 40 is shown connected to the internal bus, but can also be implemented with other connections, or can be implemented in software or firmware. A monotonic counter is a counter that can only increment (or only decrement) and never repeats a number, implying that it must never be allowed to reset or cycle back to its starting count. 
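The preload-and-reload usage of watchdog timer 36 described above might look like the sketch below; the register addresses and the transfer_one_unit() helper are invented here for illustration and are not part of the disclosure.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical watchdog registers (addresses are illustrative assumptions). */
#define WDT_LOAD ((volatile uint32_t *)0x40002000u)  /* preload register  */
#define WDT_KICK ((volatile uint32_t *)0x40002004u)  /* restart register  */

/* Placeholder for one unit of a monitored data transfer. */
extern bool transfer_one_unit(void);

void monitored_transfer(uint32_t timeout_ticks, int units) {
    *WDT_LOAD = timeout_ticks;          /* preload with the expected worst-case time */
    for (int i = 0; i < units; i++) {
        if (!transfer_one_unit())
            break;                      /* stop kicking on failure: watchdog will fire */
        *WDT_KICK = 1;                  /* reload before the timer can expire          */
    }
}
/* If execution stalls inside transfer_one_unit(), the watchdog expires and
 * alarm logic 34 can alert processor 22, as described above. */
```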
Monotonic counter 40 can be used to provide a unique identification number for every communication to/from circuit 21. This prevents a communication from being recorded and later played back to simulate a legitimate communication. Since the counter value used with the recorded communication would no longer match the current counter value, this type of security attack can be detected as soon as the recorded communication is transmitted to circuit 21. Additional security can be achieved by having the counter increment in a non-linear fashion, so that the current counter value cannot be guessed simply by counting the number of communications that have taken place since the recorded transmission. Although the security contents of circuit 21 are generally inaccessible and unmodifiable from external to the circuit, in one embodiment the program of embedded CPU 22 can be modified or replaced by downloading a new program into secure circuit 21. The downloaded program can be authenticated by embedded CPU 22 before being accepted and used, to prevent an illicit program from being inserted to compromise the security of the system. The downloading can take place through host interface 28, or can take place through a separate security interface (not shown). In one embodiment, an authorized user may be granted direct access to the contents of hidden flash memory 27, if that user is first authenticated. System Operation: Flash memory 27 can be used to store the secure biometric map that identifies each authorized user. Whenever a user requests access to the system, his or her biometric data can be read by biometric reader 23 and provided through reader interface 24. This biometric data can be compared to the stored biometric data of all authorized users in the system. If a match is found, a 'user verified' message can be sent to host processor 20 through host interface 28, permitting host processor 20 to initiate the requested operation. In one embodiment, the host is also told which functions or resources this particular user is authorized to use. Once secure user data is placed in a file in hidden flash memory 27, that user data is inaccessible to any device outside the perimeters of secure circuit 21. Bus 29 that connects to hidden flash memory 27 does not have an external port. Embedded processor 22 is the only device that is coupled to both hidden flash memory 27 and the external world, and the operation of processor 22 can be restricted by placing its operating code in PROM so that the code cannot be modified to redirect processor 22's operations. Alternatively, processor 22 can permit new operating code to be downloaded, provided processor 22 authenticates the new code before accepting it or using it. Most biometric readers do not transmit the raw biometric data for comparison purposes, but rather convert it into data that focuses on the most relevant parameters. For example, the digitized image of a fingerprint may require several thousand bytes of data. But fingerprint technology focuses on the location, orientation, and nature of specific features of a fingerprint, which can be reduced down to a few hundred bytes. These few hundred bytes define a fingerprint 'map', and it is this map that is stored and later used as a reference for comparison purposes. When a user requests access to the system, his recently-input fingerprint is also converted to a map, which is then compared with the maps currently stored in hidden flash memory 27 to determine if the user is authorized. 
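A minimal sketch of the replay check enabled by monotonic counter 40 is shown below; the message layout is an assumption made for illustration, and in a real system the counter value would also be covered by the message's authentication.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative message framing: each message carries the sender's counter value. */
struct message {
    uint64_t counter;      /* value taken from the monotonic counter by the sender */
    /* ... authenticated payload ... */
};

static uint64_t last_accepted;   /* highest counter value accepted so far */

/* Accept only messages whose counter strictly exceeds the last accepted value,
 * so a recorded message replayed later fails the check. */
bool accept_message(const struct message *m) {
    if (m->counter <= last_accepted)
        return false;            /* stale counter: likely a replayed recording */
    last_accepted = m->counter;
    return true;
}
```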
In conventional systems, the user's fingerprint map is generated in biometric reader 23. However, public policy concerning privacy issues treats this data as extremely sensitive information, and generation of the map should take place only in a secure environment. Depending on the construction of the system, the link between biometric reader 23 and reader interface 24 may be subject to monitoring, and the fingerprint map should not appear on this link. For that reason, one embodiment of the invention generates biometric maps within circuit 21, using processor 22 and the memories on bus 29 as needed. The resulting map is therefore never exposed to any external interface of secure circuit 21, and cannot be read by any external device. Other types of biometric data can be treated similarly. Voice data can be converted into relevant frequency, amplitude, and time components, which can then be processed through an algorithm to produce a voice map of the speaker's voice. A retina scan can produce an image of the user's eye, which is then processed to generate a retina map that describes the characteristics of the user's retina. Although each technology has its own identifying characteristics, each can be processed by a system of the invention by following the steps of : 1) registering a user by reading the relevant biometric data, converting that data to a map, and storing the map in non-volatile memory, 2) identifying an authorized user by reading the requestor's relevant biometric data, converting it to a map, and comparing the map with the previously-stored maps, 3) if a match is found, sending a message to a host system designating the requestor as an authorized user, and in some embodiments identifying the scope of that user's access to the system, 4) if a match is not found, sending a message to the host system that the requestor is not an authorized user. Fig. 4 shows a specific system-level embodiment, in which the aforementioned security system is placed into a cellular telephone 4 having a fingerprint reader 23 integrated into cell phone 4 to identify the user. The reader can be conveniently placed on the cell phone to read the fingerprint of a person holding the phone. The user can initially be registered in the phone by a pre-authorized system administrator, who directs the system to enter the new user's thumbprint data into its database of authorized users. The first person to enter their print into the phone might be automatically designated as a system administrator. Alternately, a separate facility can be provided to create the fingerprint map, which is then downloaded into the system through a designated channel. Regardless of how the database is loaded, a user requesting access can place their thumbprint over fingerprint reader 23, which will digitize the image and send it through user interface 24 to processor 22. Processor 22 can then generate the fingerprint map for that image, and compare it with the one or more maps stored in non-volatile memory 27.Each stored map can also have an associated list of resources that that user is authorized to use. If the comparison is successful (i. e., if the map matches one stored in memory), processor 22 can send a signal to host processor 20 indicating the requestor is an authorized user, and indicating which resources that user is permitted to use. 
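The four registration and identification steps listed above can be sketched as follows; convert_to_map() and maps_match() are placeholders for the map-generation and matching algorithms, which in practice perform fuzzy rather than exact comparison, and the map size and database capacity are illustrative assumptions.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAP_BYTES 256   /* assumption: a few hundred bytes per map, as noted above */
#define MAX_USERS 8     /* illustrative database size */

/* Placeholders for the map-generation and matching algorithms. */
extern void convert_to_map(const uint8_t *raw, size_t raw_len, uint8_t map[MAP_BYTES]);
extern bool maps_match(const uint8_t a[MAP_BYTES], const uint8_t b[MAP_BYTES]);

static uint8_t user_maps[MAX_USERS][MAP_BYTES];   /* conceptually held in hidden flash 27 */
static size_t  user_count;

/* Step 1: register a user by converting raw biometric data to a map and storing it. */
bool register_user(const uint8_t *raw, size_t raw_len) {
    if (user_count >= MAX_USERS)
        return false;
    convert_to_map(raw, raw_len, user_maps[user_count++]);
    return true;
}

/* Steps 2-4: convert the requestor's data to a map and compare it with stored maps;
 * a non-negative return means "authorized user", -1 means "not authorized". */
int identify_user(const uint8_t *raw, size_t raw_len) {
    uint8_t map[MAP_BYTES];
    convert_to_map(raw, raw_len, map);
    for (size_t i = 0; i < user_count; i++)
        if (maps_match(map, user_maps[i]))
            return (int)i;
    return -1;
}
```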
Host processor 20 can then enable the requested services, such as accepting a telephone number from the cell phone keypad 45 and using communications circuits 46 to transmit that number over the cell phone network. In a system designed for voice print identification, the existing microphone in the cell phone can be used for the biometric reader. Some form of random word prompting might be necessary to avoid the problem of a recorded voice being used to improperly gain access to the system. The invention can be implemented in hardware and/or as a method. The invention can also be implemented as instructions stored on a machine-readable medium, which can be read and executed by at least one processor to perform the functions described herein.A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e. g., a computer). For example, a machinereadable medium can include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e. g., carrier waves, infrared signals, digital signals, etc.), and others. The foregoing description is intended to be illustrative and not limiting. Variations will occur to those of skill in the art. Those variations are intended to be included in the invention, which is limited only by the spirit and scope of the appended claims. |
Systems and methods consistent with the present disclosure may be utilized to negate the distinction between a display device operating in video and command modes in that commands associated with either mode are prioritized and executed according to a command scheduler consistent with the present disclosure. A command scheduler consistent with the present disclosure includes a display driver stack and a scheduler coupled to the display driver stack. The scheduler is configured to receive commands from the driver stack. Further, the scheduler is configured to queue and schedule the commands to be executed during a boot environment and during runtime. A host controller may also be coupled to the scheduler and may receive at least one of the commands from the scheduler. In time, the host controller transfers the commands to a device for execution. |
A system, comprising:a display driver stack;a scheduler coupled to the display driver stack wherein the scheduler is to receive at least one command from the driver stack;wherein the scheduler is to queue the at least one command and is to schedule a plurality of commands to be executed during a boot environment and during runtime; anda host controller coupled to the scheduler wherein the host controller is to receive the at least one of the commands from the scheduler.The system of claim 1, wherein the scheduler includes a timer triggered queue, static queue, and dynamic queue which receive commands from the display driver stack.The system of claim 1 further comprising a display device coupled to execute commands received from the host controller.The system of claim 1, wherein the host controller includes a MIPI host controller.The system of claim 1, wherein the plurality of commands includes display commands.The system of claim 1, wherein the host controller includes a FIFO buffer to receive commands from the scheduler.The system of claim 1, wherein the display driver stack may receive a tearing effect signal which in turns forwards to an operating system as an indication that a device coupled to the display driver stack is functional.The system of claim 1, wherein the scheduler is further to send queued commands on timer input set by the display driver stack.The system of claim 1, wherein the scheduler is further to send queued commands on TE interrupt.An apparatus comprising:a host controller; anda scheduler for a display interface, the scheduler comprising a first queue to store commands that are to be flushed and provided to the host controller on tearing effect (TE) signal event and a second queue that is to provide its contents to a host controller and not be flushed in response to a TE signal event, wherein the scheduler is to select between the first queue and the second queue to support an emulated video mode.The apparatus of claim 10, wherein the scheduler includes a first queue to store commands that are to be executed upon a tearing effect signal event; a second queue to store a fixed set of commands to write data in a fixed set of memory addresses; and a third queue to store commands which are to be delayed before these commands are executed.The apparatus of claim 10, wherein the command scheduler further includes a control register to perform at least one of the following functions: force reset the scheduler, force flush the first, second, or third queues, enable or disable the first, second, or third queues, and provide a status of the first, second, or third queues.The system of claim 10, wherein the host controller includes a FIFO buffer to receive commands from the scheduler.Machine-readable storage including machine-readable instructions, when executed, to implement a method or realize an apparatus as claimed in any preceding claim.An apparatus comprising means to perform a method as claimed in any preceding claim. |
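A behavioral sketch of the queue handling recited in the claims above is given below, assuming one queue that is flushed to the host controller on a tearing-effect (TE) event and one whose contents are replayed without being flushed; host_controller_send() is a placeholder for the MIPI host controller FIFO path, and the queue depth is an arbitrary choice rather than a detail from the claims.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define QUEUE_DEPTH 16   /* illustrative */

struct cmd_queue {
    uint32_t cmd[QUEUE_DEPTH];
    size_t   count;
};

/* Placeholder for pushing a command into the host controller FIFO. */
extern void host_controller_send(uint32_t cmd);

/* Send every queued command; optionally empty the queue afterward. */
static void drain(struct cmd_queue *q, bool flush) {
    for (size_t i = 0; i < q->count; i++)
        host_controller_send(q->cmd[i]);
    if (flush)
        q->count = 0;   /* first queue: emptied after the TE event            */
                        /* second queue: contents kept for the next TE period */
}

/* On a TE event, select between the persistent queue (to emulate video mode by
 * resending fixed frame-update commands) and the one-shot queue filled by the
 * display driver stack. */
void on_te_event(struct cmd_queue *te_flushed, struct cmd_queue *persistent,
                 bool emulate_video_mode) {
    if (emulate_video_mode)
        drain(persistent, false);
    drain(te_flushed, true);
}
```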
FIELD This disclosure pertains to computing systems, and in particular (but not exclusively), to techniques for scheduling commands within a display system. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a diagram illustrating an embodiment of a block diagram for a computing system including a multicore processor.FIG. 2 is a diagram illustrating an embodiment of a low power computing platform.FIG. 3 is a diagram illustrating an embodiment of a low power data transmission platform.FIG. 4 is a diagram illustrating an embodiment of a System-On-A-Chip (SoC) device sending data to a display device along a communication interface.FIG. 5 is a diagram illustrating an embodiment of an operation of a display memory during operation of a command mode.FIG. 6 is a configuration of display controller hardware.FIG. 7 is a diagram illustrating an embodiment of display controller hardware consistent with the present disclosure.FIG. 8 is a diagram illustrating another embodiment of display controller hardware consistent with the present disclosure.FIG. 9 is a diagram illustrating a flowchart of a method of using tearing effect signals to indicate the functionality of a display device.FIG. 10 is another diagram illustrating a flowchart of a method of scheduling commands within a display system. DETAILED DESCRIPTION In the following description, numerous specific details are set forth, such as examples of specific types of processors and system configurations, specific hardware structures, specific architectural and micro architectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, specific processor pipeline stages and operation etcetera in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present disclosure. In other instances, well known components or methods, such as specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expression of algorithms in code, specific power down and gating techniques/logic and other specific operational details of computer system haven't been described in detail in order to avoid unnecessarily obscuring the present disclosure.Although the following embodiments may be described with reference to energy conservation and energy efficiency in specific integrated circuits, such as in computing platforms or microprocessors, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments described herein may be applied to other types of circuits or semiconductor devices that may also benefit from better energy efficiency and energy conservation. For example, the disclosed embodiments are not limited to desktop computer systems or Ultrabooks™. And may be also used in other devices, such as handheld devices, tablets, other thin notebooks, systems on a chip (SOC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. 
Embedded applications typically include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that may perform the functions and operations taught below. Moreover, the apparatus', methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency. As will become readily apparent in the description below, the embodiments of methods, apparatus', and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are vital to a 'green technology' future balanced with performance considerations.As computing systems are advancing, the components therein are becoming more complex. As a result, the interconnect architecture to couple and communicate between the components is also increasing in complexity to ensure bandwidth requirements are met for optimal component operation. Furthermore, different market segments demand different aspects of interconnect architectures to suit the market's needs. For example, servers require higher performance, while the mobile ecosystem is sometimes able to sacrifice overall performance for power savings. Yet, it's a singular purpose of most fabrics to provide highest possible performance with maximum power saving. Below, a number of interconnects are discussed, which would potentially benefit from aspects of the disclosure described herein.Note that the apparatus, methods, and systems described above may beimplemented in any electronic device or system as aforementioned. As specific illustrations, the figures below provide exemplary systems for utilizing the invention as described herein. As the systems below are described in more detail, a number of different interconnects are disclosed, described, and revisited from the discussion above. And as is readily apparent, the advances described above may be applied to any of those interconnects, fabrics, or architectures.Referring to FIG. 1 , an embodiment of a block diagram for a computing system including a multicore processor is depicted. Processor 100 includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SOC), or other device to execute code. Processor 100, in one embodiment, includes at least two cores-core 101 and 102, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor 100 may include any number of processing elements that may be symmetric or asymmetric.In one embodiment, a processing element refers to hardware or logic to support a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element, which is capable of holding a state for a processor, such as an execution state or architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. 
A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.Physical processor 100, as illustrated in FIG. 1 , includes two cores-core 101 and 102. Here, core 101 and 102 are considered symmetric cores, i.e. cores with the same configurations, functional units, and/or logic. In another embodiment, core 101 includes an out-of-order processor core, while core 102 includes an in-order processor core. However, cores 101 and 102 may be individually selected from any type of core, such as a native core, a software managed core, a core adapted to execute a native Instruction Set Architecture (ISA), a core adapted to execute a translated Instruction Set Architecture (ISA), a co-designed core, or other known core. In a heterogeneous core environment (i.e. asymmetric cores), some form of translation, such a binary translation, may be utilized to schedule or execute code on one or both cores. Yet to further the discussion, the functional units illustrated in core 101 are described in further detail below, as the units in core 102 operate in a similar manner in the depicted embodiment.As depicted, core 101 includes two hardware threads 101a and 101b, which may also be referred to as hardware thread slots 101a and 101b. Therefore, software entities, such as an operating system, in one embodiment potentially view processor 100 as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently. As alluded to above, a first thread is associated with architecture state registers 101a, a second thread is associated with architecture state registers 101b, a third thread may be associated with architecture state registers 102a , and a fourth thread may be associated with architecture state registers 102b. Here, each of the architecture state registers (101a, 101b, 102a , and 102b) may be referred to as processing elements, thread slots, or thread units, as described above. As illustrated, architecture state registers 101a are replicated in architecture state registers 101b, so individual architecture states/contexts are capable of being stored for logical processor 101a and logical processor 101b. In core 101, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block 130 may also be replicated for threads 101a and 101b. Some resources, such as re-order buffers in reorder/retirement unit 135, ILTB 120, load/store buffers, and queues may be shared through partitioning. 
Other resources, such as general purpose internal registers, page-table base register(s), low-level data-cache and data-TLB 115, execution unit(s) 140, and portions of out-of-order unit 135 are potentially fully shared.Processor 100 often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. In FIG. 1 , an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core 101 includes a simplified, representative out-of-order (OOO) processor core. But an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer 120 to predict branches to be executed/taken and an instruction-translation buffer (I-TLB) 120 to store address translation entries for instructions.Core 101 further includes decode module 125 coupled to fetch unit 120 to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots 101a, 101b, respectively. Usually core 101 is associated with a first ISA, which defines/specifies instructions executable on processor 100. Often machine code instructions that are part of the first ISA include a portion of the instruction (referred to as an opcode), which references/specifies an instruction or operation to be performed. Decode logic 125 includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. For example, as discussed in more detail below decoders 125, in one embodiment, include logic designed or adapted to recognize specific instructions, such as transactional instruction. As a result of the recognition by decoders 125, the architecture or core 101 takes specific, predefined actions to perform tasks associated with the appropriate instruction. It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions; some of which may be new or old instructions. Note decoders 126, in one embodiment, recognize the same ISA (or a subset thereof). Alternatively, in a heterogeneous core environment, decoders 126 recognize a second ISA (either a subset of the first ISA or a distinct ISA).In one example, allocator and renamer block 130 includes an allocator to reserve resources, such as register files to store instruction processing results. However, threads 101a and 101b are potentially capable of out-of-order execution, where allocator and renamer block 130 also reserves other resources, such as reorder buffers to track instruction results. Unit 130 may also include a register renamer to rename program/instruction reference registers to other registers internal to processor 100. Reorder/retirement unit 135 includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out-of-order.Scheduler and execution unit(s) block 140, in one embodiment, includes a scheduler unit to schedule instructions/operation on execution units. For example, a floating point instruction is scheduled on a port of an execution unit that has an available floating point execution unit. 
Register files associated with the execution units are also included to store information instruction processing results. Exemplary execution units include a floating point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units.Lower level data cache and data translation buffer (D-TLB) 150 are coupled to execution unit(s) 140. The data cache is to store recently used/operated on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB is to store recent virtual/linear to physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages.Here, cores 101 and 102 share access to higher-level or further-out cache, such as a second level cache associated with on-chip interface 110. Note that higher-level or further-out refers to cache levels increasing or getting further way from the execution unit(s). In one embodiment, higher-level cache is a last-level data cache-last cache in the memory hierarchy on processor 100 -such as a second or third level data cache. However, higher level cache is not so limited, as it may be associated with or includes an instruction cache. A trace cache-a type of instruction cache-instead may be coupled after decoder 125 to store recently decoded traces. Here, an instruction potentially refers to a macro-instruction (i.e. a general instruction recognized by the decoders), which may decode into a number of micro-instructions (micro-operations).In the depicted configuration, processor 100 also includes on-chip interface module 110. Historically, a memory controller, which is described in more detail below, has been included in a computing system external to processor 100. In this scenario, on-chip interface 110 is to communicate with devices external to processor 100, such as system memory 175, a chipset (often including a memory controller hub to connect to memory 175 and an I/O controller hub to connect peripheral devices), a memory controller hub, a northbridge, or other integrated circuit. And in this scenario, bus 105 may include any known interconnect, such as multi-drop bus, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g. cache coherent) bus, a layered protocol architecture, a differential bus, and a GTL bus.Memory 175 may be dedicated to processor 100 or shared with other devices in a system. Common examples of types of memory 175 include DRAM, SRAM, non-volatile memory (NV memory), and other known storage devices. Note that device 180 may include a graphic accelerator, processor or card coupled to a memory controller hub, data storage coupled to an I/O controller hub, a wireless transceiver, a flash device, an audio controller, a network controller, or other known device.Recently however, as more logic and devices are being integrated on a single die, such as SOC, each of these devices may be incorporated on processor 100. For example, in one embodiment, a memory controller hub is disposed on the same package and/or die as processor 100. Here, a portion of the core (an on-core portion) 110 includes one or more controller(s) for interfacing with other devices such as memory 175 or a graphics device 180. The configuration including an interconnect and controllers for interfacing with such devices is often referred to as an on-core (or un-core configuration). 
As an example, on-chip interface 110 includes a ring interconnect for on-chip communication and a high-speed serial point-to-point link 105 for off-chip communication. Yet, in the SOC environment, even more devices, such as the network interface, co-processors, memory 175, graphics processor 180, and any other known computer devices/interface may be integrated on a single die or integrated circuit to provide small form factor with high functionality and low power consumption.In one embodiment, processor 100 is capable of executing a compiler, optimization, and/or translator code 177 to compile, translate, and/or optimize application code 176 to support the apparatus and methods described herein or to interface therewith. A compiler often includes a program or set of programs to translate source text/code into target text/code. Usually, compilation of program/application code with a compiler is done in multiple phases and passes to transform hi-level programming language code into low-level machine or assembly language code. Yet, single pass compilers may still be utilized for simple compilation. A compiler may utilize any known compilation techniques and perform any known compiler operations, such as lexical analysis, preprocessing, parsing, semantic analysis, code generation, code transformation, and code optimization.Larger compilers often include multiple phases, but most often these phases are included within two general phases: (1) a front-end,i.e.generally where syntactic processing, semantic processing, and some transformation/optimization may take place, and (2) a back-end,i.e.generally where analysis, transformations, optimizations, and code generation takes place. Some compilers refer to a middle, which illustrates the blurring of delineation between a front-end and back end of a compiler. As a result, reference to insertion, association, generation, or other operation of a compiler may take place in any of the aforementioned phases or passes, as well as any other known phases or passes of a compiler. As an illustrative example, a compiler potentially inserts operations, calls, functions, etcetera in one or more phases of compilation, such as insertion of calls/operations in a front-end phase of compilation and then transformation of the calls/operations into lower-level code during a transformation phase. Note that during dynamic compilation, compiler code or dynamic optimization code may insert such operations/calls, as well as optimize the code for execution during runtime. As a specific illustrative example, binary code (already compiled code) may be dynamically optimized during runtime. Here, the program code may include the dynamic optimization code, the binary code, or a combination thereof.Similar to a compiler, a translator, such as a binary translator, translates code either statically or dynamically to optimize and/or translate code. 
Therefore, reference to execution of code, application code, program code, or other software environment may refer to: (1) execution of a compiler program(s), optimization code optimizer, or translator either dynamically or statically, to compile program code, to maintain software structures, to perform other operations, to optimize code, or to translate code; (2) execution of main program code including operations/calls, such as application code that has been optimized/compiled; (3) execution of other program code, such as libraries, associated with the main program code to maintain software structures, to perform other software related operations, or to optimize code; or (4) a combination thereof.
Referring to FIG. 2, an embodiment of a low power computing platform is depicted. In one embodiment, low power computing platform 200 includes a user endpoint, such as a phone, smartphone, tablet, ultraportable notebook, a notebook, a desktop, a server, a transmitting device, a receiving device, or any other known or available computing platform. The illustrated platform depicts a number of different interconnects to couple multiple different devices. An exemplary discussion of these interconnects is provided below to provide options on implementation and inclusion. However, low power platform 200 is not required to include or implement the depicted interconnects or devices. Furthermore, other devices and interconnect structures that are not specifically shown may be included.
Starting at the center of the diagram, platform 200 includes application processor 205. Often this includes a low power processor, which may be a version of a processor configuration described herein or known in the industry. As one example, processor 205 is implemented as a system on a chip (SoC). As a specific illustrative example, processor 205 includes an Intel® Architecture Core™-based processor such as an i3, i5, i7 or another such processor available from Intel Corporation, Santa Clara, CA. However, understand that other low power processors such as available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, CA, a MIPS-based design from MIPS Technologies, Inc. of Sunnyvale, CA, an ARM-based design licensed from ARM Holdings, Ltd. or customer thereof, or their licensees or adopters may instead be present in other embodiments, such as an Apple A5/A6 processor, a Qualcomm Snapdragon processor, or a TI OMAP processor.
FIG. 3 is a diagram illustrating an embodiment of a low power data transmission platform. As shown, an application layer, protocol standard layer, and physical standard layer are displayed in the figure. In particular, the application layer provides various instances of a camera serial interface (CSI) - 311, 316, 356, 361, 367, 371, and 376. Notably, CSI may include a unidirectional differential serial interface to transmit data and clock signals.
The protocol standard layer includes another instance of a CSI interface 310 and a Display Serial Interface (DSI) 315. DSI may define a protocol between a host processor and a peripheral device using a D-PHY physical interface. In addition, the protocol standard layer includes a DigRF interface 355, UniPro interface 360, Low Latency Interface (LLI) 365, SuperSpeed InterChip (SSIC) interface 370, and Peripheral Component Interconnect Express (PCIe) 375 interface.
Lastly, the physical standard layer provides a D-PHY 305 sub-layer. 
It may be understood by one having ordinary skill in the art that D-PHY includes a physical layer solution upon which MIPI camera interfaces, display serial interfaces, and general purpose high-speed/low-power interfaces are based. In addition, the physical standard layer includes an M-PHY sub-layer 350, which is the successor of D-PHY, requiring fewer pins and providing more bandwidth per pin (pair) with improved power efficiency.
Many conventional computing devices require a boot up sequence before the devices are able to operate in normal runtime mode. One having ordinary skill in the art may appreciate that boot up (e.g., booting) refers to the initial set of operations that a computer system performs after the device is turned on (e.g., when electrical power to the CPU is switched on or when the computer is reset). On modern general purpose computers, booting up may take tens of seconds and may include performing a power-on self-test, locating and initializing peripheral devices, and finding, loading, and starting an operating system. The boot up process typically ends when the device is ready to perform its normal operations during runtime.
Typically, booting occurs during "video mode" whereas runtime occurs during "command mode." The boot up process most often occurs in a limited environment which may not support interrupts. In MIPI DSI applications, a display device may operate in video mode during pre-runtime in a Unified Extensible Firmware Interface (UEFI), EFI, or BIOS environment. During video mode, display data is continuously sent by a display controller to a display device without software intervention. However, because a framework to handle interrupts is often not present in video mode, interrupts may not be handled in time or may not be handled at all.
In addition, many computing devices utilize non-real time operating systems (e.g., Windows and Linux operating systems) which are non-deterministic. As such, many of these systems cannot guarantee that display related commands are processed in real time. It should be understood by one having ordinary skill in the art that display related commands may not be considered time critical in view of other requests within the computer system which need immediate attention. However, because users interface directly with display devices (e.g., monitors), it is desirable for computing systems to be more responsive to display commands in real time.
Moreover, as the frame rate set by software applications increases (e.g., from 22, 48, 60 fps and beyond), display devices need to be capable of processing larger quantities of display information to avoid frame drops and display glitches. Accordingly, it is desirable for a display system to satisfy both time-critical and non-time-critical demands. The present disclosure addresses this need.
As described above, in MIPI DSI based communication systems, a display device typically operates in one of two modes - video or command mode. Video mode may be described as a mode of operation used in the pre-operating or pre-runtime environment which does not require any software intervention. Video mode may be used post boot, but this is not often the case, as video mode typically requires updates to the source buffer and does not implement some power saving features like Dynamic Self Refresh. In addition, video mode uses a vertical sync ("vsync") pulse to determine the live status of a display device/subsystem. A vsync pulse may also be used to indicate an end of a current scan of a display buffer. 
Alternatively, the command mode may be engaged during runtime. In some implementations, a vsync pulse is generated after each frame is processed or according to a set number of frames processed per second. For example, for computing systems that feature frame rates of 60 Hz, the corresponding number of vsync pulses for this frame rate is 60 pulses per second. Accordingly, display devices with higher refresh rate requirements will typically have more vsync pulses per interval (e.g., per second).
During the transition from video to command mode, the display device is powered down for a short period of time, during which the display screen flickers or blanks out. Accordingly, the transition from video to command mode is not typically a smooth transition. However, because user experience is becoming increasingly important, flickers and blank outs may no longer be acceptable. The present disclosure provides a solution to address these shortcomings.
Alternatively, the command mode utilizes tearing effect signals or interrupts to prevent tearing from occurring. One having ordinary skill in the art may appreciate that tearing effects may be exhibited as visual artifact(s) of two or more frames (e.g., a partial old frame and a partial new frame) within a single screen draw. Tearing may occur when the video feed to the display device is not in sync with the display device's refresh. Tearing effect signals may prevent tearing as will be described in more detail below in reference to FIG. 5. Operating in command mode may be advantageous because the power requirements are significantly less than the power typically required for a device to operate in video mode.
FIG. 4 is a diagram illustrating an embodiment of a System-On-A-Chip (SoC) device 401 sending data to a display device 402 along a communication interface 404 (e.g., a MIPI communication interface 404). In one or more embodiments, the data sent to the display device 402 includes pixel information. One having ordinary skill in the art may appreciate that pixel information may be generated via instructions from a software application, instructions to change the brightness setting on a screen, etcetera. However, the present disclosure is not limited to a MIPI communication interface 404 and may be equally applied to other communication protocols consistent with this disclosure. When the devices are in a display system, the communication interface 404 may be referred to as a MIPI Display Serial Interface 404.
Referring back to FIG. 4, display device 402 may comprise a local memory buffer 403 which may provide several advantages to a display system. For example, local memory buffer 403 may store data and display-related commands close to the display device 402. One having ordinary skill in the art may appreciate that having the local memory buffer 403 close to the display device 402 reduces traffic and thereby reduces power. Without a local memory buffer, a display controller (not shown) of the display device 402 will have to access the video data (or commands) from system memory (e.g., external memory) relatively far away from the display device 402. In one or more embodiments, the display data and commands are retrieved within one hop. During the command mode, the display device 402 has an accessible local memory buffer 403 from which it executes commands to make updates to the physical screen (not shown) of the display device 402. Typically, new frame data is transmitted to the local memory buffer 403 when there is an update to a frame. 
During video mode, the display device 402 typically does not have access to a local memory buffer and therefore display data needs to be transmitted continuously.
Moving forward, FIG. 5 is a diagram illustrating an embodiment of an operation of a display memory 500 during operation in command mode. In one or more embodiments of the present disclosure, display memory 500 includes memory to store pixel information in each memory cell 503. Specifically, memory cells 523, 533 refer to the first and last cells of the display memory 500. A read pointer 506 is shown pointing to memory cell 533 and a write pointer 507 is shown pointing to memory cell 523. In one or more embodiments, data is read from a memory cell 503 pointed to by read pointer 506 whereas data is written into a memory cell 503 pointed to by write pointer 507. For instance, in FIG. 5, data is read from memory cell 533 whereas data is written into memory cell 523. To avoid tearing effects, none of the memory cells 503 should be read from and written to at the same time. As such, to avoid tearing within the command mode, a tearing effect signal/interrupt provides the operating system with the location of the read pointer 506 on the display screen. For instance, a tearing effect signal 508 may be generated once the read pointer reaches a specific location of the screen such as the end of the frame or near the end of the frame.
In one or more embodiments, the tearing effect signal is sent from the display device and triggers the write pointer 507 to begin writing on the frame. Therefore, the likelihood that the read pointer 506 and the write pointer 507 point to the same memory cell 503 is significantly reduced. One having ordinary skill in the art may appreciate that the tearing effect signal is used for synchronization between the read and write pointers 506, 507.
FIG. 6 is a diagram illustrating a configuration of display controller hardware 600. Display controller hardware 600 includes a display driver stack 610 coupled to a generic MIPI host controller 620. Display driver stack 610 may include software that communicates to the operating system and other software applications how to communicate with the display device 650. In operation, display driver stack 610 receives tearing effect signals from display device 650 via signal path 615. When the tearing effect signal propagates along the signal path 615 to display driver stack 610, display data and commands are forwarded to host controller 620. Host controller 620 may translate the signals (e.g., into MIPI) and may subsequently send the translated signals to the display device 650 when a new frame is received (e.g., a user moves a mouse). In conventional systems, the operating system typically issues memory writes immediately after a tearing effect signal arrives and it detects a need to send a new frame to the display when pixel data has changed.
Most general purpose operating systems are non-deterministic and therefore do not guarantee timely handling of tearing effect signals/interrupts. Accordingly, unpredictable delays may be introduced between the generation of tearing effect signals and updating the display. If tearing effect signals are not handled in a timely manner, tearing effects may be observed on the display screen of the display device. 
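As a purely illustrative, non-limiting sketch of the read pointer/write pointer synchronization described above with reference to FIG. 5, the following C-language fragment models a display memory whose scan-out raises a tearing effect event at the end of a frame and whose writes are held off until that event occurs. The structure and function names, the cell count, and the single whole-frame copy are hypothetical illustration choices and are not part of the disclosed hardware.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define FRAME_CELLS 1024          /* hypothetical number of memory cells per frame */

    struct display_memory {
        uint32_t cell[FRAME_CELLS];   /* pixel storage, one entry per memory cell      */
        int read_ptr;                 /* cell currently being scanned out to the panel */
        int write_ptr;                /* cell currently being updated with new data    */
    };

    /* Scan-out advances the read pointer; a tearing effect (TE) event is raised
     * when the read pointer wraps at the end of the frame. */
    bool scan_one_cell(struct display_memory *dm)
    {
        dm->read_ptr = (dm->read_ptr + 1) % FRAME_CELLS;
        return dm->read_ptr == 0;     /* true == TE event: safe point to begin writing */
    }

    /* Writing a new frame is started only after a TE event, so the write pointer
     * trails the read pointer and the two do not land on the same cell mid-frame. */
    void write_frame_on_te(struct display_memory *dm,
                           const uint32_t new_frame[FRAME_CELLS], bool te_event)
    {
        if (!te_event)
            return;                   /* hold the update until the read pointer is safe */
        dm->write_ptr = 0;
        memcpy(dm->cell, new_frame, sizeof dm->cell);
        dm->write_ptr = FRAME_CELLS - 1;
    }

The sketch only illustrates why gating writes on the tearing effect event keeps the read and write pointers from colliding; an actual display memory is read and written cell by cell in hardware rather than through a single memory copy.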
Moving forward, FIG. 7 is a diagram illustrating an embodiment of display controller hardware 700 consistent with the present disclosure. As will be described in more detail below, the addition of a command scheduler 740 may function to store and send commands to a host controller 720. In the embodiment shown, command scheduler 740 supports the following logic blocks: control register 730, timer 731, timer triggered queue 732, static queue 733, dynamic queue 734, and bus arbiter 735. The command scheduler 740 may send queued commands on a timer input set by the display driver 710. In addition, the command scheduler 740 may send queued commands, pre-set commands, and pixel data upon each occurrence of a tearing effect signal.
Command scheduler 740 may support several modes of operation. For instance, command scheduler 740 may send commands based on a tearing effect signal event and based on a timer event. In one or more embodiments, the commands sent based on a tearing effect signal event may be categorized as one of two command types. The first command type may include commands which are stored in the static queue 733 but are never automatically flushed from the queue 733. As such, the commands in this queue 733 are repeated for every TE event until the command(s) are removed therefrom. Accordingly, the static queue 733 may be referred to as a "sticky queue" because the commands therein are never flushed according to one or more embodiments of the present disclosure. However, in some embodiments, software may be utilized to remove commands from the static queue 733. The second command type includes commands which are sent to the dynamic queue 734 when the operating system detects a new frame. In contrast to the commands stored in the static queue 733, commands stored in the dynamic queue 734 are flushed after they are sent (via bus arbiter 735, generic host controller 720, etc.) to a display device 750 for execution. For example, a command stored in the dynamic queue 734 may be a command type which instructs the display device 750 to perform a specific function after a certain period of time (e.g., milliseconds). Accordingly, instituting a delay in the execution of a command may ensure that a command is not sent and executed too soon. For instance, delaying the execution of commands may help with boot timing, brightness settings, panel power sequencing, etcetera.
During operation, display device 750 generates a tearing effect signal which is propagated along signal line 715 to the display driver 710 via signal line 716. In addition, the tearing effect signal is propagated to the command scheduler 740 along signal line 715. Advantageously, the display driver stack 710 may send the tearing effect signal to the operating system as an indication that the display device 750 is functioning. In return, the operating system may send any new commands according to a user's input (e.g., detection of a new screen artifact, etc.) to the display driver stack 710. In time, these command(s) are sent to the dynamic queue 734. As such, the dynamic queue 734 may store commands which are to be executed by the display device 750 for a new frame to be displayed. The dynamic queue 734 therefore addresses the limitation of general purpose operating systems of not guaranteeing timely command handling.
The command scheduler 740 ensures that commands are executed in a timely manner to prevent frame drops and glitches from occurring. In one or more embodiments, the dynamic queue 734 may be continuously flushed of old commands ("dirty frames") after they have been executed by the display device 750. 
Accordingly, the dynamic queue 734 is filled with new commands to display new frame(s) on the display device 750. Advantageously, the operating system is able to distinguish between new frames and dirty frames such that new, unexpected commands are executed to display the new frames.
For example, when a new frame is detected by the operating system, commands for displaying the new frame are sent to the dynamic queue 734. In time, a memory write command (e.g., 0x2c), along with the memory address where the pixel data associated with the new frame is stored, may be stored in the dynamic queue 734. Once a tearing effect signal is received, a command (or set of commands) is propagated to the First In First Out (FIFO) buffer 725 of the host controller 720 via the bus arbiter 735 and eventually executed by the display device 750. It should be appreciated by those having ordinary skill in the art that the commands for the new frame are synchronized with the tearing effect signal to prevent tearing effects on the display device 750. Advantageously, the command scheduler 740 schedules and sends the commands to the display device 750 in a timely manner according to its dedicated hardware without any aid from the operating system.
In addition, when a tearing effect signal is received by the command scheduler 740, commands within the static queue 733 are executed by the display device 750. Most importantly, the commands within the static queue 733 are not flushed after they are executed by the display device 750 such that the commands within this queue 733 are executed each time a tearing effect signal is received by the command scheduler 740. For example, when a tearing effect signal is received, a memory write command (e.g., 0x2c), along with the address where the pixel data information is stored, may be sent to the FIFO buffer 725. In this example, the memory write command (0x2c) remains in the static queue 733 after the command is executed by the display device 750. In particular, static queue 733 may retain commands that are executed during boot up of display device 750. More specifically, the commands executed in a typical EFI (or UEFI) or GOP environment may be stored in static queue 733 and scheduled by the command scheduler 740. Although the display device may operate exclusively in the command mode, the execution of commands stored in the static queue 733 may emulate operating in the video mode without software intervention.
Accordingly, the command scheduler 740 may be utilized to negate the distinction between operating in traditionally defined "video" and "command" modes because all commands associated with either mode are prioritized and executed according to the configuration of the command scheduler 740 as described herein.
In addition, status bits may be associated with both static queue 733 and dynamic queue 734 to indicate whether commands are presently stored in each respective queue. In addition, commands may also be removed from queues 733, 734 according to a dequeue instruction.
In one or more embodiments of the present disclosure, the static queue 733 and the dynamic queue 734 cannot send commands to the host controller 720 simultaneously. As such, commands from only one of the queues 733, 734 may be sent to the bus arbiter 735 at any given time. 
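As a purely illustrative, non-limiting sketch of the queue behavior just described, the following C-language fragment shows one way the two disciplines could be modeled: on each tearing effect event the static ("sticky") queue is replayed but retained, while the dynamic queue is sent once for the new frame and then flushed. The names, the queue depth, and the stand-in for the host controller FIFO are hypothetical and are not part of the disclosed hardware.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define QUEUE_DEPTH 16            /* hypothetical queue depth */

    struct cmd_queue {
        uint8_t cmd[QUEUE_DEPTH];
        size_t  count;
    };

    /* Hypothetical sink standing in for the host controller FIFO 725. */
    static void fifo_push(uint8_t cmd)
    {
        printf("host controller FIFO <- 0x%02x\n", (unsigned)cmd);
    }

    /* On each tearing effect event, commands in the static ("sticky") queue are
     * replayed but retained, while commands in the dynamic queue are sent once
     * for the new frame and then flushed. */
    void dispatch_on_te(struct cmd_queue *static_q, struct cmd_queue *dynamic_q)
    {
        for (size_t i = 0; i < static_q->count; i++)
            fifo_push(static_q->cmd[i]);   /* not removed: repeated on every TE event */

        for (size_t i = 0; i < dynamic_q->count; i++)
            fifo_push(dynamic_q->cmd[i]);
        dynamic_q->count = 0;              /* flushed once the new frame's commands are sent */
    }

In an actual scheduler the FIFO write would be a hardware transaction rather than a function call; the sketch only illustrates the retain-versus-flush distinction between the two queues.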
The order that the commands are sent from the static queue 733 and the dynamic queue 734 (in addition to the timer triggered queue 732) may be determined by a priority scheme employed by the bus arbiter 735 as will be described in more detail below.
The command scheduler 740 may also schedule commands which need to be executed at a particular time. For instance, these types of commands may have a value component associated therewith. For example, if a user adjusts the brightness of the display device 750 (e.g., backlight) from a setting A (e.g., 25) to a setting B (e.g., 75), the brightness may increase linearly to effect a smooth transition. Accordingly, a timer block 731 may work in cooperation with the timer triggered queue 732. Timer block 731 may include a free running, general purpose timer.
In addition, there are some commands which should be delayed before being executed by the display device 750. For example, command scheduler 740 may institute a delay between two or more command executions to preserve and extend the lifetime of the display device 750. For commands based on timer interrupt(s), the display driver stack 710 may write the commands to be sent into the timer triggered queue 732 along with a time stamp indicating when these commands need to be sent for execution. When the timer value matches the time stamp, the command may be transferred to the host controller FIFO 725 and sent to the display device 750 for execution.
Accordingly, the command scheduler 740 may schedule various types of commands such as those which are time sensitive (e.g., commands which need to be delayed and stored in the timer triggered queue 732). The command scheduler 740 may also store commands that are always executed upon receipt of each tearing effect signal (commands stored in the static queue 733). Furthermore, commands related to the operating system's detection of a new frame, and executed upon the event of a TE interrupt signal, are stored in the dynamic queue 734.
Moving along in the figure, bus arbiter 735 selects the timer triggered queue 732, static queue 733, or the dynamic queue 734 to receive commands from the selected queue. Thus, only one of the queues 732, 733, 734 may send commands to the bus arbiter 735 at any given time. In one or more embodiments, bus arbiter 735 includes a multiplexer component (not shown) to select commands from the timer triggered, static, and dynamic queues 732, 733, 734. In one or more embodiments, bus arbiter 735 selects commands according to the order of the following priority scheme: first, dynamic queue 734; second, static queue 733; and third, timer triggered queue 732. However, one having ordinary skill in the art may appreciate that the present disclosure is not limited to a bus arbiter 735 comprising a multiplexer component therein. Any component within the bus arbiter 735 which may enable the bus arbiter 735 to selectively choose commands from the static queue 733 or dynamic queue 734 is within the spirit and scope of the present disclosure.
In time, after the bus arbiter 735 receives the commands from either the static queue 733 or dynamic queue 734, the bus arbiter 735 writes the commands to host controller 720. Host controller 720 may translate the commands into a format that the display device 750 may read. In one or more embodiments, host controller 720 is a generic MIPI host controller 720 and therefore translates the commands according to MIPI. The display device 750 executes the commands when received from the generic MIPI host controller 720. 
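As a purely illustrative, non-limiting sketch of the arbitration just described, the following C-language fragment shows one possible selection function that applies the stated priority scheme (dynamic queue first, then static queue, then timer triggered queue) and releases the timer triggered queue only when the free running timer reaches the stored time stamp. The enumeration, structure, and field names are hypothetical and are not part of the disclosed hardware.

    #include <stdbool.h>
    #include <stdint.h>

    enum queue_id { QUEUE_NONE, QUEUE_DYNAMIC, QUEUE_STATIC, QUEUE_TIMER };

    struct queue_status {
        bool dynamic_pending;   /* dynamic queue holds commands for a new frame      */
        bool static_pending;    /* static queue holds commands replayed on every TE  */
        bool timer_pending;     /* timer triggered queue holds a timestamped command */
        uint32_t timer_stamp;   /* time at which the timer triggered command is due  */
    };

    /* Only one queue may drive the host controller at a time; selection follows
     * the priority scheme described above: dynamic first, then static, then timer. */
    enum queue_id arbitrate(const struct queue_status *s, uint32_t now)
    {
        if (s->dynamic_pending)
            return QUEUE_DYNAMIC;
        if (s->static_pending)
            return QUEUE_STATIC;
        if (s->timer_pending && now >= s->timer_stamp)
            return QUEUE_TIMER;   /* released only when the timer matches the stamp */
        return QUEUE_NONE;
    }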
Command scheduler 740 may also include a control register 730. In one implementation, control register 730 provides the overall configuration for the command scheduler 740. For example, the control register 730 may provide any of the following features: force reset of the command scheduler 740, force flush of the queues, enabling or disabling of any of the queues, and setting of priorities for the bus arbiter 735. The control register 730 may also provide a status of the queues (e.g., full/half/empty, etc.).
FIG. 8 is a diagram illustrating another embodiment of display controller hardware 800 consistent with the present disclosure. In the implementation illustrated in the figure, the command scheduler 840 may support an emulated video mode as will be described in more detail below.
In one or more embodiments of the present disclosure, when commands are sent on the receipt of each tearing effect signal, the display driver stack 810 selects an emulated video mode bit to enable the static queue 833. When the static queue 833 is enabled, the dynamic queue 834 is disabled because only one of the queues 833, 834 may be enabled at any given time according to one or more embodiments of the present disclosure. Disabling either one of the static or dynamic queues 833, 834 may be achieved by NOT gate 860. In an embodiment, when the tearing effect signal is triggered, the contents of the static queue 833 are copied to the host controller FIFO 825 (coordinated by bus arbiter 835) and in time sent to the display device 850. Accordingly, the same copy actions are executed when the next tearing effect signal occurs until the display driver 810 deselects the emulated video mode bit.
In addition, when commands associated with a new frame(s) are sent upon receipt of a TE signal, the display driver stack 810 selects the complement of the emulated video mode bit to enable the dynamic queue 834. The display driver will then write the commands, in addition to the memory address of any associated pixel data in some instances, to the dynamic queue 834. Once the tearing effect signal is triggered, the commands in the dynamic queue 834 are transferred to the host controller FIFO 825 and in time executed by the display device 850.
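As a purely illustrative, non-limiting sketch of the control register and of the emulated video mode selection of FIG. 8, the following C-language fragment defines a hypothetical register layout and shows the emulated video mode bit enabling the static queue while its complement (the role played by NOT gate 860) enables the dynamic queue. The bit positions and names are illustration choices only and are not part of the disclosed hardware.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical control register layout; bit positions are illustrative only. */
    #define CTRL_FORCE_RESET   (1u << 0)  /* force reset of the command scheduler */
    #define CTRL_FLUSH_QUEUES  (1u << 1)  /* force flush of all queues            */
    #define CTRL_EMU_VIDEO     (1u << 2)  /* emulated video mode bit (FIG. 8)     */
    #define CTRL_TIMER_EN      (1u << 3)  /* enable the timer triggered queue     */

    struct queue_enables {
        bool static_q_enabled;
        bool dynamic_q_enabled;
    };

    /* The emulated video mode bit enables the static queue; its complement (the
     * NOT gate 860 of FIG. 8) enables the dynamic queue, so only one of the two
     * queues is active at any given time. */
    struct queue_enables decode_mode(uint32_t ctrl)
    {
        struct queue_enables en;
        en.static_q_enabled  = (ctrl & CTRL_EMU_VIDEO) != 0;
        en.dynamic_q_enabled = !en.static_q_enabled;   /* complement of the mode bit */
        return en;
    }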
FIG. 9 is a diagram illustrating a flowchart 900 of a method of using tearing effect signals to indicate the functionality of a display device. The method disclosed in flowchart 900 may be applicable to both FIGS. 7 and 8. Block 901 provides receiving a tearing effect signal at a display driver stack from a display device. As described above, a display device may send a tearing effect signal once the read pointer reaches a specific location on a display screen. Next, in response to receiving the tearing effect signal, forwarding the tearing effect signal to the operating system (block 902) as an indication that the display device is functional.
FIG. 10 is a diagram illustrating a flowchart 1000 of another method of scheduling commands within a display system. The method disclosed in flowchart 1000 may be applicable to both FIGS. 7 and 8. Block 1001 provides receiving a tearing effect signal at a display driver stack. In one or more embodiments, the tearing effect signal is generated and sent from the display device. Next, in response to receiving the tearing effect signal, forwarding the tearing effect signal to the operating system (block 1002). Further, according to block 1003, sending a plurality of commands from the display driver stack to a command scheduler. The plurality of commands may include at least one command to be executed during a boot environment and at least one command to be executed during runtime. In time, the commands are sent from the command scheduler to the host controller (block 1004). In one or more embodiments, the commands are sent to a FIFO buffer of the host controller such that the commands are executed in the order they are received in the FIFO buffer.
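As a purely illustrative, non-limiting sketch of the method of FIG. 10, the following self-contained C-language program walks the blocks once on a simulated tearing effect interrupt. All function names are hypothetical stand-ins for the operating system notification path, the scheduler queues, and the host controller FIFO; they are not part of the disclosed hardware or of any real driver interface, and only the memory write command code 0x2c is taken from the description above.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the OS notification path, the scheduler queues,
     * and the host controller FIFO. */
    static void notify_os_display_alive(void)        { printf("TE forwarded to OS\n"); }
    static void scheduler_enqueue_static(uint8_t c)  { printf("static queue  <- 0x%02x\n", (unsigned)c); }
    static void scheduler_enqueue_dynamic(uint8_t c) { printf("dynamic queue <- 0x%02x\n", (unsigned)c); }
    static void scheduler_send_to_host_fifo(void)    { printf("scheduler -> host controller FIFO\n"); }

    /* One pass through the method of FIG. 10: the tearing effect signal is
     * forwarded to the operating system (blocks 1001-1002), commands for the boot
     * environment and for runtime are handed to the scheduler (block 1003), and
     * the scheduler then pushes them to the host controller FIFO (block 1004). */
    static void on_tearing_effect_irq(const uint8_t *boot_cmds, int n_boot,
                                      const uint8_t *runtime_cmds, int n_runtime)
    {
        notify_os_display_alive();

        for (int i = 0; i < n_boot; i++)
            scheduler_enqueue_static(boot_cmds[i]);
        for (int i = 0; i < n_runtime; i++)
            scheduler_enqueue_dynamic(runtime_cmds[i]);

        scheduler_send_to_host_fifo();
    }

    int main(void)
    {
        const uint8_t boot_cmds[]    = { 0x01, 0x02 };  /* hypothetical boot environment command codes */
        const uint8_t runtime_cmds[] = { 0x2c };        /* memory write command for a new frame        */
        on_tearing_effect_irq(boot_cmds, 2, runtime_cmds, 1);
        return 0;
    }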
The present disclosure solves many of the limitations present in the current state of the art. For example, systems and methods consistent with the present disclosure meet the challenge of timely executing commands associated with new frames. A command scheduler may include a plurality of queues to store commands therein. In addition, a software application may populate the queue(s) at any time, select the tearing effect signal trigger, and exit to prevent tearing from occurring even for non-real time operating systems.
Because the command scheduler receives and forwards the commands for execution without software intervention, no flickering or blanking will occur on a screen of a display device during the transition from boot up to runtime. Furthermore, the addition of a timer block within the command scheduler may be used to delay sending commands. This capability may be particularly useful for panel power sequencing during hibernation or standby modes, thereby negating the need for the operating system to wait to issue sequencing commands. Moreover, a display controller hardware system consistent with the present disclosure may issue multiple updates per cycle regardless of the present state of the host controller (such as controller choke ups).
The present disclosure is not limited to a hardware configuration of a command scheduler. Accordingly, the present disclosure is amenable to a software configuration of a command scheduler. For example, a display hardware system consistent with the present disclosure may include a microcontroller which utilizes a software program to send the various types of commands to a host controller for subsequent execution by a display device.
The present disclosure includes a system comprising a display driver stack and a scheduler coupled to the display driver stack wherein the scheduler is to receive at least one command from the driver stack. The scheduler is to queue the at least one command and is to schedule a plurality of commands to be executed during a boot environment and during runtime.
The system further comprises a host controller coupled to the scheduler wherein the host controller is to receive commands from the scheduler. The scheduler may include a timer triggered queue, static queue, and dynamic queue which all receive commands from the display driver stack. The system may further comprise a display device coupled to the scheduler to execute commands received from the host controller. The host controller includes a MIPI host controller.
The host controller may include a FIFO buffer to receive commands from the scheduler. The display driver stack may receive a tearing effect signal which it in turn forwards to an operating system as an indication that a device coupled to the display driver stack is functional.
The scheduler may also send queued commands on a timer input set by the display driver stack. In addition, the scheduler may also send queued commands on receipt of a TE interrupt.
The present disclosure further includes an apparatus, which includes a command scheduler to interface with a display driver and a host controller. The command scheduler may include, but is not limited to, a first queue to store commands that are to be executed upon a tearing effect signal event. In one or more embodiments, the command scheduler includes a second queue to store a fixed set of commands to write data in a fixed set of memory addresses. In addition, the command scheduler includes a third queue to store commands which are to be delayed before these commands are executed.
The apparatus may also include a timer unit to send time-related triggers to the third queue. Further, the apparatus includes a bus arbiter unit coupled to the first queue, second queue, and third queue to select commands from these queues according to a predetermined priority scheme. In one or more embodiments, the bus arbiter unit is communicatively coupled to a host controller such that the host controller receives commands from the bus arbiter unit.
The command scheduler may further comprise a control register which sets the priority scheme of the bus arbiter unit. In addition, the scheduler may comprise a NOT gate coupled to the first queue and the second queue.
In some embodiments, the first queue stores commands to be executed during runtime. The second queue stores commands to be executed in a unified extensible firmware interface (UEFI), EFI, or basic input/output system (BIOS) environment. The third queue may store commands for setting the brightness of a display device.
The command scheduler further includes a control register to force reset the scheduler, force flush the first, second, or third queues, enable or disable the first, second, or third queues, and provide a status of the first, second, or third queues.
The present disclosure further includes a method which includes receiving a tearing effect signal at a display driver stack. In response to receiving the tearing effect signal, forwarding the tearing effect signal to an operating system. Further, sending a plurality of commands from the display driver stack to a scheduler wherein the plurality of commands includes at least one command to be executed during a boot environment and at least one command to be executed during runtime. In addition, the method includes sending the plurality of commands from the scheduler to the host controller.
In addition, sending the plurality of commands from the display driver stack to the scheduler includes sending the commands to a plurality of queues in the scheduler. Moreover, the method includes sending the plurality of commands from the scheduler to the host controller according to a predetermined priority set by a control register component of the scheduler.
The present disclosure further discloses an apparatus which includes a host controller and a scheduler for a display interface. The scheduler includes a first queue to store commands that are to be flushed therefrom and provided to the host controller on receipt of a tearing effect (TE) signal event. The scheduler further includes a second queue to provide its contents to a host controller. The contents of the second queue are not flushed in response to a TE signal event. 
The scheduler may also select between the first queue and the second queue to support an emulated video mode.The scheduler includes a first queue to store commands that are to be executed upon a tearing effect signal event, a second queue to store a fixed set of commands to write data in a fixed set of memory addresses, and a third queue to store commands which are to be delayed before these commands are executed. Lastly, the host controller includes a FIFO buffer to receive commands from the scheduler.While the present disclosure has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present disclosure.A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage such as a disc may be the machine readable medium to store information transmitted via optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure.A module as used herein refers to any combination of hardware, software, and/orfirmware. As an example, a module includes hardware, such as a micro-controller, associated with a non-transitory medium to store code adapted to be executed by the micro-controller. Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as may be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often module boundaries that are illustrated as separate commonly vary and potentially overlap. 
For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices.
Use of the phrase "to" or "configured to," in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still "configured to" perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate "configured to" provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term "configured to" does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.
Furthermore, use of the phrases "capable of/to," and/or "operable to," in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that use of to, capable to, or operable to, in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of an apparatus in a specified manner.
A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.
Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e. reset, while an updated value potentially includes a low logical value, i.e. set. 
Note that any combination of values may be utilized to represent any number of states.
The embodiments of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage medium; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory mediums that may receive information therefrom.
Instructions used to program logic to perform embodiments of the disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions may be distributed via a network or by way of other computer readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in some embodiments" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. 
Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment. |
Semiconductor memory devices, resistive memory devices, memory cell structures, and methods of forming a resistive memory cell (106, 230, 350, 360, 470, 480) are provided. One example method of a resistive memory cell (106, 230, 350, 360, 470, 480) can include a number of dielectric regions (236, 356, 366, 476, 486) formed between two electrodes (102/104, 232/234, 352/354, 362/364, 472/474, 482/484), and a barrier dielectric region (238, 358, 368, 478, 488) formed between each of the dielectric regions (236, 356, 366, 476, 486). The barrier dielectric region (238, 358, 368, 478, 488) serves to reduce an oxygen diffusion rate associated with the dielectric regions (236, 356, 366, 476, 486). |
What is claimed is: 1. A resistive memory cell, comprising: a number of dielectric regions formed between two electrodes; and a barrier dielectric region formed between each of the dielectric regions, wherein the barrier dielectric region serves to reduce an oxygen diffusion rate associated with the dielectric regions. 2. The resistive memory cell of claim 1, wherein the resistive memory cell is configured to have more than two non-volatile resistive states. 3. The resistive memory cell of claim 1, wherein the number of dielectric regions is at least three. 4. The resistive memory cell of claim 1, wherein the barrier dielectric region has a thickness of less than about 20 Angstroms. 5. The resistive memory cell of claim 1, wherein each of the number of dielectric regions has a thickness of between about 10 and about 100 Angstroms. 6. The resistive memory cell as in one of claims 1-5, wherein the number of dielectric regions having a barrier dielectric region formed therebetween are arranged to have an interface between dielectric region and barrier dielectric region be substantially perpendicular to the two electrodes. 7. A resistive memory cell, comprising: an electrode; a first dielectric region formed on the electrode; a barrier dielectric region formed on the first dielectric region, the barrier dielectric region having a slower oxygen diffusion rate and/or being a grain-boundary disruptor relative to the first dielectric region; a second dielectric region formed on the barrier dielectric region; and an other electrode formed on the second dielectric region. 8. The resistive memory cell of claim 7, further comprising one or more additional barrier dielectric regions and one or more additional dielectric regions formed between the barrier dielectric region and the second dielectric region, wherein the dielectric regions and barrier dielectric regions alternate, with each barrier dielectric region being located between dielectric regions. 9. The resistive memory cell of claim 8, further comprising a buffer barrier dielectric region between the electrode and one of the dielectric regions, wherein the buffer barrier dielectric region is adjacent the electrode. 10. The resistive memory cell of claim 9, wherein the buffer dielectric region is between the electrode and the first dielectric region, the buffer dielectric region having a lower k value and higher resistance than the first dielectric region. 11. The resistive memory cell of claim 9, further comprising a buffer barrier dielectric region between the other electrode and one of the dielectric regions, wherein the buffer barrier dielectric region is adjacent the other electrode. 12. The resistive memory cell of claim 11, wherein the dielectric regions include a metal oxide material. 13. The resistive memory cell of claim 12, wherein the first, second, and buffer dielectric regions include titanium dioxide (TiO2), and the barrier dielectric regions include Al2O3. 14. The resistive memory cell of claim 13, wherein the first, second, and buffer dielectric regions each have a thickness of between about 10 and about 100 Angstroms. 15. The resistive memory cell of claim 14, wherein the barrier dielectric regions each have a thickness of less than about 20 Angstroms. 16. The resistive memory cell of claim 15, wherein the thickness of the resistive memory cell materials excluding the electrode and the other electrode is less than about 1000 Angstroms. 17.
A method of forming a resistive memory cell, comprising: forming a first dielectric region between two electrodes; forming a barrier dielectric region on the first dielectric region; and forming a second dielectric region on the barrier dielectric region, wherein the barrier dielectric region includes a material having a slower oxygen diffusion rate and/or is a grain-boundary disruptor relative to the first and second dielectric regions. 18. The method of claim 17, further comprising forming one or more additional instances of barrier dielectric regions and dielectric regions, wherein dielectric regions and barrier dielectric regions alternate, and each barrier dielectric region is located between dielectric regions. 19. The method of claim 17, wherein forming each of the first and second dielectric regions and the barrier dielectric region includes forming a sub-nanometer thickness thereof. 20. The method of claim 17, wherein forming the barrier dielectric region includes forming the barrier dielectric region to a thickness of less than about 20 Angstroms. 21. The method of claim 17, wherein forming the first and second dielectric regions includes forming the first and second dielectric regions to a thickness of between about 10 and about 100 Angstroms. 22. The method of claim 17, wherein the thickness of the first and second dielectric regions and the barrier dielectric region is between about 50 and about 1000 Angstroms. 23. The method as in one of claims 17-22, wherein forming the first and second dielectric regions includes forming a first and second metal oxide region. 24. The method of claim 23, wherein forming the first and second metal oxide regions includes forming a first and second amorphous metal oxide region. 25. The method of claim 23, wherein forming the first and second metal oxide regions includes forming a first and second crystalline metal oxide region. 26. The method of claim 23, wherein forming the first and second metal oxide regions includes forming at least one amorphous metal oxide region and at least one crystalline metal oxide region. 27. The method of claim 23, wherein forming the first and second metal oxide regions includes forming at least one of a titanium dioxide (TiO2) region; a lanthanum oxide (La2O3) region; a gallium oxide (Ga2O3) region; a zirconium oxide (ZrO2) region; a zirconium silicon oxide (ZrxSiyOz) region; a hafnium oxide (HfO2) region; a hafnium silicon oxide (HfxSiyOz) region; and a strontium titanate (SrTiO3) region. 28. The method as in one of claims 17-22, wherein forming the barrier dielectric region includes forming at least one of a silicon dioxide (SiO2) region; an aluminum oxide (Al2O3) region; a zirconium oxide (ZrO2) region; and an amorphous doped silicon region. 29. The method as in one of claims 17-22, wherein forming the barrier dielectric region includes forming a material having a smaller grain size relative to a grain size of the first and second dielectric regions, the first and second dielectric regions being crystalline metal oxide materials. 30. The method as in one of claims 17-22, further comprising forming a buffer dielectric region adjacent one of the two electrodes and the first dielectric region, wherein the buffer dielectric region has non-reactive stable electrical properties, and a lower k value and higher resistance than the first dielectric region. 31.
The method as in one of claims 17-22, further comprising forming a buffer dielectric region adjacent one of the two electrodes and the second dielectric region, wherein the buffer dielectric region has non-reactive stable electrical properties, and a higher k value and lower resistance than the first dielectric region. 32. The method as in one of claims 17-22, wherein forming the barrier dielectric region includes forming a crystalline barrier dielectric region adjacent an amorphous first or second dielectric region, and wherein forming the barrier dielectric region includes forming an amorphous barrier dielectric region adjacent a crystalline first or second dielectric region. 33. A method of forming a resistive memory cell, comprising: forming an electrode; forming alternating dielectric region and barrier dielectric region on the electrode such that each is arranged to be substantially perpendicular to the electrode, wherein each instance of barrier dielectric region is located between the dielectric regions; and forming an other electrode on the alternating dielectric regions and barrier dielectric region, wherein the other electrode is arranged to be substantially parallel to the electrode, wherein the barrier dielectric region has a slower oxygen diffusion rate and/or is a grain-boundary disruptor relative to the dielectric region. 34. The method of claim 33, wherein forming alternating dielectric region and barrier dielectric region on the electrode includes: forming a first dielectric region on the electrode such that the first dielectric region is arranged to have a substantially vertical orientation perpendicular to the electrode; depositing a barrier dielectric region on a sidewall of the first dielectric region;forming the barrier dielectric region to have a substantially vertical orientation perpendicular to the electrode; depositing a second dielectric region on the formed barrier dielectric region; forming the second dielectric region to have a substantially vertical orientation perpendicular to the electrode. 35. The method of claim 33, wherein forming alternating dielectric region and barrier dielectric region on the electrode includes: forming dielectric material on the electrode; etching the dielectric material to form at least one trench substantially perpendicular to the electrode; depositing a barrier dielectric material into the at least one trench; and chemical-mechanical polishing (CMP) the dielectric and barrier dielectric materials opposite the electrode. |
RESISTIVE MEMORY CELL Technical Field [0001] The present disclosure relates generally to semiconductor memory devices and methods, and more particularly, to resistive memory devices, cell structures and methods. Background [0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory, including random-access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), resistive memory, and flash memory, among others. Types of resistive memory include programmable conductor memory, and resistive random access memory (RRAM), among others. [0003] Memory devices are utilized as non-volatile memory for a wide range of electronic applications in need of high memory densities, high reliability, and data retention without power. Non- volatile memory may be used in, for example, personal computers, portable memory sticks, solid state drives (SSDs), digital cameras, cellular telephones, portable music players such as MP3 players, movie players, and other electronic devices. [0004] RRAM devices include resistive memory cells that store data based on the resistance level of a storage element. The cells can be programmed to a desired state, e.g., corresponding to a particular resistance level, such as by applying sources of energy, such as positive or negative voltages to the cells for a particular duration. Some RRAM cells can be programmed to multiple states such that they can represent, e.g., store, two or more bits of data. [0005] The programmed state of a resistive memory cell may be determined, e.g., read, for example, by sensing current through the selected resistive memory cell responsive to an applied interrogation voltage. The sensed current, which varies based on the resistance level of the memory cell, can indicate the programmed state of the resistive memory cell.[0006] A two-state resistive memory cell can have a low resistance state and a high resistance state. Each respective resistance state can correspond with a logic state, e.g., "0" or "1." According to a previous resistive memory cells approach, the low resistance state can occur due to a non- volatile formation of one or more conductive filaments in a dielectric between electrodes, and the high resistance state can occur due to a non- volatile dissolution of the conductive filament(s) in the dielectric. Ions in the dielectric and/or electrode(s) can be relocated by the application of electrical energy to form or dissolve a conductive filament. A relatively smaller application of electrical energy can be used to ascertain the resistive state. Brief Description of the Drawings [0007] Figure 1 is a block diagram of a portion of an array of resistive memory cells in accordance with one or more embodiments of the present disclosure. [0008] Figure 2 illustrates a cross-sectional view of a portion of multi- state resistive memory cells in accordance with one or more embodiments of the present disclosure. [0009] Figures 3 A-3B illustrate cross-sectional views of a portion of a resistive memory cell having dielectric and barrier dielectric materials formed horizontally in accordance with one or more embodiments of the present disclosure. 
[0010] Figures 4A-4B illustrate cross-sectional views of a portion of a resistive memory cell having dielectric and barrier dielectric materials formed vertically in accordance with one or more embodiments of the present disclosure. Detailed Description [0011] Semiconductor memory devices, resistive memory devices, memory cell structures, and methods of forming a resistive memory cell are provided. One example method of a resistive memory cell can include a number of dielectric regions formed between two electrodes, and a barrier dielectric region formed between each of the dielectric regions. The barrier dielectricregion serves to reduce an oxygen diffusion rate associated with the dielectric regions. [0012] Embodiments of the present disclosure can provide benefits such as resistive memory cells having multiple states and/or improved switching characteristics as compared to previous resistive memory cells, among other benefits. As described further herein, forming a slow oxygen diffusion barrier and/or grain boundary disruptor between dielectric portions of a resistive memory cell can have various benefits, such as multiple states and/or improved switching characteristics. The dielectric and/or barrier dielectric regions can be formed, for example, via an atomic layer deposition (ALD) process, which is well-suited to deposit dielectric materials with sub-nanometer thickness control. The present disclosure provides dielectric laminates and alloys that support one or more of the following benefits: 1) controlled oxygen diffusion barriers, 2) grain-boundary disruption, 3) crystalline or amorphous control, and 4) reduced dielectric roughness by control of grain size, among other benefits. [0013] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure. [0014] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, the proportion and the relative scale of the elements provided in the figures are intended to illustrate various embodiments of the present disclosure and are not to be used in a limiting sense.[0015] Figure 1 is a block diagram of a portion of an array 100 of memory cells 106 in accordance with one or more embodiments of the present disclosure. Memory devices may include a number of memory cells 106 arranged in a matrix, e.g., array, 100. A memory cell may include a storage element coupled to a select device, e.g., an access device. The storage element can include a programmable portion that may have a variable resistance, for example. 
The access device can be a diode, field effect transistor (FET), or bipolar junction transistor (BJT), among others. In the example illustrated in Figure 1, the array 100 is an array including a first number of access conductive lines 102-0, 102-1, . . ., 102-N, e.g., access lines, which may be referred to herein as word lines, and a second number of data/sense conductive lines 104-0, 104-1, . . ., 104-M, e.g., data lines, which may be referred to herein as bit lines. As illustrated, the word lines 102-0, 102-1 , . . ., 102-N are substantially parallel to each other and are substantially orthogonal to the bit lines 104-0, 104-1, . . ., 104-M, which are substantially parallel to each other; however, embodiments are not so limited. [0016] As used herein, the term "substantially" intends that the modified characteristic need not be absolute, but is close enough so as to achieve the advantages of the characteristic. For example, "substantially parallel" is not limited to absolute parallelism, and can include structure orientations that are non-intersecting for a given application and at least closer to a parallel orientation than a perpendicular orientation. [0017] In this example, a memory cell 106 is located at each of the intersections of the word lines 102-0, 102-1, . . ., 102-N and bit lines 104-0, 104- 1, . . ., 104-M. The memory cells 106 can function in a two-terminal architecture e.g., with a particular word line 102-0, 102-1, . . 102-N and bit line 104-0, 104- 1 , . . ., 104-M serving as a bottom and top electrode. A memory cell may be coupled to a word line forming a "row" of the array. Each memory cell may be coupled to a bit line forming a "column" of the array. [0018] According to one or more embodiments, the memory cells 106 of array 100 can be resistive memory cells such as those described in connection with Figures 2, 3 A, 3B, 4 A and 4B. More particularly, the memory cells 106 of array 100 can be configured as a resistive random access memory ( RAM).[0019] As previously mentioned, the storage element can include a programmable portion. The programmable portion may be programmable to a number of different logic states. For instance, the programmable portion of a storage element can be programmed to particular levels corresponding to particular logic states responsive to applied programming voltage and/or current pulses. The programmable portion of a storage element can include, for example, one or more materials such as a transition metal oxide material or a perovskite including two or more metals, e.g., transition metals, alkaline earth metals, and/or rare earth metals. Embodiments are not limited to a particular material or materials associated with the programmable portion of a storage element of the memory cells 106. For instance, the programmable portion of a storage element can be formed of various doped or undoped materials. Other examples of materials that can be used to form the programmable portion of a storage element include binary metal oxide materials, colossal magnetoresistive materials, and/or various polymer-based resistive variable materials, among others. [0020] In operation, the memory cells 106 of array 100 can be programmed by applying a voltage, e.g., a write voltage, across the memory cells 106 via selected word lines 102-0, 102-1, . . ., 102-N and bit lines 104-0, 104-1 , . . ., 104-M. 
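The row-and-column selection just described can be pictured with a short behavioral model. The sketch below is not taken from the disclosure; the class name, resistance values, and read voltage are assumptions chosen only to illustrate how a cell at a word line/bit line intersection is selected, written, and sensed.

```python
# Minimal crossbar-addressing sketch (illustrative only; names and values are
# assumptions, not taken from the disclosure). A cell sits at each word line /
# bit line intersection and is selected by its (row, column) pair.

HIGH_R = 1e6   # assumed high-resistance (reset) state, in ohms
LOW_R = 1e3    # assumed low-resistance (set) state, in ohms

class CrossbarArray:
    def __init__(self, num_word_lines, num_bit_lines):
        # Every cell starts in the assumed high-resistance state.
        self.cells = [[HIGH_R] * num_bit_lines for _ in range(num_word_lines)]

    def write_cell(self, word_line, bit_line, resistance):
        # Programming is modeled as directly setting the resistance of the
        # cell at the selected word line / bit line intersection.
        self.cells[word_line][bit_line] = resistance

    def read_cell(self, word_line, bit_line, read_voltage=0.2):
        # Reading is modeled as sensing the current through the selected
        # cell for an applied interrogation voltage (Ohm's law).
        return read_voltage / self.cells[word_line][bit_line]

array_100 = CrossbarArray(num_word_lines=4, num_bit_lines=4)
array_100.write_cell(word_line=1, bit_line=2, resistance=LOW_R)
print(array_100.read_cell(1, 2))   # larger sensed current: low-resistance cell
print(array_100.read_cell(0, 0))   # smaller sensed current: high-resistance cell
```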
The width and/or magnitude of the voltage pulses across the memory cells 106 can be adjusted, e.g., varied, in order to program the memory cells 106 to particular logic states, e.g., by adjusting a resistance level of the storage element. [0021] A sensing, e.g., read, operation can be used to determine the logic state of a memory cell 106 by a magnitude of sensing current, for example, on a bit line 104-0, 104-1, . . ., 104-M corresponding to the respective memory cell 106 responsive to a particular voltage applied to the selected word line 102-0, 102-1, . . ., 102-N to which the respective cell 106 is coupled. Sensing operations can also include biasing unselected word lines and bit lines at particular voltages in order to sense the logic state of a selected cell 106. [0022] Figure 2 illustrates a cross-sectional view of a portion of a resistive memory cell including a dielectric and barrier dielectric in accordance with one or more embodiments of the present disclosure. According to one or more embodiments of the present disclosure, one or more thin, discrete barrierdielectric materials can create local changes in ion diffusion rates, e.g., oxygen ion diffusion rates, through a bulk dielectric so that a conductive filament extending from cathode to anode can be avoided for certain programming energies. Discrete barrier dielectric materials within bulk dielectric materials can result in discrete regions of stoichiometric oxides and sub-oxides being created under programming, next to a highly oxygen-deficient and oxygen- loving electrode. [0023] The resistive memory cell structure illustrated in Figure 2, for example, can provide improved controllability and/or multiple write states over the art. Multiple write states in a resistive memory device can increase bit densities in memory devices such as RRAM. Additionally, for crystalline dielectrics, benefits can also be achieved by the discrete barrier dielectric materials causing grain-boundary disruption with respect to the bulk dielectric materials. Certain problems of resistive memory device performance, for example within an RRAM, such as cycling and/or bit-to-bit reproducibility, may arise if the switching mechanism is "filamentary" in nature. Crystalline grain- boundaries are often both leakage paths and oxygen diffusion paths. By disrupting the grains with one or more grain-boundaries, e.g., created at an interface of a barrier dielectric material and dielectric material, the pathways can be broken between electrodes such that filamentary switching will be prevented, or at least reduced and moderated, which can provide improved cell-to-cell performance consistency. For instance, cell-to-cell performance can be based on the average of many filamentary switching events, e.g., within a greater number of discrete dielectric material regions, rather than that of a single filamentary switching event, e.g., from cathode to anode. [0024] Figure 2 illustrates a cross-sectional view of a portion of multi- state resistive memory cells in accordance with one or more embodiments of the present disclosure. As described above with respect to a prior art resistive memory cell, a two-state resistive memory cell can have a low resistance state and a high resistance state, each respective resistance state being associated with a corresponding logic state, e.g., "0" and "1." A multi-state resistive memory cell can have a number of intermediate resistance states between the lowest resistance state and the highest resistance state. 
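Because a multi-state cell distinguishes its logic states by the magnitude of the sensed current, a read can be modeled as comparing the sensed quantity against a set of boundaries. The following sketch is illustrative only; the read voltage and the resistance boundaries are assumptions, not values from the disclosure.

```python
# Illustrative multi-state read (assumed values; not from the disclosure).
# The sensed current for a fixed interrogation voltage is compared against
# boundary currents derived from assumed resistance thresholds.

READ_VOLTAGE = 0.2                   # volts, assumed interrogation voltage
STATE_BOUNDARIES = [5e3, 5e4, 5e5]   # ohms, assumed boundaries between states

def sense_state(cell_resistance, read_voltage=READ_VOLTAGE):
    """Map a cell resistance to a logic state index (0 = lowest resistance)."""
    sensed_current = read_voltage / cell_resistance
    for state, boundary in enumerate(STATE_BOUNDARIES):
        # A cell below this resistance boundary sources more current than the
        # boundary current, so it belongs to this (lower-resistance) state.
        if sensed_current > read_voltage / boundary:
            return state
    return len(STATE_BOUNDARIES)

print(sense_state(1e3))   # 0: lowest-resistance state
print(sense_state(2e4))   # 1: first intermediate state
print(sense_state(2e5))   # 2: second intermediate state
print(sense_state(1e6))   # 3: highest-resistance state
```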
A respective intermediate resistance state can be associated with a corresponding logic state.[0025] Figure 2 shows a cross-sectional view of a portion of a resistive memory cell 230. Resistive memory cell 230 can include a thin film comprising a number of solid laminate dielectric materials 235 between electrodes, e.g., a cathode and an anode, as shown in Figure 2. The laminate dielectric materials 235 can include alternating dielectric 236 regions and barrier dielectric 238 regions, e.g., layers. The laminate dielectric materials 235 can further include an optional buffer dielectric region 240 between the dielectric region 236 closest to electrode 232 and electrode 232. The laminate dielectric materials 235 can also include an additional barrier dielectric 242 region between the dielectric region 236 furthest away from an electrode 232, and the electrode 234. The electrode 232 can be a metal alloy anode, and an electrode 234 can be a metal cathode. [0026] Although not shown in Figure 2, the electrode 232, 234 can be coupled to a word line or bit line of a memory array, such as is shown in Figure 1. A control transistor can also be associated with each resistive memory cell 230for selection thereof. The electrodes 232 and 234 can be comprised of the same or different materials and can have the same or different physical sizes and/or shapes. The resistive memory cell 230 can be symmetric or asymmetric. [0027] Example electrode materials include a metal or metal alloy, such as titanium, tungsten, and platinum, or combinations thereof, although embodiments are not limited to particular electrode materials. More particularly, one electrode 232 can be comprised of material that is relatively inert, e.g., titanium nitride (TiN) or platinum. Another electrode 234 can be a material that is electrochemically active, e.g., titanium, However, embodiments of the present disclosure are not so limited, and the electrode 234 may be nickel, strontium, hafnium, zirconium, tantalum, aluminum, and/or tungsten, among other metals and/or combinations thereof. [0028] The number of laminate dielectric materials 235 can include alternating dielectric regions 236 and barrier dielectric regions 238, e.g., alternating layers of dielectric materials and barrier dielectric materials. An optional buffer dielectric material 240 may be located adjacent the electrode 232, e.g., between the electrode 232 and a nearest dielectric region 236. An optional barrier dielectric material 242 may be located adjacent electrode 234, e.g., between electrode 234 and a nearest dielectric region 236. According to one or more embodiments, a resistive memory cell 230 includes at least two dielectricregions 236 having a barrier dielectric region 238 therebetween. According to one or more embodiments, a resistive memory cell 230 includes a plurality of barrier dielectric materials 238, each barrier dielectric region 238 being located between dielectric regions 236, such that the barrier dielectric materials 238 and dielectric regions 236 alternate. [0029] According to an example method of forming a resistive memory cell in accordance with one or more embodiments of the present disclosure, a dielectric region 236 is formed on an electrode 232, and a barrier dielectric region 238 is formed on the dielectric region 236. Another dielectric region 236 is formed on the barrier dielectric region 238, and then electrode 234 is formed on the another dielectric region 236. 
The barrier dielectric region 238 is a material having a slower oxygen diffusion rate and/or serves as a grain-boundary disruptor relative to the dielectric regions 236. [0030] According to another example method of forming a resistive memory cell in accordance with one or more embodiments of the present disclosure, an optional buffer dielectric material 240 can be formed on electrode 232. One or more instances of a dielectric region 236 and a barrier dielectric region 238 are formed on the optional buffer dielectric material 240, with a dielectric region 236 being adjacent the optional buffer dielectric material 240. Another dielectric region 236 is formed on the one or more instances of the dielectric region 236 and the barrier dielectric region 238, such that it is located adjacent a barrier dielectric region 238 and furthest away from the electrode 232. [0031] An optional barrier dielectric material 242 can be formed on the dielectric region 236 located furthest away from electrode 232, and electrode 234 can be formed on the optional barrier dielectric material 242 (if present). If the optional buffer dielectric 240 is not formed, the one or more instances of a dielectric region 236 and a barrier dielectric region 238 can be formed directly on electrode 232. Also, if the optional barrier dielectric material 242 is not included in the resistive memory cell, electrode 234 can be formed directly on the dielectric region 236 located furthest away from electrode 232. [0032] The resistive memory cell 230 can be an oxide based RRAM cell, for example. An oxide based resistive memory cell 230 can refer to a cell that includes a resistive oxide material, e.g., an oxygen source as the dielectric region 236 and/or barrier dielectric region 238 between the two electrodes 232 and 234. Some oxide based memory cells can include one or more additional oxide materials and/or second electrodes along with the oxide material(s) between the two electrodes. [0033] Examples of metal oxides (MOx) that can be included in the dielectric region 236 include a near-stoichiometric, stoichiometric, and/or sub-stoichiometric metal oxide material. A near-stoichiometric oxide can be an oxide that has an oxygen percentage at or approximately at a stoichiometric ratio for the oxide. A sub-stoichiometric oxide can be an oxide that has an oxygen percentage below a stoichiometric ratio for the oxide. [0034] According to one or more embodiments, the dielectric region 236 can include titanium dioxide (TiO2). According to some embodiments, the dielectric region 236 can include other metal oxides such as lanthanum oxide (La2O3), lanthanum aluminate (LaAlO3), gallium oxide (Ga2O3), zirconium oxide (ZrO2), zirconium silicon oxide (ZrxSiyOz), zirconium titanium oxide (ZrxTiyOz), hafnium oxide (HfO2), hafnium titanium oxide (HfxTiyOz), strontium titanate (SrTiO3), lanthanum calcium manganese oxide (LCMO), magnesium oxide (MgO), aluminum oxide (AlxOy) such as Al2O3, tin dioxide (SnO2), zinc peroxide (ZnO2), titanium silicon oxide (TixSiyOz), and/or a hafnium silicon oxide (HfxSiyOz), among other metal oxide materials that are suitable oxygen sources. However, embodiments are not limited to the dielectric region 236 including metal oxides, and the dielectric region 236 can be formed using other resistive metal alloys. The dielectric regions 236 can be formed to be amorphous, crystalline, or combinations thereof. For example, one dielectric region 236 can be amorphous and another dielectric region 236 can be crystalline.
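The ordering just described, electrode 232, an optional buffer dielectric 240, alternating dielectric 236 and barrier dielectric 238 regions, an optional barrier dielectric 242, and electrode 234, can be captured as a simple list-building routine. This is a minimal sketch for orientation only; the function name and its arguments are assumptions, not part of the disclosure.

```python
# Sketch of the layer ordering described for cell 230 (illustrative only).
# The stack runs from electrode 232 to electrode 234, with dielectric and
# barrier dielectric regions alternating so that a dielectric region sits
# adjacent each electrode (or adjacent the optional buffer/barrier regions).

def build_laminate_stack(num_dielectric_regions, with_buffer_240=False,
                         with_barrier_242=False):
    stack = ["electrode 232"]
    if with_buffer_240:
        stack.append("buffer dielectric 240")
    for i in range(num_dielectric_regions):
        stack.append("dielectric 236")
        if i < num_dielectric_regions - 1:
            # One barrier region between every pair of dielectric regions,
            # so there is one fewer barrier than dielectric region.
            stack.append("barrier dielectric 238")
    if with_barrier_242:
        stack.append("barrier dielectric 242")
    stack.append("electrode 234")
    return stack

print(build_laminate_stack(3, with_buffer_240=True, with_barrier_242=True))
```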
[0035] The barrier dielectric region 238 is a slow oxygen diffusion barrier and/or grain-boundary disruptor material with respect to the dielectric regions 236. The resistive state of the resistive memory cell 230, fabricated in accordance with the present disclosure, can change depending on the location of the oxygen ions within the laminate dielectric materials 235 between the two electrodes. The inclusion of barrier dielectric region 238 between instances of the bulk dielectric region 236 is intended to disrupt the formation of continuous filaments between the cathode and anode. As such, the barrier dielectric region 238 can have a bulk anion, e.g., oxygen, diffusion rate that differs from that of the dielectric region 236 alone. [0036] Examples of materials that can be included in the barrier dielectric region 238 include zirconium oxide (ZrO2), silicon dioxide (SiO2), and aluminum oxide (AlxOy) such as Al2O3, among others. Barrier dielectric region 238 can be formed to be amorphous or crystalline. Where multiple barrier dielectric regions 238 are formed, some may be amorphous and others may be crystalline. Also, a barrier dielectric region 238 may be amorphous adjacent an amorphous or crystalline dielectric region 236, or may be crystalline adjacent an amorphous or crystalline dielectric region 236. [0037] Where the dielectric region 236, e.g., TiO2, is formed to have a crystalline structure, the dielectric material anion, e.g., oxygen, can diffuse out more rapidly along boundaries of the dielectric region 236. The barrier dielectric region 238 can serve to disrupt the grain boundaries of the dielectric region 236, thereby helping to moderate the diffusion paths and reduce filamentary properties, for instance. [0038] According to one or more embodiments, one or more portions of the dielectric region 236 may be formed from a different material than another portion of the dielectric region 236. That is, the various dielectric regions 236 may be, but need not be, formed from a same, e.g., metal oxide, material. According to one or more embodiments, one or more portions of the barrier dielectric region 238 may be formed from a different material than another portion of the barrier dielectric region 238. [0039] According to one or more embodiments, the dielectric and/or barrier dielectric regions can be discrete regions with well-defined boundaries. However, embodiments of the present disclosure are not so limited, and the dielectric and/or barrier dielectric regions can be formed having less than discrete boundaries. For example, regions can be defined by a gradual transition from one material to another, e.g., a gradient, such as between dielectric and barrier dielectric materials rather than an abrupt and distinct transition. As previously mentioned, the dielectric and/or barrier dielectric regions can be formed, for example, via an atomic layer deposition (ALD) process, which is well-suited to deposit dielectric materials with sub-nanometer thickness control. [0040] According to various embodiments of forming a resistive memory cell, a single bulk film of metal oxide, e.g., ZrxSiyOz, HfxSiyOz, TixSiyOz, is formed by ALD. During the ALD, an initial quantity of metal oxide material is deposited, after which the metal oxide is appropriately doped and/or augmented by a barrier dielectric material for some intermediate quantity of material deposition, after which another quantity of metal oxide material is deposited.
The dielectric/barrier dielectric/dielectric structure can exist within a single bulk film. The barrier dielectric region can be a region intermediate to the surrounding metal oxide regions. As such, the barrier dielectric region can be a mixture including the metal oxide and/or having a gradient from metal oxide to doped/augmented metal oxide and/or barrier dielectric material, and back to metal oxide. [0041] According to various embodiments, the optional buffer dielectric material 240, located between the electrode 232 and the dielectric region 236 located closest to the electrode 232 may have non-reactive stable electrical properties. According to some embodiments, the optional buffer dielectric material 240 can have a lower dielectric constant (i.e., k) value, and thus a greater resistance, than the dielectric regions 236, e.g., the dielectric region 236 located closest to electrode 232. According to some embodiments, the optional buffer dielectric material 240 can have a greater dielectric constant (i.e., k) value, and thus a lesser resistance, than the dielectric regions 236, e.g., the dielectric region 236 located closest to electrode 232. The optional buffer dielectric material 240 can have a higher resistance than the dielectric regions 236 in order to function as a current limiting material in the resistive memory cell, e.g., especially when the resistive memory cell is in a low resistance state. Accordingly, the optional buffer dielectric material 240 can serve as a tunable material for the resistive memory cell with respect to the resistive and dielectric properties thereof. For example, the optional buffer dielectric material 240 can be a material having an appropriate resistance to limit current to a desired magnitude with respect to a particular memory cell structure. [0042] The optional buffer dielectric material 240 can also be selected, in part, to have appropriate adhesion properties with respect to the electrode 232. That is, the optional buffer dielectric material 240 can provide an adhesion interface between electrode 232 and a dielectric region 236 located closest toelectrode 232. According to some embodiments, the optional buffer dielectric material 240 doesn't deplete an anion element. The optional buffer dielectric material 240 can prevent or mitigate switching of the resistive memory cell from switching at electrode 232. According to various embodiments, the optional barrier dielectric 242 can be formed as any other barrier dielectric region 238, and include similar materials. [0043] The dielectric region 236, barrier dielectric region 238, optional buffer dielectric material 240, and optional barrier dielectric 242 can be formed, e.g., deposited, via an atomic layer deposition (ALD) process or other suitable deposition process. According to one or more embodiments, the dielectric region 236 and barrier dielectric region 238 are formed with sub-nanometer thickness control, to which ALD is well-suited. However, embodiments are not limited to a particular deposition process. In some embodiments, the dielectric region 236 can have a thickness of from about 10 to about 100 Angstroms, the barrier dielectric region 238 can have a thickness of less than about 20 Angstroms, e.g., from about 2 to about 20 Angstroms, and the thin film comprising a number of laminate dielectric materials 235 can have a thickness of less than about 1000 Angstroms, e.g., from about 50 to about 1000 Angstroms. 
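Using the example figures just given (dielectric regions of roughly 10 to 100 Angstroms, barrier regions under roughly 20 Angstroms, and a laminate of roughly 50 to 1000 Angstroms overall), a candidate stack can be checked arithmetically. The sketch below is an assumed example of that bookkeeping; the particular thicknesses chosen are not a disclosed recipe.

```python
# Thickness-budget sketch based on the example ranges in the text. The stack
# below (five dielectric regions at 80 A, four barriers at 15 A) is an assumed
# example, not a disclosed recipe: 5 * 80 + 4 * 15 = 460 Angstroms total.

DIELECTRIC_RANGE_A = (10, 100)   # Angstroms, example range for regions 236
BARRIER_MAX_A = 20               # Angstroms, example upper bound for regions 238
LAMINATE_MAX_A = 1000            # Angstroms, example upper bound for film 235

def check_laminate(dielectric_thicknesses, barrier_thicknesses):
    """Return (total_thickness, within_budget) for a candidate laminate."""
    ok = all(DIELECTRIC_RANGE_A[0] <= t <= DIELECTRIC_RANGE_A[1]
             for t in dielectric_thicknesses)
    ok = ok and all(t < BARRIER_MAX_A for t in barrier_thicknesses)
    total = sum(dielectric_thicknesses) + sum(barrier_thicknesses)
    return total, ok and total < LAMINATE_MAX_A

total, within_budget = check_laminate([80] * 5, [15] * 4)
print(total, within_budget)   # 460 Angstroms, True
```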
However, embodiments are not limited to a particular thickness of dielectric region 236, barrier dielectric region 238, or the thin film comprising a number of laminate dielectric materials 235. [0044] The electrodes 232 and/or 234 can be formed via an ALD process, in situ using a CVD process, or other suitable deposition process. Additional materials, e.g., materials other than a metal associated with electrode 234, such as additional materials associated with the metal precursor source, e.g., titanium chloride, titanium tetrachloride (TiCl4), chlorine, as well as other precursor materials and/or reactants, e.g., hydrogen, argon, etc., associated with an in situ CVD process can react with previously formed dielectric region 236, e.g., titanium oxide, or optional barrier dielectric material 242 and contribute to the formation of the electrodes. Some examples of precursor materials include, but are not limited to, hydrogen, argon, e.g., argon plasma, and/or a titanium chloride material such as titanium tetrachloride, titanium trichloride, or titanium dichloride, for example. [0045] The formation of electrode 234 onto a dielectric region 236 can result in a reaction that can create a "reacted" metal oxide (not shown in Figure 2) at the interface between electrode 234 and a deposited metal oxide material 236. The reacted metal oxide can include materials such as aluminum oxide (AlxOy), aluminum titanium oxide (AlxTiyOz), aluminum hafnium oxide (AlxHfyOz), silicon oxide (SixOy), silicon oxynitride (SixOyNz), hafnium silicon oxide (HfxSiyOz), zirconium silicon oxide (ZrxSiyOz), zirconium silicon oxynitride (ZrwSixOyNz), hafnium oxide (HfxOy), zirconium oxide (ZrxOy), titanium oxide (TixOy), hafnium zirconium oxide (HfxZryOz), hafnium titanium oxide (HfxTiyOz), zirconium titanium oxide (ZrxTiyOz), and/or strontium oxide (SrxOy), among other materials. [0046] The resistivity of the metal oxide portion of the resistive memory cell 230 can be dependent on the location of oxygen ions and can change as the location of the oxygen ions change, either in dielectric regions 236 or the reacted metal oxide portion. For example, where dielectric region 236 located furthest away from electrode 232 is titanium dioxide (TiO2), electrode 234 is titanium, and a plasma CVD (PECVD) process used to deposit materials includes a titanium tetrachloride (TiCl4) metal precursor source along with hydrogen (H2) and an argon plasma component, the metal oxide portion can be a sub-stoichiometric titanium oxide (TiO2-x). Regardless, embodiments of the resistive memory cell of the present disclosure are not limited to those materials shown in Figure 2, and may include other materials formed during the formation of the materials shown in Figure 2. [0047] The resistance (and therefore the logic state) of the resistive memory cell 230 can change depending on the location of the ions, e.g., oxygen. However, the presence of the barrier dielectric region 238 between dielectric regions 236 interferes with the formation of a conductive filament extending from the cathode to the anode. A resistive memory cell in which a conductive filament extends from the cathode to the anode in a continuous dielectric typically has only two resistive (and logic) states, a low resistance state (i.e., conductive filament present) and a high resistance state (i.e., conductive filament not present). [0048] The state of resistive memory cell 230 can be read by applying a read voltage across the resistive memory cell 230 via electrodes 232, 234.
The state of resistive memory cell 230 can be programmed by applying a programming voltage across the resistive memory cell 230 via electrodes 232, 234 sufficient to cause ion, e.g., oxygen ion for metal oxide materials, vacancy movement. When an electric field is applied, the ion vacancies drift, which is reversible by changing the direction of the current through the resistive memory cell. The migration of ion vacancies in the resistive memory cell can occur due to application of pulsed voltages and/or voltages of different magnitudes. The resistance, and corresponding logic state, of resistive memory cell 230 can be set to a desired value by applying an appropriate voltage pulse/magnitude. [0049] According to one or more embodiments of the present disclosure, a resistive memory cell 230 formed having at least one instance of barrier dielectric region 238 between dielectric regions 236 can be operated to have more than two resistance (logic) states. Resistance of resistive memory cell 230 does not switch from a highest resistance state to a lowest resistance state (or from a lowest resistance state to a highest resistance state) all at once, thereby providing one or more stable, non-volatile resistive states (and corresponding logic states) in between the lowest and highest resistance states, as well as improved switching control. Rather than switching rapidly from a highest resistive state directly to a lowest resistive state (or vice versa), conductivity of resistive memory cell 230 increases (i.e., resistance decreases) to a greater extent in those dielectric regions 236 located closest to electrode 234 for a given applied programming voltage. That is, as a result of an applied programming voltage the two dielectric regions 236 shown in Figure 2 located closest to the second electrode may be most conductive, the dielectric region 236 located next closest to electrode 234 may be somewhat conductive, and the dielectric region 236 located closest to electrode 232 may be mostly insulative, resulting in a cumulative resistance between electrode 232 and electrode 234 intermediate between a low resistance state and a high resistance state. [0050] With appropriate application of programming voltage in excess of a threshold voltage/duration for a given resistance state, an increased number of dielectric regions 236 can be controlled to be conductive; the level of conductivity for each particular dielectric region decreases based on distance from electrode 234 towards electrode 232. A plurality of programming voltage magnitudes/durations can correspond to a plurality of discrete total resistance levels for the resistive memory cell. Conversely, with appropriate application of a reverse polarity of programming voltage, an increased number of dielectric regions 236 can be controlled to be more insulative based on distance from electrode 232 towards electrode 234. [0051] The resistive switching characteristics can vary depending on factors such as the particular dielectric and barrier dielectric materials involved, the number and arrangement of instances of a dielectric region 236 and a barrier dielectric region 238, use of the optional buffer dielectric material 240 and/or optional barrier dielectric material 242, among other factors. Increasing the number of instances of dielectric 236/barrier dielectric 238 regions can provide an increasing quantity of stable resistance states (and corresponding logic states).
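One way to picture the intermediate states described above is to treat the dielectric regions 236 as resistors in series, with the regions nearest electrode 234 switched to a conductive value first. The sketch below is only an illustrative approximation; the per-region resistance values and the region count are assumptions, not figures from the disclosure.

```python
# Series-resistance sketch of the multi-state behavior described above
# (illustrative; the per-region resistance values are assumptions). Regions
# closest to electrode 234 are driven conductive first, so switching k of the
# N dielectric regions yields one of N + 1 distinct total resistances.

INSULATIVE_R = 1.0e6   # assumed resistance of an unswitched region, ohms
CONDUCTIVE_R = 1.0e3   # assumed resistance of a switched region, ohms

def cell_resistance(num_regions, num_switched):
    """Total resistance with num_switched regions (nearest electrode 234)
    driven conductive and the rest left insulative, treated as a series stack."""
    unswitched = num_regions - num_switched
    return num_switched * CONDUCTIVE_R + unswitched * INSULATIVE_R

num_regions = 4
for k in range(num_regions + 1):
    # k = 0 is the highest-resistance state; k = num_regions is the lowest.
    print(k, cell_resistance(num_regions, k))
```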
Additionally, an increasing number of instances of dielectric 236/barrier dielectric 238 regions can provide increasing granularity of resistance control, and thus generally improving switching characteristics. [0052] Figures 3A-3B illustrate cross-sectional views of a portion of a resistive memory cell having dielectric and barrier dielectric regions formed horizontally in accordance with one or more embodiments of the present disclosure. Figure 3A shows a resistive memory cell 350 comprising a number of laminate dielectric regions 355 between two electrodes 352 and 354. The number of laminate dielectric regions 355 comprise alternating instances of dielectric regions 356 and barrier dielectric regions 358, with dielectric regions 356 being located adjacent each electrode. The resistive memory cell 350 is fabricated using horizontal laminates. That is, each successive material is deposited on previously deposited materials such that the materials are "grown" from the bottom-up, as indicated in Figure 3A at 357. Figure 3A shows that the instances of dielectric regions 356 and barrier dielectric regions 358 are formed substantially parallel to the electrode 352. [0053] According to an example method of forming a resistive memory cell 350 in accordance with one or more embodiments of the present disclosure, a dielectric region 356 is formed on an electrode 352, and a barrier dielectric region 358 is formed on the dielectric region 356. Additional instances of alternating dielectric region 356 and barrier dielectric region 358 are formed until electrode 354 is formed on last dielectric region 356, e.g., located furthest away from the electrode 352. The barrier dielectric region 358 is a slow oxygendiffusion barrier and/or grain-boundary disruptor material with respect to the dielectric regions 356. [0054] The resistive memory cell 360 can also fabricated using horizontal laminates. That is, each successive material can be deposited on previously deposited materials such that the materials are "grown" from the bottom-up, as indicated in Figure 3B at 367. [0055] According to one or more embodiments, each of the dielectric regions 356 and barrier dielectric regions 358 are formed to be amorphous, such that the boundary between the dielectric regions 356 and barrier dielectric regions 358 are an amorphous/amorphous boundary. That is, resistive memory cell 350 includes amorphous/amorphous laminates. [0056] Figure 3B shows a resistive memory cell 360 having a thin film comprising a number of solid laminate dielectric regions 365 between two electrodes 362 and 364. The number of laminate dielectric regions 365 comprise alternating instances of dielectric regions 366 and barrier dielectric regions 368, with dielectric regions 366 being located adjacent each electrode. The resistive memory cell 360 is fabricated using horizontal formation of the various materials, as described with respect to the resistive memory cell illustrated in Figures 3 and 4A. Figure 3B shows that the instances of dielectric regions 366 and barrier dielectric regions 368 are formed substantially parallel to the electrode 362. [0057] According to one or more embodiments, each of the barrier dielectric regions 368 of resistive memory cell 360 are formed to be amorphous; however, each of the dielectric regions 366 of resistive memory cell 360 are formed to be crystalline, as may be achieved by annealing, for example. 
Therefore, the boundary between the dielectric regions 366 and barrier dielectric regions 368 are a crystalline/amorphous boundary. That is, resistive memory cell 360 includes crystalline/amorphous laminates. [0058] Figures 4A-4B illustrate cross-sectional views of a portion of a resistive memory cell having dielectric and barrier dielectric regions formed vertically in accordance with one or more embodiments of the present disclosure. Figure 4A shows a resistive memory cell 470 comprising a number laminate dielectric regions 475 between two electrodes 472 and 474. The number of laminate dielectric regions 475 comprise alternating instances ofdielectric regions 476 and barrier dielectric regions 478. However, the instances of dielectric regions 476 and barrier dielectric regions 478 are formed using vertical laminates. That is, the instances of dielectric regions 476 and barrier dielectric regions 478 are formed substantially perpendicular to the electrode 472. [0059] As used herein, the term "substantially" intends that the modified characteristic need not be absolute, but is close enough to the characteristic so as to achieve the advantages of the characteristic. For example, "substantially perpendicular" is not limited to absolute perpendicularity, and can include structure orientations that are oriented sufficiently close to being at a right angle to one another so as to achieve the advantages associated with a perpendicular orientation. For example, "substantially perpendicular" intends at least being closer to an orthogonal orientation than to a parallel orientation. [0060] According to an example method of forming a resistive memory cell 470 in accordance with one or more embodiments of the present disclosure, electrode 472 is formed. Bulk dielectric material 476 is formed, e.g., deposited, on the electrode 472. The bulk dielectric material 476 is patterned, etched, and filled with barrier dielectric material 478. Chemical mechanical polishing (CMP), or other suitable processing, may be used to remove barrier dielectric material 478 outside the etched trenches, e.g., from the portions of the dielectric material 476 and barrier dielectric material 478, on which electrode 474 is to be formed. Electrode 474 can be formed on the instances of dielectric materials 476 and barrier dielectric materials 478, oriented as shown in Figure 4A, e.g., parallel to electrode 472 and perpendicular to the instances of dielectric materials 476 and barrier dielectric materials 478. [0061] The barrier dielectric material 478 has a slower oxygen diffusion rate and/or is a grain-boundary disruptor with respect to the dielectric materials 476. However, as will be appreciated, the electric field between the electrodes 472 and 474 is oriented parallel to the boundaries between the dielectric materials 476 and barrier dielectric materials 478. As such, the instances of barrier dielectric materials 478 do not interrupt the formation of continuous filaments in the dielectric materials 476, as is the case for a lateral construction, e.g., Figures 2, 3A, and 3B, where barrier dielectric materials are formed to beperpendicular to the electric field between the electrodes, e.g., anode and cathode. [0062] According to another example method of forming a resistive memory cell 470 in accordance with one or more embodiments of the present disclosure, electrode 472 is formed. 
A vertical instance of dielectric region 476 is formed, e.g., deposited, on the electrode 472 such that the dielectric region 476 is perpendicular to the electrode 472. Additional alternating instances of barrier dielectric region 478 and dielectric region 476 can be formed using sidewall deposition techniques and a contact punch, for instance. The direction of growth using sidewall deposition techniques can be as shown in Figure 4A using vertical laminates. The barrier dielectric region 478 for this configuration of a resistive memory cell 470 is a slow oxygen diffusion barrier and/or grain- boundary disruptor material with respect to the dielectric regions 476. [0063] Once the intended number of instances of dielectric regions 476 and barrier dielectric regions 478 are deposited and appropriately formed, CMP, or other suitable processing, may be used to remove dielectric region 476 and barrier dielectric region 478 from the build-up of dielectric region 476 and barrier dielectric region 478 on which electrode 474 is to be formed. Subsequently, electrode 474 can be formed on the instances of dielectric regions 476 and barrier dielectric regions 478, oriented as shown in Figure 4A, e.g., parallel to electrode 472 and perpendicular to the instances of dielectric regions 476 and barrier dielectric regions 478. [0064] Despite barrier dielectric region 478 not being located across a path for conductive filaments, vertically oriented laminates can still provide some unique switching control as the number of channels (and channel width) within which conductive filaments can form can be precisely controlled, which may be beneficial for oxygen diffusion moderation mechanisms. Controlling the number of channels (and channel width), such as by limiting the number and geometry of discrete filamentary electrical paths available, can limit radial and/or control lateral, e.g., from cathode to anode, growth of conductive filaments. [0065] For example, vertically- oriented barrier dielectric regions 478, e.g., A1203, located between vertically-oriented dielectric regions 476 can maintain amorphous dielectric regions 476, e.g., Ti02, as-deposited by reducingthe volume of individual dielectric regions 476, particularly with respect to the horizontal thickness thereof. Thick dielectric regions 476, e.g., Ti02, can crystallize as-deposited by ALD. The barrier dielectric regions 478 can provide a large decrease in as-deposited roughness. Roughness is generally unwanted as it tends to concentrate electric fields near an electrode interface, thereby degrading resistive memory cell performance due to enhanced filament formation. [0066] According to one or more embodiments, each of the dielectric regions 476 and barrier dielectric regions 478 shown in Figure 4A can be formed to be amorphous, such that the boundary between the dielectric regions 476 and barrier dielectric regions 478 are amorphous/amorphous boundaries. That is, resistive memory cell 470 can be fabricated to include amorphous/amorphous laminates. [0067] After annealing, Al203-Ti02 laminates, e.g., instances of barrier dielectric regions 478 and dielectric regions 476 respectively, can become crystalline. However, the full-width at half-maximum (FWHM) of a peak intensity for a diffraction measurement, e.g., plotted with respect to the diffraction angle, theta, is larger than for Zr02-Ti02 laminates, indicating the grain size is smaller for Al203-Ti02 laminates due to disruption. 
[0068] According to one or more example embodiments of the present disclosure, a resistive memory cell can be configured to have one or multiple SrTi03-LaA103 interfaces that can be activated/deactivated under one or more auxiliary electric fields. For example, the one or more auxiliary electric fields can be provided from field-effects at small feature size for the two-electrode system shown in Figure 4A, for example, or can be associated with a third electrode, e.g., auxiliary electrode, positioned parallel to the vertically oriented laminates and acting as a control gate, for instance. [0069] Figure 4B shows a resistive memory cell 480 comprising a number of laminate dielectric regions 485 between two electrodes 482 and 484. The number of laminate dielectric regions 485 comprise alternating instances of dielectric regions 486 and barrier dielectric regions 488. As described above with respect to Figure 4A, the instances of dielectric regions 486 and barrier dielectric regions 488 are formed using vertically-oriented laminates. That is, the instances of dielectric regions 486 and barrier dielectric regions 488 areformed substantially perpendicular to the electrode 482. Resistive memory cell 480 can be fabricated in accordance with the deposition techniques described above with respect to Figure 4A, resulting in a lateral growth direction for the laminates. [0070] In contrast to Figure 4A, Figure 4B depicts dielectric regions 486 formed to be crystalline, as may be achieved by annealing, for example. Therefore, the boundaries between the dielectric regions 486 and barrier dielectric regions 488 are crystalline/crystalline boundaries. That is, resistive memory cell 480 can be fabricated to include crystalline/crystalline vertical laminates. [0071] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of various embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combination of the above embodiments, and other embodiments not specifically described herein will be apparent to those of skill in the art upon reviewing the above description. The scope of the various embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of Equivalents to which such claims are entitled. [0072] In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
An analog to digital conversion system having a plurality of analog to digital converters (ADCs). Each one of such ADCs is configured to convert a corresponding one of a plurality of analog signals into a corresponding sequence of digital words. The ADCs have different degrees of conversion performance. A source of the pulses is included. Each one of the ADCs is configured to provide a corresponding one of the sequences of digital words in response to the pulses. Each one of the digital words in each of the sequences is provided at substantially the same time. A controller is provided for interrupting and/or changing the configuration of one or more of the ADCs. The controller provides the interrupt and/or change in configuration with a priority to one of the ADCs over the other one of the ADCs. |
What is claimed is: 1. An analog to digital conversion system, comprising: an integrated circuit chip; and a plurality of analog to digital converters formed on the integrated circuit chip, each one of said analog to digital converters being configured to convert a corresponding one of a plurality of analog signals into a corresponding digital signal in response to pulses fed to said converter; said plurality of analog to digital converters having a first analog to digital converter and a second analog to digital converter; said first analog to digital converter performing an analog to digital conversion with a first degree of conversion performance; said second analog to digital converter performing an analog to digital conversion with a second degree of conversion performance; said first degree of conversion performance being different from said second degree of conversion performance. 2. The system recited in claim 1 wherein said plurality of analog to digital converters perform analog to digital conversions with different input signal to internal noise resolutions. 3. The system recited in claim 1 wherein said plurality of analog to digital converters perform analog to digital conversions with different conversion rates. 4. The system recited in claim 1 wherein said plurality of analog to digital converters have different input impedances. 5. The system recited in claim 1 wherein said plurality of analog to digital converters have different gains. 6. The system recited in claim 1 including a controller for changing a configuration of one or more of said plurality of analog to digital converters. 7. The system recited in claim 6 wherein the controller provides the change in configuration with a priority to one of said plurality of analog to digital converters over another one of said plurality of analog to digital converters. 8. An analog to digital conversion system, comprising: an integrated circuit chip; a plurality of analog to digital converters formed on the integrated circuit chip, each one of said analog to digital converters being configured to convert a corresponding one of a plurality of analog signals into a corresponding sequence of digital words; and said plurality of analog to digital converters having a first analog to digital converter and a second analog to digital converter; said first analog to digital converter performing an analog to digital conversion with a first degree of conversion performance; said second analog to digital converter performing an analog to digital conversion with a second degree of conversion performance; said first degree of conversion performance being different from said second degree of conversion performance; a source of pulses; wherein each one of said plurality of analog to digital converters is configured to provide a corresponding one of the sequences of digital words in response to the pulses; and wherein each one of the digital words in each of the sequences is provided at substantially the same time. 9. The system recited in claim 8 wherein said plurality of analog to digital converters perform analog to digital conversions with different input signal to internal noise resolutions. 10. The system recited in claim 8 wherein said plurality of analog to digital converters have different input impedances. 11. The system recited in claim 8 wherein said plurality of analog to digital converters have different gains. 12. 
The system recited in claim 8 including a controller for changing a configuration of one or more of said plurality of analog to digital converters. 13. The system recited in claim 12 wherein the controller provides the change in configuration with a priority to one of said plurality of analog to digital converters over another one of said plurality of analog to digital converters. 14. An analog to digital conversion system, comprising: an integrated circuit chip having formed thereon: a plurality of analog to digital converters, each one of said analog to digital converters being configured to convert a corresponding one of a plurality of analog signals into a corresponding digital word in response to pulses fed to said converter, a first one of said plurality of analog to digital converters having a higher degree of performance than a second one of said plurality of analog to digital converters; said second one of said plurality of analog to digital converters occupying less area on the chip and consuming less power than said first one of said plurality of analog to digital converters based upon said second one of said plurality of analog to digital converters having a lesser degree of performance than said first one of said plurality of analog to digital converters. 15. The system recited in claim 14 wherein said plurality of analog to digital converters have different input impedances. 16. The system recited in claim 14 wherein said plurality of analog to digital converters have different gains. 17. The system recited in claim 14 wherein each one of the digital words is produced by said plurality of analog to digital converters at substantially the same time. 18. An analog to digital conversion system, comprising: an integrated circuit chip; a plurality of analog to digital converters formed on the chip, each one of said analog to digital converters being configured to convert a corresponding one of a plurality of analog signals into a corresponding digital word in response to pulses fed to said converter; and said plurality of analog to digital converters having a first analog to digital converter and a second analog to digital converter; said first analog to digital converter performing an analog to digital conversion with a first degree of conversion performance; said second analog to digital converter performing an analog to digital conversion with a second degree of conversion performance; said first degree of conversion performance being different from said second degree of conversion performance; a microcontroller formed on the chip. 19. The system recited in claim 18 wherein the microcontroller processes the digital words produced by said plurality of analog to digital converters. 20. 
An analog to digital conversion system, comprising: an integrated circuit chip; a plurality of analog to digital converters formed on the chip, each one of said analog to digital converters being configured to convert a corresponding one of a plurality of analog signals into a corresponding digital word in response to pulses fed to said converter, a first one of said plurality of analog to digital converters performing an analog to digital conversion with a higher degree of conversion performance than a second one of said plurality of analog to digital converters performing an analog to digital conversion; and a controller for interrupting and/or changing a configuration of said plurality of analog to digital converters; said controller providing the interrupt and/or change in configuration in accordance with a predetermined priority criteria, said predetermined priority criteria being that a change in configuration of said second one of said plurality of analog to digital converters will not interrupt said first one of said plurality of analog to digital converters and a change in configuration of said first one of said plurality of analog to digital converters causes an interrupt in both said first and second converters. 21. The system recited in claim 20 wherein the interrupt to said second one of said plurality of analog to digital converters, resulting from the change in configuration of said second one of said plurality of analog to digital converters, inhibits said second one of said plurality of analog to digital converters from converting; and wherein the interrupt is released at a time such that said second one of said plurality of analog to digital converters produces digital words at substantially the same time as the digital words are produced by said first one of said plurality of analog to digital converters. |
BACKGROUND OF THE INVENTION This invention relates generally to analog-to-digital conversion systems and more particularly to analog-to-digital conversion systems adapted to convert a plurality of analog signals into corresponding digital signals, or words. As is known in the art, analog-to-digital converters (ADCs) have a wide range of applications. In some applications, it is required that more than one analog signal be converted into a corresponding digital signal. One arrangement is shown in FIG. 1. In such an arrangement, the analog signals, here N analog signals, are fed to the input of a multiplexer (MUX). A control, or select, signal is fed to the multiplexer and the multiplexer couples one of the plurality of analog signals to an analog-to-digital converter (ADC) selectively in accordance with the control signal. The ADC produces a new conversion result at an update rate, or conversion period, of TADC seconds. However, after the multiplexer, in response to the control signal, changes from one input signal to another input signal, a number of conversion periods may be required before a valid, settled ADC result is produced, i.e., TSETTLE ≥ TADC, as indicated in FIG. 2. A particular example of this is with a sigma-delta ADC featuring a second order sigma-delta modulator plus a third-order (sinc^3) decimation filter. This particular ADC will not produce a valid result until a time period of TSETTLE = 3*TADC has elapsed because it takes the sinc^3 filter 3 output update periods to settle (i.e., TSETTLE = 3*TSINC3). In the case where this ADC is chopped, as described in U.S. Pat. No. 5,675,334, TSETTLE = 2*TADC. Thus, for a chopped ADC, TADC = 3*TSINC3, so that TSETTLE = 6*TSINC3. If two independent inputs are to be converted with this chopped ADC, the time required will therefore be equal to 2*TSETTLE, i.e., 4*TADC. Another approach for converting more than one input analog signal is to use a separate ADC for each analog signal. For example, one such arrangement is shown in FIG. 3 for two analog signals. Both ADCs convert simultaneously. Both ADCs are identical and are therefore capable of the same performance. That is, in the analog-to-digital conversion process, noise internal to the converter is generated. For example, with a switched capacitor sigma delta ADC, there is thermal noise generated. One way to increase the ADC's performance, more particularly, increase the resolution of the input signal in the presence of this thermally generated internal noise, is to increase the size of the capacitors used in the switching networks of the ADC. Increasing the size of the capacitors, however, increases the power required by the ADC and also increases the chip area required for the ADC. Another way to increase performance, here again by increasing the resolution of the input signal in the presence of this thermally generated internal noise, is to increase the gain provided to the analog input signal. This, however, also requires an increase in the power required for the ADC. Thus, as the performance of an ADC is increased, the power and chip area required for the ADC generally increase. A third way to increase performance is to include a high impedance buffer for the ADC to reduce the loading effect of the ADC on the analog signal source. A fourth way the performance of an ADC may be improved is to increase the conversion rate of the ADC. 
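The settling-time bookkeeping for the multiplexed arrangement described above can be made concrete with a short sketch that encodes the stated relations (the sinc^3 filter needs three output periods to settle, and the chopped ADC needs two full conversion periods). This is an illustrative sketch, not part of the patent; the function and parameter names are assumptions.

```python
def multiplexed_conversion_time(n_channels, t_sinc3, chopped=True):
    """Total time to obtain one settled result per channel through a MUX,
    using the relations given in the background: TSETTLE = 3*TADC for the
    sinc^3 filter, and TADC = 3*TSINC3 with TSETTLE = 2*TADC when chopped."""
    if chopped:
        t_adc = 3 * t_sinc3      # TADC = 3 * TSINC3 for the chopped ADC
        t_settle = 2 * t_adc     # TSETTLE = 2 * TADC = 6 * TSINC3
    else:
        t_adc = t_sinc3
        t_settle = 3 * t_adc     # TSETTLE = 3 * TADC for the sinc^3 filter alone
    return n_channels * t_settle

# Two independent inputs through one chopped ADC: 2 * TSETTLE = 4 * TADC.
t_sinc3 = 1.0
print(multiplexed_conversion_time(2, t_sinc3))  # 12.0, i.e. 12*TSINC3 = 4*TADC
```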
Thus, an increase in performance may be achieved by: increasing the resolution of the input signal in the presence of thermally generated noise and/or providing a high input impedance to the ADC and/or increasing the conversion rate of the ADC and/or increasing the gain of the ADC. Thus, if a first ADC has, relative to a second ADC, a higher resolution of the input signal in the presence of thermally generated noise and/or a higher input impedance to the ADC and/or a higher conversion rate and/or higher gain, the first ADC has, as defined herein, a higher degree of performance than the second ADC. SUMMARY In accordance with the present invention, an analog to digital conversion system is provided having a plurality of analog to digital converters. Each one of such converters is configured to convert a corresponding one of a plurality of analog signals into a corresponding digital signal in response to pulses fed to such one of the converters. The converters perform such conversion with different degrees of conversion performance. In one embodiment of the invention, the ADCs perform such conversion with different input signal to internal noise resolutions. In another embodiment of the invention, the ADCs perform such conversion with different conversion rates. In yet another embodiment of the invention, the ADCs have different input impedances. In still yet another embodiment of the invention, the ADCs have different gains. In accordance with another feature of the invention a controller is provided for interrupting and/or changing the configuration of one or more of the ADCs. The controller provides the interrupt and/or change in configuration with a priority to one of the ADCs over the other one of the ADCs. With such an arrangement, a relatively higher throughput for a given power dissipation is achieved compared with a multiplexed ADC. Further, the invention allows for lower power in a main/auxiliary signal scenario compared to a system, which uses two identical ADCs. Thus, in applications which require converting a main (i.e., primary) input signal and a secondary signal, as for example in a thermocouple temperature transducer that requires an auxiliary measurement of a "cold junction", the auxiliary input signal is processed with the main input signal to compensate the main measurement for influence of the auxiliary input. In such application, the auxiliary input typically may not need to be calculated as often as the main input signal, and does not need to be measured as accurately. In accordance with the invention, an analog to digital conversion system is provided having a plurality of analog to digital converters (ADCs). Each one of such ADCs is configured to convert a corresponding one of a plurality of analog signals into a corresponding sequence of digital words. The ADCs perform such conversion with different degrees of performance. A source of the pulses is included. Each one of the ADCs is configured to provide a corresponding one of the sequences of digital words in response to the pulses. Each one of the digital words in each of the sequences is provided at substantially the same time. 
In accordance with still another feature of the invention, an analog to digital conversion system is provided comprising: a plurality of analog to digital converters, each one of such converters being configured to convert a corresponding one of a plurality of analog signals into a corresponding digital signal in response to pulses fed to such one of the converters, such converters performing such conversion with different degrees of conversion performance; and a common source of the pulses for enabling the plurality of ADCs to convert the analog signals fed thereto synchronously. In accordance with another feature of the invention, an analog to digital conversion system is provided comprising: a plurality of analog to digital converters, each one of such converters being configured to convert a corresponding one of a plurality of analog signals into a corresponding digital signal in response to pulses fed to such one of the converters, a first one of such converters performing such conversion with a higher degree of conversion performance than a second one of the converters; and wherein the second one of the converters consumes less power than the first one of the converters. In accordance with still another feature of the invention, an analog to digital conversion system is provided comprising: an integrated circuit chip having formed thereon: a plurality of analog to digital converters, each one of such converters being configured to convert a corresponding one of a plurality of analog signals into a corresponding digital signal in response to pulses fed to such one of the converters, a first one of such converters having a higher degree of performance than a second one of such converters; and wherein the second one of the converters occupies less area on the chip than the first one of the converters. The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims. DESCRIPTION OF DRAWINGS FIG. 1 is a diagram of an analog-to-digital converter (ADC) adapted to convert one of a plurality of analog input signals selectively in accordance with a select signal in accordance with the PRIOR ART; FIG. 2 is a timing diagram of the ADC of FIG. 1 illustrating the effect of settling time in converting two of the analog input signals; FIG. 3 is a diagram of another ADC system according to the PRIOR ART adapted to convert a pair of analog input signals; FIG. 4 is a diagram of an analog-to-digital conversion system according to the invention, such system having a high performance, main ADC and a lower performance auxiliary ADC; FIGS. 5A-5E are timing diagrams showing an example of the priority criteria used by the ADC system of FIG. 4, here illustrating the effect of an interrupt in the conversion of the pair of ADCs of the ADC system of FIG. 4; FIGS. 6A-6E are timing diagrams showing an example of the priority criteria used by the ADC system of FIG. 4, here illustrating the effect of a change in the configuration of the main ADC of the ADC system of FIG. 4 while the auxiliary ADC was enabled; FIGS. 7A-7E are timing diagrams showing an example of the priority criteria used by the ADC system of FIG. 4, here illustrating the effect of enabling the main ADC of the ADC system of FIG. 4 during a period of time the auxiliary ADC was enabled; FIGS. 8A-8E are timing diagrams showing an example of the priority criteria used by the ADC system of FIG. 
4, here illustrating the effect of a change in the configuration of the auxiliary ADC during a period of time the main ADC was enabled; FIGS. 9A-9E are timing diagrams showing an example of the priority criteria used by the ADC system of FIG. 4, here illustrating the enabling of the auxiliary ADC during a period of time the main ADC was enabled; FIGS. 10A-10E are timing diagrams showing an example of the priority criteria used by the ADC system of FIG. 4, here illustrating a change in the configuration of the auxiliary ADC during a period of time the main ADC was disabled; FIGS. 11A-11E are timing diagrams showing an example of the priority criteria used by the ADC system of FIG. 4, here illustrating the enabling of the auxiliary ADC during a period of time the main ADC was disabled; and FIGS. 12A-12E are timing diagrams showing an example of the priority criteria used by the ADC system of FIG. 4, here illustrating enabling the main ADC for a single conversion followed by enabling the auxiliary ADC during a period of time after the main ADC was enabled but before the main ADC has produced the converted digital word. Like reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION Referring now to FIG. 4, an analog-to-digital conversion system 10 is shown to include a plurality of, here two, analog-to-digital converters (ADCs) 12, 14. Both ADCs 12, 14 are here chopped, switched capacitor, sigma-delta ADCs such as described in U.S. Pat. No. 5,675,334 "Analog to Digital Conversion system", inventor Damien McCartney, issued Oct. 7, 1997, assigned to the same assignee as the present invention. Thus, each one of the ADCs 12, 14 includes a sigma-delta modulator 16, 18, respectively, and a decimation filter 20, 22, respectively. Here, however, the ADC 12 has a high degree of performance compared to the degree of performance of ADC 14. Thus, because of its lower degree of performance, ADC 14 is used as an auxiliary ADC. Thus, the high performance ADC 12 may be considered as the main ADC 12. More particularly, main ADC 12 can have a higher degree of performance because it has larger capacitors in the switching network thereof compared to the capacitors used in the switching network of auxiliary ADC 14, and/or have the same size capacitors but operate at a higher conversion rate compared to the auxiliary ADC 14, and/or have a higher input impedance than that of the auxiliary ADC 14 and/or a higher gain than ADC 14. Here, the ADC 12 is coupled to the analog input signal source through a buffer 24, it being noted that such buffer is not included in the auxiliary ADC 14. Further, here the main ADC 12 has a higher gain than the auxiliary ADC 14. Thus, the internally generated thermal noise in main ADC 12 is less than that in auxiliary ADC 14. This thereby increases the input signal to internally generated noise resolution of the main ADC 12 compared to auxiliary ADC 14. Further, main ADC 12 includes, as the sigma delta modulator 16 thereof, a programmable gain/attenuator (PGA) modulator as described in U.S. Pat. No. 5,134,410 entitled "Delta Sigma Modulator having Programmable Gain/Attenuation", inventors Damien McCartney and David Welland, issued Jul. 28, 1992, assigned to the same assignee as the present invention. Here the modulator 16 is programmed to provide additional gain to the analog input signal fed to it, thereby further increasing the input signal to internal noise resolution of the main ADC 12 compared to the auxiliary ADC 14. 
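The trade-off between sampling-capacitor size and thermal noise that motivates the larger capacitors in the main ADC is often summarized by the kT/C relation. The sketch below is illustrative only, uses assumed capacitor values, and is not taken from the patent.

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def ktc_noise_rms(capacitance_farads, temperature_kelvin=300.0):
    """RMS thermal (kT/C) noise voltage sampled onto a switched capacitor."""
    return math.sqrt(K_BOLTZMANN * temperature_kelvin / capacitance_farads)

# Quadrupling the sampling capacitor halves the sampled thermal noise, at the
# cost of more chip area and more drive power (hypothetical capacitor values).
print(ktc_noise_rms(1e-12))  # ~64 uV rms with a 1 pF capacitor
print(ktc_noise_rms(4e-12))  # ~32 uV rms with a 4 pF capacitor
```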
Still further, the main ADC 12 includes, as noted above, the high input impedance buffer amplifier 24 (here, having a gain of unity), which enables the main ADC 12 to be used with high output impedance analog input signal sources. It is noted that in order to further reduce the power required for the auxiliary ADC 14, such ADC 14 does not include such a high input impedance buffer amplifier 24. Finally, the high performance main ADC 12 includes an input multiplexer 26. The input multiplexer 26 is fed by a plurality of, here N, analog input signals on lines 281-28N, respectively. One of the plurality of analog input signals on lines 281-28N is coupled to the output of the multiplexer 26 selectively in accordance with the control signal on INPUT SELECT_1 line 30. It is noted that here the auxiliary ADC 14 also includes a multiplexer 23 fed by a plurality of, here M, analog input signals on lines 321-32M. One of the plurality of analog input signals on lines 321-32M is coupled to the output of the multiplexer 23 selectively in accordance with the control signal on INPUT SELECT_2 line 25. As will be described in more detail below, the analog-to-digital conversion system 10, as noted above, includes the plurality of, here two, ADCs 12, 14. Each one of such ADCs 12, 14 is configured to convert a corresponding one of a plurality of analog input signals (i.e., one of the signals on lines 281-28N and one of the signals on lines 321-32M, respectively) into a corresponding sequence of digital words on output buses 34, 36, respectively, with each one of the digital words in each of the sequences being provided at substantially the same time. As noted above, the ADCs 12, 14 are configured to perform such conversion with different input signal to internal noise resolutions (i.e., the input signal to internal noise resolution of main ADC 12 being higher than the input signal to internal noise resolution of the auxiliary ADC 14). More particularly, the system 10 includes a microcontroller 50 coupled to a memory 52. The microcontroller 50 provides a write command (i.e., WRITE COMMAND) and configuration data (i.e., CONFIG DATA) to a section 54 of registers and also provides the write command to a configuration change detector 56. The configuration change detector 56 will be described below. Suffice it to say here, however, that the present configurations of the main and auxiliary ADCs 12, 14 are stored in one of the configuration registers in section 54 after initialization configuration data is fed to such registers via the CONFIG DATA bus of microcontroller 50. If, during operation of the system 10, the microcontroller 50 issues a WRITE COMMAND with a new configuration for main ADC 12, for example, such new configuration is written into one of the registers, to be described, in section 54, and the change in configuration is detected by the configuration change detector 56. In response to such detected configuration change, the configuration change detector 56 issues a RESET_1 signal to the PGA sigma delta modulator 16 and the decimation filter 20 of main ADC 12. The process of resetting the ADC is described in the above-referenced U.S. Pat. No. 5,675,334. In like manner, if, during operation of the system 10, the microcontroller 50 issues a WRITE COMMAND with a new configuration for auxiliary ADC 14, for example, such new configuration is written into one of the registers, to be described, in section 54, and the change in configuration is detected by the configuration change detector 56. 
In response to such detected configuration change, the configuration change detector 56 issues a RESET_2 signal to the sigma delta modulator 18 and the decimation filter 22 of auxiliary ADC 14. As will be described below, clock pulses to the high performance main ADC 12 are provided by a clock rate controller 58 on bus CLK_1, and clock pulses to the auxiliary ADC 14 are provided by the clock rate controller 58 on bus CLK_2. The clock pulses on CLK_1 and CLK_2 are synchronized with each other because both are derived from a common master clock 60. More particularly, the configuration register section 54 includes a plurality of registers, some of which are: DECIMATION REGISTERS for storing the amount of decimation to be performed in the decimation filters 20, 22, respectively; MODE REGISTERS for storing data indicating the operating modes of the ADCs 12, 14 including a converting mode, a calibration mode, a power-down mode, etc.; A set of MAIN ADC 12 CONFIGURATION REGISTERS for storing data indicating: the PGA gain of the main ADC 12, the multiplexer 26 control signal on bus 30, and decimation filter 20 scaling parameters which affect the digital representation of the digital words produced by the main ADC 12 on bus 34; A set of AUXILIARY ADC 14 CONFIGURATION REGISTERS for storing data indicating: the multiplexer 23 control signal on bus 25, and decimation filter 22 scaling parameters which affect the digital representation of the digital words produced by auxiliary ADC 14 on bus 36. When the microcontroller 50 writes data to the registers in section 54, there can be two effects on system 10. Firstly, it can interrupt one or both of the main and auxiliary ADCs 12, 14 by asserting a reset signal to such ADC or ADCs 12, 14 for a period of time determined by the priority criteria to be described in more detail below, and, secondly, the data can change the set-up (i.e., configuration) of one or both of the ADCs 12, 14 (e.g., select a new analog input signal via multiplexers 26 and/or 23). 
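As a rough illustration of how the register section 54 just described might be organized in software, the sketch below groups the decimation, mode, and per-ADC configuration fields. All field names and default values are assumptions for illustration and are not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class AdcConfigRegisters:
    """Per-ADC configuration fields (illustrative names)."""
    mux_select: int = 0       # multiplexer control word (e.g., bus 30 or bus 25)
    filter_scaling: int = 0   # decimation-filter scaling parameters
    pga_gain: int = 1         # meaningful only for the main ADC's PGA modulator

@dataclass
class RegisterSection54:
    """Sketch of the configuration register section 54 described above."""
    decimation: dict = field(default_factory=lambda: {"main": 128, "aux": 128})
    mode: str = "convert"     # "convert", "calibrate", or "power_down"
    main_config: AdcConfigRegisters = field(default_factory=AdcConfigRegisters)
    aux_config: AdcConfigRegisters = field(default_factory=AdcConfigRegisters)
```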
The following are some examples of rules that affect the state of the main and auxiliary ADCs 12, 14 and how such ADCs are interrupted: (1) If the operating mode is changed to a power-down mode, the configuration section 54 asserts a reset signal on lines RESET_1, RESET_2 to both the main and auxiliary ADCs 12 and 14, respectively, and power off signals are sent on the POWER ON/OFF_1 and POWER ON/OFF_2 lines to open switches 61 and 63, respectively, to thereby remove power (+V) from the ADCs 12 and 14, respectively, and to open switches 65, 67 in response to signals on the ENABLE/DISABLE_1, ENABLE/DISABLE_2 lines, respectively, so that the clock pulses provided by the ADC clock rate controller 58 are interrupted in response to an interrupt signal on the CLK_1, CLK_2 buses, respectively; (2) If the main ADC 12 is to be disabled, the configuration section 54 asserts a reset signal on line RESET_1 to the main ADC 12, and a power off signal is sent on the POWER ON/OFF_1 line to open switch 61 to thereby remove power (+V) from the ADC 12 and to open switch 65 in response to signals on the ENABLE/DISABLE_1 line so that the clock pulses provided by the ADC clock rate controller 58 are interrupted in response to an interrupt signal on the CLK_1 bus; (3) If the auxiliary ADC 14 is to be disabled, the configuration section 54 asserts a reset signal on line RESET_2 to the auxiliary ADC 14, and a power off signal is sent on the POWER ON/OFF_2 line to open switch 63 to thereby remove power (+V) from the ADC 14 and to open switch 67 in response to signals on the ENABLE/DISABLE_2 line so that the clock pulses provided by the ADC clock rate controller 58 are interrupted in response to an interrupt signal on the CLK_2 bus. (4) If a WRITE COMMAND is sent to the MODE REGISTERS, described above, requesting a change of operating mode (e.g., from converting an analog input signal to performing an internal calibration), each enabled ADC 12, 14 is interrupted by pulsing its respective reset line RESET_1, RESET_2, respectively. The ADCs 12, 14 will re-start in the new operating mode immediately because the reset pulse is of relatively short time duration. (5) If a WRITE COMMAND is sent to the MAIN ADC 12 CONFIGURATION REGISTERS described above, and assuming in this example that the main ADC 12 is enabled, an interrupt pulse is sent to the main ADC 12 via the RESET_1 line. In such case, if the auxiliary ADC 14 is also enabled, then its RESET_2 line should also be pulsed. The ADCs 12 and 14 will re-start immediately because the reset pulse is of relatively short time duration; (6) If a WRITE COMMAND is sent to the AUXILIARY ADC 14 CONFIGURATION REGISTERS, and assuming the auxiliary ADC 14 is enabled, the auxiliary ADC 14 is interrupted by pulsing its RESET_2 line. If the main ADC 12 is also enabled, then the RESET_2 line to the auxiliary ADC 14 is not released until the main ADC 12 has started a new conversion cycle. If the main ADC 12 is not enabled, then the auxiliary ADC 14 can re-start immediately after only a short reset pulse. With regard to a change in configuration, when the configuration of one or both of the ADCs 12, 14 is changed by a request from the microcontroller 50, the configuration change detector 56 monitors this request and issues reset signals as appropriate to the main ADC 12 or the auxiliary ADC 14 via the RESET_1 or RESET_2 lines, respectively. 
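The configuration-write portion of the rules enumerated above (rules 4 through 6) can be summarized as a small decision function; this is a minimal sketch with illustrative names, not the patent's implementation, and it omits the power-down and clock-gating details of rules 1 through 3.

```python
def handle_register_write(target, main_enabled, aux_enabled):
    """Return the reset lines to pulse for a configuration write, per the
    priority rules above: a write affecting only the auxiliary ADC never
    interrupts the main ADC, while a write affecting the main ADC (or the
    shared mode registers) interrupts every enabled ADC.
    `target` is one of 'MODE', 'MAIN_CONFIG', 'AUX_CONFIG' (illustrative names)."""
    resets = set()
    if target in ("MODE", "MAIN_CONFIG"):
        if main_enabled:
            resets.add("RESET_1")
        if aux_enabled:
            resets.add("RESET_2")
    elif target == "AUX_CONFIG":
        if aux_enabled:
            resets.add("RESET_2")  # main ADC keeps converting undisturbed
    return resets

# A write to the auxiliary configuration resets only the auxiliary ADC:
print(handle_register_write("AUX_CONFIG", main_enabled=True, aux_enabled=True))   # {'RESET_2'}
# A write to the main configuration resets both enabled ADCs:
print(handle_register_write("MAIN_CONFIG", main_enabled=True, aux_enabled=True))  # {'RESET_1', 'RESET_2'}
```

Per rule 6, the release timing of RESET_2 would additionally be deferred until the main ADC starts a new conversion cycle when the main ADC is enabled; that synchronization is described with the timing figures below and is not modeled in this sketch.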
The detector 56 operates by detecting the WRITE COMMAND signal provided by the microcontroller 50 and then checking to determine whether there is any change in the configuration data (i.e., the data on the config_0 bus from the MAIN ADC 12 CONFIGURATION REGISTERS in section 54 or the data on the config_1 bus from the AUXILIARY ADC 14 CONFIGURATION REGISTERS in section 54). If the configuration data has been changed, the system 10 operates in accordance with a set of priority rules (to be described in connection with FIGS. 5A-5E through 12A-12E) that determines which ADC 12, 14 needs to be interrupted. These rules state that if the change in configuration affects only the auxiliary ADC 14, then only the auxiliary ADC 14 is interrupted. However, a change in the configuration of the main ADC 12 causes an interrupt in both the main ADC 12 and the auxiliary ADC 14. As noted above, the ADCs 12, 14 are interrupted by asserting a pulse on the RESET_1 and RESET_2 lines, respectively, which resets the state of that ADC 12, 14, respectively, so that it may start a new conversion from scratch. If the auxiliary ADC 14 is interrupted while the main ADC 12 is converting, the auxiliary ADC 14 will not re-start until it can re-synchronize itself with the main ADC 12, as will be illustrated below. Here, the entire analog-to-digital conversion system 10 is formed on integrated circuit chip 11 (FIG. 4). It should be noted, however, that the microcontroller 50 and memory 52 need not be in the chip 11 but may be on a different chip. It should also be noted that ADC 14 occupies less area on the chip 11 than the ADC 12 and that ADC 14 consumes less power than ADC 12. Referring now to FIGS. 5A-5E through 12A-12E, FIGS. 5A-5E show timing diagrams illustrating one of the priority rules referred to above. FIG. 5A shows pulses fed to the decimation filters 20, 22. It is noted that the period of time between successive pulses is TSYNC. FIG. 5B shows each time a digital word is produced by the main ADC 12. FIG. 5C shows each time a digital word is produced by the auxiliary ADC 14. It is first noted that when both ADCs 12, 14 are enabled, ADCs 12 and 14 produce digital words at the same time (i.e., the main ADC 12 and the auxiliary ADC 14 are synchronized with each other). Further, here the ADCs 12 and 14 are chopped ADCs as mentioned above and here each ADC 12, 14 requires 2 TSYNC periods in order to produce a new digital word. Finally, it should be noted that here there is a 2TADC period settling time required after an interrupt or configuration change in either ADC 12 or ADC 14. It is noted that in this example (i.e., FIGS. 5A-5E) an interrupt occurs at time TINTERRUPT, here two TSYNC periods after the last prior time digital words were produced by the ADCs 12, 14. In response to the interrupt at time TINTERRUPT, both ADCs 12, 14 are fed a reset pulse so that the digital words which would, absent the interrupt, be produced at the end of the next TSYNC period, are not produced. It is noted that after the interrupt it takes a period of time TSETTLE, here equal to 6TSYNC = 2TADC, before new sequences of digital words are produced at substantially the same times (i.e., synchronously) for both ADCs 12, 14. It should be noted that the interrupt need not take place at the time TSYNC, but more typically is asynchronous with TSYNC. Referring now to FIGS. 6A-6E, in this example, both ADCs 12, 14 are initially operating in a particular configuration when, at time TCHANGE_ADC_12, the configuration of ADC 12 is changed. 
Because it takes 6TSYNC periods after TCHANGE_ADC_12 in order for the ADC 12 to settle, and because the priority criteria requires that both the main ADC 12 and the auxiliary ADC 14 produce digital words at substantially the same time (i.e., synchronously), the auxiliary ADC 14 takes 6TSYNC periods to settle before it can produce a new digital word after TCHANGE_ADC_12, as shown in FIGS. 6A-6E. Referring now to FIGS. 7A-7E, in this example, the auxiliary ADC 14 is initially operating in a particular configuration when, at time TENABLE_ADC_12, the main ADC 12 is to be enabled. Because it takes 6TSYNC periods after TENABLE_ADC_12 in order for the main ADC 12 to settle, and because the priority criteria requires that both the main ADC 12 and the auxiliary ADC 14 produce digital words at substantially the same time (i.e., synchronously), the auxiliary ADC 14 takes 6TSYNC periods to settle, as shown in FIGS. 7A-7E. Referring now to FIGS. 8A-8E, in this example, both ADCs 12 and 14 are initially operating in a particular configuration when, at time TCHANGE_ADC_14, the configuration of auxiliary ADC 14 is to change. Because it takes 6TSYNC periods after TCHANGE_ADC_14 in order for the auxiliary ADC 14 to settle, and because the priority criteria requires that both the main ADC 12 and the auxiliary ADC 14 produce digital words at substantially the same time (i.e., synchronously), the auxiliary ADC 14 must wait additional time so that it will produce its next digital word only when the main ADC 12 is to produce its digital word. Thus, in this example in FIGS. 8A-8E, TCHANGE_ADC_14 occurred one TSYNC period before ADC 12 was to produce a digital word. Thus, in this example, the auxiliary ADC 14 must wait until the next main ADC 12 output is produced. It is noted that it takes 6TSYNC periods after the wait before the auxiliary ADC 14 produces its first digital word, as shown in FIGS. 8A-8E. Referring now to FIGS. 9A-9E, in this example, only the main ADC 12 is initially operating in a particular configuration when, at time TENABLE_ADC_14, auxiliary ADC 14 is to be enabled. Because it takes 6TSYNC periods after TENABLE_ADC_14 in order for the auxiliary ADC 14 to settle, and because the priority criteria requires that both the main ADC 12 and the auxiliary ADC 14 produce digital words at substantially the same time (i.e., synchronously), the auxiliary ADC 14 must wait additional time so that it will produce its next digital word only when the main ADC 12 is to produce its digital word. Thus, in this example, the auxiliary ADC 14 must wait until the next main ADC 12 output is produced. It is noted that it takes 6TSYNC periods after the wait before the auxiliary ADC 14 produces its first digital word, as shown in FIGS. 9A-9E. Referring now to FIGS. 10A-10E, in this example, only the auxiliary ADC 14 is initially operating in a particular configuration when, at time TCHANGE_ADC_14, the configuration of auxiliary ADC 14 is to change. Because it takes 6TSYNC periods after TCHANGE_ADC_14 in order for the auxiliary ADC 14 to settle, and because the priority criteria requires that both the main ADC 12 and the auxiliary ADC 14 produce digital words at substantially the same time (i.e., synchronously) but here main ADC 12 is not enabled, the auxiliary ADC 14 need not wait any additional time and can produce digital words in its new configuration after a 6TSYNC settling time. Thus, in this example in FIGS. 
10A-10E, the auxiliary ADC 14 must wait 6TSYNC periods before it produces its first digital word after the time TCHANGE_ADC_14, as shown in FIGS. 10A-10E. Referring now to FIGS. 11A-11E, in this example, neither one of the ADCs 12, 14 is initially operating in a particular configuration when, at time TENABLE_ADC_14, the auxiliary ADC 14 is to be enabled. Because it takes 6TSYNC periods after TENABLE_ADC_14 in order for the auxiliary ADC 14 to settle, and because the priority criteria requires that both the main ADC 12 and the auxiliary ADC 14 produce digital words at substantially the same time (i.e., synchronously) but here main ADC 12 is not enabled, the auxiliary ADC 14 need not wait any additional time and can produce digital words after a 6TSYNC settling time. Thus, in this example in FIGS. 11A-11E, the auxiliary ADC 14 must wait 6TSYNC periods before it produces its first digital word after the time TENABLE_ADC_14, as shown in FIGS. 11A-11E. Referring now to FIGS. 12A-12E, in this example, neither one of the ADCs 12, 14 is enabled when, at time TENABLE_ADC_12, ADC 12 is enabled for a single conversion. It is first noted that it takes 6TSYNC periods after TENABLE_ADC_12 in order for the main ADC 12 to settle. In this example, during the settle time period TSETTLE of main ADC 12 (FIG. 12B), auxiliary ADC 14 is to be enabled, here at time TENABLE_ADC_14. Here, TENABLE_ADC_14 occurs 2TSYNC periods after the time TENABLE_ADC_12. It is first noted that it takes 6TSYNC periods after TENABLE_ADC_14 in order for the auxiliary ADC 14 to settle. Further, because the priority criteria requires that both the main ADC 12 and the auxiliary ADC 14 produce digital words at substantially the same time (i.e., synchronously), and because in this example, TENABLE_ADC_14 occurred 2TSYNC periods after the time TENABLE_ADC_12, the auxiliary ADC 14 must wait one TSYNC period and then produces its digital word after an additional 6TSYNC periods, as shown in FIGS. 12A-12E. A number of embodiments of the invention have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. For example, here the system 10 has both the main ADC 12 and the auxiliary ADC 14 on the same integrated circuit chip as the microcontroller 50. The microcontroller 50 can be used to digitally compensate the main measurement for influence of the auxiliary input and can also be used to further process the main measurement (e.g., linearization). This provides a single-chip solution for converting and processing sensor outputs. Accordingly, other embodiments are within the scope of the following claims. |
In a system according to an embodiment of the invention, a data requester submits a query having a query characteristic to a data resolver. Based at least in part on the query characteristic, the data resolver obtains data responsive to the query from one among a plurality of data providers. Data normalization may also be performed. |
We claim: 1. A method comprising:receiving a query, said query including a query characteristic; selecting one among a plurality of lists, said selecting being based on the query characteristic; and transmitting a response relating to said query, wherein said selected list comprises an ordered plurality of entries, each among said ordered plurality of entries corresponds to one among a plurality of data providers, the response is based at least in part on data provided by one among said plurality of data providers, said ordered plurality of entries is ordered according to a predetermined preferability of the corresponding data provider with respect to the query characteristic, each among said ordered plurality of entries includes a corresponding query alias, and for each among said ordered plurality of entries, the corresponding query alias includes a mapping of a least a portion of the query into a namespace of the corresponding data provider. 2. The method according to claim 1, wherein the query conforms to a standard framework for communication of management data.3. The method according to claim 1, wherein said selecting includes following a path through a decision tree.4. The method according to claim 1, wherein said selecting includes choosing between at least one default list and at least one exception list.5. The method according to claim 1, wherein said transmitting a response includes converting the data.6. The method according to claim 5, wherein said converting the data includes normalizing the data.7. The method according to claim 1, wherein the data is provided to an interface module coupled to said one among said plurality of data provides.8. The method according to claim 1, wherein each among said ordered plurality of entries includes a priority order tag.9. A method comprising:receiving a query, said query including a query characteristic, wherein said query characteristic includes a query type and at least one query parameter; selecting one among a plurality of lists, said selecting being based on the query characteristic; and transmitting a response relating to said query, wherein said selected list comprises a plurality of entries, the response is based at least in part on data provided by one among said plurality of data providers, each among said plurality of entries includes a query type alias and parameter conversion information, and for each among said plurality of entries, the corresponding query type alias comprises a mapping of at least a portion of the query type into a namespace of the corresponding data provider. 10. The method according to claim 9, wherein the parameter conversion information includes at least one among a request format item and a response format item.11. 
An apparatus comprising a data storage medium, said data storage medium having machine-readable code stored thereon, the machine-readable code including instructions executable by an array of logic elements, the instructions defining a method to:receive a query, said query including a query characteristic; select one among a plurality of lists, said selection being based on the query characteristic; and transmit a response relating to said query, wherein said selected list comprises an ordered plurality of entries, each among said ordered plurality of entries corresponds to one among a plurality of data providers, the response is based at least in part on data provided by one among said plurality of data providers, and said ordered plurality of entries is ordered according to a predetermined preferability of the corresponding data provider with respect to the query characteristic. 12. An apparatus comprising:a decision structure configured and arranged to receive a query characteristic, and a plurality of priority lists, wherein said decision structure selects one among said plurality of priority lists based at least in part on the query characteristic, said selected priority list comprises a plurality of entries, each among said plurality of entries corresponds to one among a plurality of data providers, and said plurality of entries is ordered according to a predetermined preferability of the corresponding data provider with respect to the query characteristic. 13. An apparatus comprising:a data resolver configured and arranged to receive a query including a query characteristic, said data resolver having: a decision structure configured and arranged to receive a query characteristic; and a plurality of priority lists, and a plurality of interface modules configured and arranged to receive information relating to the query characteristic from said data resolver, wherein said decision structure selects one among said plurality of priority lists based at least in part on the query characteristic, said selected priority list comprises a plurality of entries, each among said plurality of entries corresponds to one among a plurality of data providers, each among said plurality of interface modules corresponds to one among the plurality of data providers, one among said plurality of interface modules receives data relating at least in part to the query characteristic from the corresponding data provider, and said plurality of entries is ordered according to a predetermined preferability of the corresponding data provider with respect to the query characteristic. 14. A system comprising:a data resolver configured and arranged to receive a query including a query characteristic; and a plurality of data providers, wherein at least two among said plurality of data providers are configured and arranged to supply data responsive to said query, said data resolver selects one from among said plurality of data providers based at least in part on the query characteristic, and said data resolver selects one from among said plurality of data providers based at least in part on a predetermined preferability of the corresponding data provider with respect to the query characteristic. 15. 
A program code storage device, comprising:a machine-readable storage medium; and machine-readable program code, stored on the machine-readable storage medium, having instructions to receive a query, said query including a query characteristic, wherein said query characteristic includes a query type and at least one query parameter; select one among a plurality of lists, the selection being based on the query characteristic; and transmit a response relating to said query, wherein said selected list comprises a plurality of entries, the response is based at least in part on data provided by one among said plurality of data providers, each among said plurality of entries includes a query type alias and parameter conversion information, and for each among said plurality of entries, the corresponding query type alias comprises a mapping of at least a portion of the query type into a namespace of the corresponding data provider. 16. The program code storage device of claim 15, wherein the parameter conversion information includes at least one among a request format item and a response format item.17. A system, comprising:a data resolver configured and arranged to receive a query including a query characteristic, to select one among a plurality of lists, the selection being based on the query characteristic, and to transmit a response to said query, and a plurality of data providers, wherein at least two among said plurality of data providers are configured and arranged to supply data responsive to said query; wherein said query characteristic includes a query type and at least one query parameter, said selected list comprises a plurality of entries, the response is based at least in part on data provided by one among said plurality of data providers, each among said plurality of entries includes a query type alias and parameter conversion information, and for each among said plurality of entries, the corresponding query type alias comprises a mapping of at least a portion of the query type into a namespace of the corresponding data provider. 18. The system according to claim 17, wherein the parameter conversion information includes at least one among a request format item and a response format item. |
BACKGROUND1. Field of the InventionThis invention relates to data management. More specifically, this invention relates to managing multiple data providers.2. Description of Related ArtMany modern computing environments may be characterized at least in part by a distributed model. In most business and academic settings, for example, users share access to resources such as storage, printing, and communications facilities over a local-area network. Distributed applications on a broader scale are supported by connections to larger networks such as the Internet. Yet even within individual computing devices, management of distributed resources and functions is becoming increasingly common. One reason for this trend is the establishment of multitasking operating systems and multiprocessor hardware.One consequence of distributed systems is that acquisition and management of system information becomes even more important to the proper configuration, operation, maintenance, and troubleshooting of a system. Such information may include data relating to device and/or network operation such as the characteristics and operating status of hardware and software components, the capacity and level of use of network pathways, and the history of usage of resources such as storage and printing devices. When different parts of a network are executing on different platforms, or when different components are supplied by different vendors, problems of incompatibility may arise.Several distributed management schemes attempt to overcome vendor and platform differences by providing a standard framework for communication of management data. These schemes include:SNMP (Simple Network Management Protocol), as described and developed in such documents as RFCs (Requests for Comments) 1157 (May 1990), 1514 (September 1993), and 2578 (April 1999) (available from University of Southern California-Information Sciences Institute (ISI, www.isi.edu), Marina Del Rey, Calif. and also available at www.rfc-editor.org). SNMP uses management models called MIB (management information base) objects or modules;CMIP (Common Management Information Protocol), an extension of SNMP that is described in RFC 1189 (October 1990) (available from ISI);DMI (Distributed Management Interface), as defined in the DMI 2.0s specification (available from Distributed Management Task Force, Inc. (DMTF, www.dmtf.org), c/o Mackenzie Kesselring, Portland, Oreg.). DMI uses management models called MIF (management information format) files; andCIM (Common Information Model), as defined in the CIM specification version 2.2 (Jun. 14, 1999) and CIM schema version 2.3 (available from DMTF).It may be desirable to support more than one such scheme in any particular system. For example, it is possible that no one scheme will be supported by all of the devices within the system. Alternatively, a component that is managed by one scheme may be added to a system that supports a different scheme. In cases where more than one management scheme exists in the same system, it is possible that multiple schemes may manage information of the same kind under different names, units, or relations or by association with different behaviors. Therefore, while it is desirable to accommodate multiple management schemes, it is also desirable to avoid confusion among the datasets they provide.A similar problem may be encountered in accessing information from data providers other than distributed management schemes. 
For example, with respect to an access directed across a network including a number of databases or an access directed across a wider network such as the Internet, it is possible that data responsive to a query relating to any particular subject may be available from more than one provider. A request submitted to an Internet search engine, for example, may be fulfilled in a similar fashion by a number of different search engines. Likewise, a request for a stock quotation may be answered similarly by a number of financial information sites. Requests for directions to a particular location, a review of a particular movie or restaurant, or a price quotation for a new car may also be handled by more than one data provider. In any one of these cases, each such provider may process the request somewhat differently and/or may provide a result in a different format than another provider.A certain provider may be preferred for a particular type of query, if not for a different type of query. It is desirable to direct a query to a preferred provider. However, a requesting entity may not have the information or the capacity to support a decision as to which provider is preferred for a particular type of query.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a block diagram of a system according to an embodiment of the invention;FIG. 2 is a flow chart of an implementation of a data resolver 120 according to an embodiment of the invention;FIG. 3 is a flow chart of an implementation P122 of selection task P120;FIG. 4 is a flow chart of an implementation P124 of selection task P120;FIG. 5 is a flow chart of an implementation P126 of selection task P120;FIG. 6 is a diagram showing a priority list;FIG. 7 is a flow chart of an implementation of a data resolver 120 including an implementation P125a of data collection task P125;FIG. 8 is a flow chart of an implementation of a data resolver 120 including an implementation P125b of data collection task P125;FIG. 9 is a flow chart of an implementation P182 of evaluation task P180;FIG. 10 is a diagram showing a priority list;FIG. 11 is a flow chart of an implementation P184 of evaluation task P180;FIG. 12 is a diagram showing a priority list;FIG. 13A is a diagram showing an example of a query;FIG. 13B is a diagram showing a priority list;FIG. 14 is a flow chart of an implementation P186 of evaluation task P180;FIG. 15 is a block diagram of a system according to an embodiment of the invention;FIG. 16 is a block diagram of a system including an apparatus 300 according to an embodiment of the invention;FIG. 17A is a block diagram of an implementation of an apparatus according to an embodiment of the invention; andFIG. 17B is a block diagram of an implementation of an apparatus according to another embodiment of the invention.DETAILED DESCRIPTION OF THE INVENTIONIt is desirable to free a requesting entity from the burden of having to select a particular provider. Additionally, it is desirable to return a result for a query in a form that is independent of the particular provider that processed the query.FIG. 1 shows a system according to an embodiment of the invention. In this system, data requestor 110 forwards a query to data resolver 120. Data requestor 110 may be any software application executing on the same machine as data resolver 120. For example, data requestor 110 may be a system monitoring agent or data collection agent, such as a historian component that logs system characteristics or a system health monitor that records and analyzes system resource usage. 
Alternatively, data requestor 110 may be any application executing on a different machine and communicating with data resolver 120 over a wired or wireless communications link. For example, data requestor 110 may be an administrative application executing on a network server and data resolver 120 may be an application executing on a client in the same network.Possible formats for the query received from data requestor 110 include object-oriented formats such as Managed Object Format (MOF) and syntaxes such as Extensible Markup Language (XML). For example, the query may conform to at least one among the distributed management schemes referenced above (SNMP, CMIP, DMI, CIM) or to a similar scheme such as Windows Management Interface (WMI, Microsoft Corp., Redmond, Wash.). Included in the query is a query characteristic that identifies the information requested and/or the subject matter of the query. For example, a query relating to a DVD (Digital Versatile Disk) drive may include an object class (e.g. MediaAccessDevice), a subclass (e.g. DVDDrive), and an indication of the particular drive property about which information is desired.Data resolver 120 receives the query and chooses one from among a set of lists according to the query characteristic. Each entry of the selected list (which may also be called a 'priority list') is associated with a data provider 130, and the entries in the list are ordered according to a predetermined preferability of the corresponding data provider with respect to the query characteristic.Data resolver 120 forwards a request based upon the query to the data provider 130i that is indicated by a entry chosen from the selected list by order of preferability. A data provider 130 may be a hardware structure (such as a sensor) or software structure (such as a registry file) that provides management information. For example, data provider 130 may be an object manager or database that collects management information and services queries according to a particular distributed management scheme. Data resolver 120 receives data responsive to the request from the data provider 130i, and a response based at least in part upon the data is then returned by data resolver 120 to data requestor 110.As described above, a system including a data resolver 120 allows a data requestor to submit all queries in a common namespace and across a single query channel. Such a system is also expandable, as support for additional data providers may be added by updating the priority lists.FIG. 2 is a flow chart that shows the operation of an implementation of data resolver 120 according to an embodiment of the invention. In query reception task P110, a query having a query characteristic is received, for example, from a data requestor 110 as described above. In another application of data resolver 120, the query may be received from a data requestor who is a human user (e.g., via a keyboard or other data input device over a wired or wireless communications link).In selection task P120, a list corresponding to the query characteristic is selected. As noted above, the entries in this list are ordered in a sequence that is established at least in part by their relative preferabilities or priorities with respect to the query characteristic. In data collection task P125, data providers 130 are visited in a sequence according to the order of the entries in the selected list until data responsive to the query is obtained. 
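The overall flow of FIG. 2 just described, in which a query is received, a priority list matching the query characteristic is selected, and the providers on that list are tried in order of preference until one returns data (the exhausted-list case corresponding to error handling task P150, discussed next), could be sketched as follows. The names and data shapes are assumptions for illustration and are not taken from the patent.

```python
class ResolutionError(Exception):
    """Raised when every provider on the selected priority list fails."""

def resolve(query, select_list, providers):
    """Minimal sketch of the data-resolver flow described above.

    query       -- mapping with at least a 'characteristic' key (illustrative)
    select_list -- callable mapping a query characteristic to an ordered
                   priority list of (provider_id, query_alias) entries
    providers   -- mapping from provider_id to a callable that returns data
                   for an aliased query, or None if it cannot respond
    """
    priority_list = select_list(query["characteristic"])
    for provider_id, query_alias in priority_list:   # most preferred first
        data = providers[provider_id](query_alias)
        if data is not None:
            return data                              # basis of the response
    raise ResolutionError(query["characteristic"])   # list exhausted
```

A selection function implementing, for example, the default-plus-exception choice of FIG. 4 described below could be supplied as `select_list`.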
If the list is exhausted before such data is obtained, data collection task P125 fails and an error is indicated in error handling task P150. If data is obtained, data resolver 120 transmits a response to the query (e.g. to the data requestor) in response task P200.FIG. 3 shows one implementation P122 of selection task P120 as a decision tree. Such an implementation may be used, for example, in situations where the types of data to which a query characteristic may relate can be arranged into a hierarchical grouping. In this example, a first classification decision is made at node P210. Depending upon that decision, a second classification decision is made at one among nodes P220, P230, and P240. The accumulated results of the classification decisions determine which list (i.e. which among the leaves P222, P224, P226, etc.) will be selected. The structure of the decision tree for a particular application need not be balanced or symmetric, and the number of nodes in the path to any particular leaf may be arbitrarily long.One example of a walk through a selection task P120 implemented as a decision tree having several levels of nodes is now described. In this example, the query received in query reception task P110 relates to the capacity of a DVD (Digital Versatile Disk) drive that is internal to the system. At a root node, a decision is made as to whether the query characteristic relates to a hardware device or a software application. The "hardware device" path is chosen, and the decision at the next node relates to whether the query characteristic corresponds to a storage device or a display device. The "storage device" path is chosen, and the decision at the next node relates to whether the storage device is internal to the system or external. In similar fashion, subsequent decisions are made as to whether the drive is removable or permanent, whether the storage medium is optical or magnetic, and whether the drive format is DVD or CD-ROM. At the final node, a decision is made to select a list of data providers arranged by preference with respect to information relating to a capacity of the internal DVD drive. In a similar manner, other paths through the decision tree may lead to lists corresponding to queries relating to other hardware components (i.e. CPU speed and/or temperature, hard disk drive status and/or capacity, etc.) or to software components (e.g. socket assignments, task schedules, shared resource usage, etc.).In certain applications, it may not be necessary to provide a separate list for each possible query characteristic, as one or more default lists may handle many or most possible queries appropriately. In such cases, the default list or lists may be supplemented by exception lists that handle other possible queries. FIG. 4 shows an alternate implementation P124 of selection task P120 suitable for one such application. In this example, the only exception is for queries relating to a CD-ROM drive manufactured by ABC Company. As shown in FIG. 4, a default priority list is selected in task P340 if the query characteristic does not relate to a disk drive (task P310), or if the query characteristic relates to a disk drive but does not relate to a CD-ROM drive (P320), or if the query characteristic relates to a CD-ROM drive that is not manufactured by ABC Company (task P330). 
If all of tasks P310, P320, and P330 are true, however, then the exception list is selected in task P350.In many cases, a decision structure that uses one or more default lists will consume less storage area than one using a full decision tree. A configuration using default lists may also take less time to configure. Another possible decision structure is a lookup table (e.g. indexed by the query characteristic), which may provide a faster decision but may also consume much more storage area.FIG. 5 shows an alternate implementation P126 of selection task P120. Such an implementation may be used to handle a situation having a default case and several exception cases. In this particular example, a query characteristic that indicates class 1 and sub-class b is associated with exception list B. A query characteristic that indicates class 2, sub-class b and sub-sub-class ii is associated with exception list C. All other query characteristics are associated with default list A. In a similar manner, selection between a number of default lists (e.g. for hardware vs. for software; for components supplied with the system as originally purchased or configured vs. for optional components, upgrade components, and/or components added later; etc.) may also be supported.As shown in FIG. 6, a list (or 'priority list') 150 may be configured as an ordered list of entries. The order of preference in which the various entries are arranged may be determined with respect to criteria such as availability, accuracy, and/or reliability. This order (as indicated by the column of numbers on the left-hand side of the figure) may be represented implicitly (e.g. by the relative positions of the entries within the list) or explicitly (e.g. by including a priority order tag within each entry). One advantage to using an explicit representation is that the order may be updated without moving the entries. Each entry has a data provider identifier, which may be any string (such as a network address, memory or port address, packet header, etc.) that unambiguously indicates a particular one among the data providers 130.As shown in FIG. 6, each list entry may also include a query alias. Each query alias is a string that represents a mapping of at least a portion of the query received in task P110 into the namespace of the data provider indicated by the corresponding data provider identifier. In a case where all of the data providers represented in the list recognize the query received in task P110, the query alias portion of each entry may be omitted. In a case where all of the data providers represented in the list recognize some other form of the query, associating a single instance of this recognized form with the list itself may be more efficient than replicating the recognized form as a query alias for each individual entry.Refreshing of the priority lists may be general or selective and may occur at installation or afterward. In one example, a priority list is retrieved from permanent storage at power-up and is updated (e.g. in accordance with the system configuration) as it is loaded into run-time memory.FIG. 7 is a flow chart that shows the operation of an implementation of data resolver 120 according to an embodiment of the invention, including an implementation P125a of data collection task P125. In evaluation task P130, the first entry on the list (i.e. the entry having the highest priority) is evaluated to determine whether data responsive to the query is available from the corresponding data provider. 
Evaluation task P130 may include preparing and forwarding a request to the data provider and analyzing any response from the data provider. If such data is not available (for example, because the data provider corresponding to the entry is not present in the system, is not on-line, is not responding to the request, or has returned an invalid response), then in loop test task P140 the presence of more entries on the list is determined (in another implementation, the unavailability of data from the data provider may also be reported to the data requester).If no more entries remain in the list (for example, if an end-of-list marker is encountered), then an error is indicated in error handling task P150. For example, an error response may be returned to the data requestor. If more entries remain, however, then in evaluation task P160 the next entry in the list is evaluated (as in task P130 described above). Once data responsive to the query has been obtained, a response based on the data is transmitted to the data requestor in response task P200.FIG. 8 shows a flowchart for an implementation of data resolver 120 according to an embodiment of the invention, including an implementation P125b of data collection task P125 that performs an alternate procedure of progressing down the selected list in a priority-based fashion. In initialization task P170, an entry counter n is initialized (for example, to have a value of one). In evaluation task P180, the n-th entry is evaluated (e.g. as described above) to determine the availability of data responsive to the query from the corresponding data provider. If the data is not available, then in update task P190 the value of the entry counter is updated (e.g., incremented) and the new value of the entry counter is tested in loop test task P145 to determine whether further list entries remain. If no such entries remain, then an error is indicated in error handling task P150 as described above. Procedures for progressing down the selected list in a priority-based fashion other than those illustrated in FIGS. 7 and 8 are also possible.FIG. 9 shows one implementation P182 of evaluation task P180 (evaluation tasks P130 and P160 may be implemented analogously). In request forwarding task P510, a request including the n-th query alias is forwarded to the corresponding data provider. If the query alias represents a complete query from the namespace of the corresponding data provider, then data resolver 120 may forward the query alias itself as the request. Alternatively, data resolver 120 may supplement the query alias with specific information from the query and/or the query characteristic to form the request. If it is determined in response evaluation task P520 that no response has been received from the data provider (e.g., a time-out occurs, the response is determined to be invalid, or the query is rejected), then the test fails. Otherwise, the data provider's response to the request is forwarded to the next task.FIG. 10 shows an alternative implementation of a priority list 152 for a case in which the availability of data from particular data providers may be determined at least in part in advance. In this example, each entry includes an available flag that indicates whether data responsive to the query is available from the data provider indicated by the corresponding data provider identifier. 
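The entry-counter procedure of implementation P125b (tasks P170, P180, P190, and P145), together with an evaluation step in the manner of implementation P182 and the available flag of FIG. 10, may be sketched as follows. The identifiers below are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ListEntry:
    provider_id: str
    query_alias: Optional[str]
    available: bool = True   # may be determined at least in part in advance, as for FIG. 10

def evaluate(entry: ListEntry, query: str,
             forward: Callable[[str, str], Optional[str]]) -> Optional[str]:
    """Forward a request based on the query alias and return any valid response."""
    if not entry.available:
        return None
    request = entry.query_alias or query        # alias maps the query into the provider namespace
    return forward(entry.provider_id, request)  # None models a time-out, rejection, or invalid response

def collect(selected_list: list[ListEntry], query: str,
            forward: Callable[[str, str], Optional[str]]) -> str:
    n = 0                                            # initialization task P170
    while n < len(selected_list):                    # loop test task P145
        data = evaluate(selected_list[n], query, forward)   # evaluation task P180
        if data is not None:
            return data
        n += 1                                       # update task P190
    raise LookupError("list exhausted")              # an error is indicated, as in task P150
```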
These flags may be updated (together, in sets, and/or individually) periodically or in response to events such as power-up, user registration, and/or system reconfiguration or. upgrading. As noted above, the query alias portion of the list entries may be omitted in cases where all of the data providers represented in the list recognize a common query form.FIG. 11 shows an alternative embodiment P184 of evaluation task P180 that references the available flag of each visited entry. In flag test task P530, the status of the flag corresponding to the n-th entry is checked. If the flag indicates that data responsive to the query is not available from the corresponding data provider, then the task fails. Otherwise, a request including the corresponding query alias is forwarded to the data provider in request forwarding task P540. In a further implementation, data received from the data provider may be tested for validity before it is forwarded to the next task.In response task P200, a response based at least in part on the data obtained from the data provider is transmitted to the data requester. Response task P200 may include formatting the data into a response format appropriate for presentation to data requester 110. For example, it may be necessary to include information in the response to establish a context for the data (e.g. information similar to the query characteristic), and/or it may be necessary to present portions of the data in a specified sequence within the response. In an exemplary implementation, communications between the data requestor and data resolver 120 conforms to a distributed management scheme as referenced above.As the response format may change with the nature of the query, information regarding a corresponding response format may be associated with each priority list. Alternatively, information regarding the response formats that correspond to several or all possible queries may be stored in a single data structure indexed, for example, by the query characteristic. Data processing operations (such as data normalization as described below) may also be performed on the data before the response is transmitted.One problem that may be encountered in dealing with multiple data providers is that responses by different providers to the same query may represent similar values yet appear completely different. In the CIM distributed management scheme, for example, free disk space is reported in units of bytes, while in the DMI scheme, free disk space is reported in units of kilobytes. FIG. 12 shows an alternative implementation 154 of a priority list in which each entry includes one or more conversion factors. These factors, which are associated with a particular data provider and query characteristic, may be used during response formatting to normalize the data obtained from the data provider to a common reference.In some applications, the relative preferabilities of the various data providers may be determined from only a portion of the query characteristic. For example, the same relative preferabilities may be associated with a group of queries related by at least one common feature, such as all queries relating to a particular hardware or software component or all queries within the same class or subclass. In such cases, it may be desirable to divide the query characteristic into two sections. FIG. 13A shows an example of a query characteristic that includes a query type and one or more query parameters. 
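A query characteristic split into a query type and query parameters, together with the per-entry conversion factors discussed above, may be sketched as follows. The identifiers and the factor values are illustrative assumptions; the factors follow the free-disk-space example, in which a CIM provider reports bytes and a DMI provider reports kilobytes.

```python
from dataclasses import dataclass, field

@dataclass
class QueryCharacteristic:
    query_type: str                                  # determines which priority list applies
    parameters: dict = field(default_factory=dict)   # the specific property requested

CONVERSION_FACTORS = {
    ("FreeDiskSpace", "cim_provider"): 1,     # CIM reports free disk space in bytes
    ("FreeDiskSpace", "dmi_provider"): 1024,  # DMI reports free disk space in kilobytes
}

def normalize(characteristic: QueryCharacteristic, provider_id: str, value: float) -> float:
    """Normalize provider data to a common reference using the entry's conversion factor."""
    return value * CONVERSION_FACTORS.get((characteristic.query_type, provider_id), 1)

q = QueryCharacteristic("FreeDiskSpace", {"volume": "C:"})
# 2048 KB reported by a DMI provider and 2097152 bytes reported by a CIM provider
# normalize to the same value before response formatting:
assert normalize(q, "dmi_provider", 2048) == normalize(q, "cim_provider", 2097152)
```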
In this example, the relative preferabilities of the various data providers are based on the query type, while the query parameters indicate the specific information requested in a particular query (e.g. one or more properties as identified in the query). For example, the query type may indicate that the query relates to the CPU, while the query parameters may indicate whether the query relates to the type, speed, version, or temperature of the CPU.FIG. 13B shows an alternative implementation 156 of a priority list in which each entry includes a query type alias and parameter conversion information. In one example, the parameter conversion information includes a set of item pairs that is indexed by the query parameters, each item pair including (1) a request format item which relates the query parameters to a request format appropriate for presentation to the corresponding data provider and (2) a response format item which relates data received from the data provider to a response format appropriate for presentation to the data requester. Evaluation of a list entry in this case may include preparing a request that is based at least in part on the query type alias and the indicated request format item and forwarding the request to the corresponding data provider. Responding to the data requestor may include preparing a response that is based at least in part on data received from the data provider and the indicated response format item and transmitting the response to the data requestor.FIG. 14 shows a flowchart for an implementation P186 of evaluation task P182. In request preparation task P550, a request is prepared which relates to the query received in task P110 and is appropriate for presentation to the indicated data provider. This request is forwarded to the corresponding data provider in forwarding task P560, and the provider's availability is determined in response evaluation task P570.Although the discussion above relates to obtaining data from one or more data providers (e.g. performing a 'Get' operation), it may also be desirable to send data to one or more data providers (e.g. to perform a 'Set' operation) in a similar fashion. In such case, the query received from the data requester (and the corresponding request that is forwarded to the data provider) specify data being sent rather than data being requested. For a binary operation (e.g. toggling a flag), it may be sufficient to forward a request based on the query alias. For other operations, a data payload may be included in one or more query parameters of a query characteristic as shown in FIG. 13A. Operations such as request formatting and response formatting (e.g. to return an acknowledgement to the data requester) may also be performed as described above.In one example, the data requestor sends a query including a query characteristic that specifies a disk use watermark in megabytes. The appropriate priority list indicates that a data provider which uses the CIM distributed management scheme is preferred for this query characteristic. The corresponding list entry also indicates that the request format appropriate for this data provider includes the property 'DiskIsFullThreshold' and a value in kilobytes. The request (including a converted data value) is formatted and forwarded to the data provider, and data received from the data provider is formatted and transmitted to the data requestor as an acknowledgement response.FIG. 15 shows a system according to a further embodiment of the invention. 
In this example, data resolver 126 receives a query from data requestor 110 and transmits a response to data consumer 140. In another implementation, the system may include more than one data requestor and/or consumer.FIG. 16 shows an apparatus 300 according to yet another embodiment of the invention. In this implementation, data resolver 128 communicates with at least some of the data providers 134 through interface modules 140. For example, data resolver 128 may forward the query alias (or the query type alias and query parameters, as appropriate) to the corresponding interface module 140, which may perform some or all of the appropriate request formatting, request forwarding, and data receiving operations. Interface module 140 may also receive data responsive to the request from the data provider and may perform some or all of the appropriate data processing and response formatting operations before forwarding the data (or the response) to data resolver 128 (or, in another application, directly to the data requester). In one example, an interface module 140 supports communication between data resolver 128 and another machine through remote procedure call (RPC).Use of one or more interface modules 140 may be useful in bridging interfaces between networks, in resolving hardware-related issues such as signal incompatibilities, and/or in executing software-related tasks such as function call or protocol mapping. Additionally, incorporating interface modules into the apparatus allows a design that is more easily upgraded and extended. For example, an additional distributed management scheme may be supported with only limited modification to data resolver 128 by adding an appropriate interface module 140.The foregoing presentation of the described embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments are possible, and the generic principles presented herein may be applied to other embodiments as well. For example, the invention may be implemented in part or in whole as a hard-wired circuit or as a circuit configuration fabricated into an application-specific integrated circuit or field-programmable gate array. Likewise, as shown in FIG. 17A, the invention may be implemented in part or in whole as a firmware program 500 loaded or fabricated into non-volatile storage 510 (such as read-only memory or flash memory) as machine-readable code, such code being instructions executable by an array of logic elements 520 such as a microprocessor or other digital signal processing unit.Further, as shown in FIG. 17B, the invention may be implemented in part or in whole as a software program 530 loaded as machine-readable code from or into a data storage medium 540 such as a magnetic, optical, magnetooptical, or phase-change disk or disk drive; a semiconductor memory; or a printed bar code. Thus, the present invention is not intended to be limited to the embodiments shown above but rather is to be accorded the widest scope consistent with the principles and novel features disclosed in any fashion herein. |
A method of forming a semiconductor structure comprises forming an array of vertical thin film transistors. Forming the array of vertical thin film transistors comprises forming a source region, forming a channel material comprising an oxide semiconductor material over the source region, exposing the channel material to a dry etchant comprising hydrogen bromide to pattern the channel material into channel regions of adjacent vertical thin film transistor structures, forming a gate dielectric material on sidewalls of the channel regions, forming a gate electrode material adjacent to the gate dielectric material, and forming a drain region over the channel regions. Related methods of forming semiconductor structures and an array of memory cells are also disclosed. |
CLAIMS
What is claimed is: 1. A method of forming a semiconductor structure, the method comprising: forming an array of vertical thin film transistors, forming the array of vertical thin film transistors comprising: forming a source region; forming a channel material comprising an oxide semiconductor material over the source region; exposing the channel material to a dry etchant comprising hydrogen bromide to pattern the channel material into channel regions of adjacent vertical thin film transistor structures; forming a gate dielectric material on sidewalls of the channel regions; forming a gate electrode material adjacent to the gate dielectric material; and forming a drain region over the channel regions. 2. The method of claim 1, wherein forming a channel material comprising the oxide semiconductor material comprises forming a channel material comprising an In:Ga:Zn:O ratio of 1:1:1:4, an In2O3:Ga2O3:ZnO ratio of 2:2:1, or InGaO3(ZnO)5. 3. The method of claim 1, wherein exposing the channel material to a dry etchant comprising hydrogen bromide comprises exposing the channel material to a dry etchant comprising hydrogen bromide and at least one alkane. 4. The method of claim 3, wherein exposing the channel material to a dry etchant comprising hydrogen bromide and at least one alkane comprises exposing the channel material to a dry etchant comprising hydrogen bromide and methane. 5. The method of claim 4, wherein exposing the channel material to a dry etchant comprising hydrogen bromide and methane comprises exposing the channel material to between about 0.1 part and about 5.0 parts methane for about every 1.0 part hydrogen bromide. 6. The method of claim 1, wherein patterning the channel material comprises forming each channel region to be between about 10 nm and about 40 nm from an adjacent channel region in a first direction and between about 20 nm and about 50 nm from an adjacent channel region in a second direction. 7. The method of claim 1, wherein exposing the channel material to a dry etchant comprising hydrogen bromide comprises exposing the channel material to a dry etchant comprising hydrogen bromide, methane, hydrogen, and nitrogen trifluoride. 8. The method of claim 1, wherein exposing the channel material to a dry etchant comprising hydrogen bromide comprises: exposing the channel material to a first composition comprising hydrogen bromide; and after exposing the channel material to the first composition, exposing the channel material to a second composition comprising at least one of hydrogen and nitrogen trifluoride. 9. The method of claim 1, wherein exposing the channel material to a dry etchant comprises exposing the channel material to the dry etchant at a temperature greater than about 50°C. 10. The method of claim 1, further comprising applying a bias voltage greater than about 2,000 V while exposing the channel material to the dry etchant. 11. The method of claim 1, wherein forming a gate electrode material adjacent to the gate dielectric material comprises forming a gate electrode material comprising titanium nitride adjacent to the gate dielectric material. 12. 
The method of claim 1, wherein exposing the channel material to a dry etchant comprising hydrogen bromide to pattern the channel material into channel regions of adjacent vertical thin film transistor structures comprises:exposing the channel material to the dry etchant to form lines of the channel material in a first direction; andexposing portions of the lines of the channel material to the dry etchant to form the channel regions of adjacent vertical thin film transistors.13. A method of forming a semiconductor structure, the method comprising: forming conductive source regions;patterning a channel material comprising an oxide semiconductor material over the conductive source regions to form rows of the channel material extending in a first direction, wherein patterning the channel material comprises exposing the channel material to a dry etchant comprising a hydrogen bromide-containing gas;forming a gate oxide on sidewalls of the rows of the channel material;forming a gate electrode adjacent to the gate oxide; andpatterning the rows of the channel material to form isolated channel regions comprisingvertical thin film transistors. 14. The method of claim 13, wherein patterning a channel material comprising the oxide semiconductor material comprises patterning a channel material to have a height between about 40 nm and about 100 nm.15. The method of claim 13, wherein patterning a channel material comprising an oxide semiconductor material to form rows of the channel material comprises forming the rows of the channel material to have a pitch between about 10 nm and about 40 nm.16. The method of claim 13, wherein patterning the rows of the channel material to form isolated channel regions comprises forming the isolated channel regions to have a width between about 5 nm and about 40 nm.17. The method of claim 13, wherein patterning a channel material comprising an oxide semiconductor material comprises exposing the channel material to a dry etchant further comprising hydrogen gas. 18. The method of claim 13, wherein exposing the channel material to a dry etchant comprises exposing the channel material to a dry etchant comprising nitrogen trifluoride.19. The method of claim 18, wherein exposing the channel material to a dry etchant comprises exposing the channel material to a dry etchant comprising methane.20. The method of claim 13, wherein patterning a channel material comprises forming the rows of the channel material to have sidewalls having an angle between about 80°and about 90° with respect to a major surface of a substrate.21. The method of claim 13, wherein forming the conductive source regions comprises forming the conductive source regions to comprise tungsten.22. The method of claim 13, further comprising forming a capacitor structure over and in contact with each thin film transistor.23. The method of claim 13, further comprising forming a drain region over the channel material of each of the vertical thin film transistors. 24. The method of claim 20, wherein exposing the channel material to the dry etchant comprises exposing the channel material to methane, hydrogen, and nitrogen trifluoride. |
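The sequence and the dimensional windows recited in claims 13 through 21 may be summarized, for example, as the following illustrative sketch; the step names and the ProcessStep structure are assumptions introduced only for this summary and are not part of the claims.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    name: str
    parameters: dict = field(default_factory=dict)

CLAIMED_FLOW = [
    ProcessStep("form conductive source regions", {"material": "tungsten (claim 21)"}),
    ProcessStep("pattern channel material into rows by HBr-containing dry etch",
                {"row_pitch_nm": (10, 40), "channel_height_nm": (40, 100),
                 "sidewall_angle_deg": (80, 90)}),
    ProcessStep("form gate oxide on row sidewalls"),
    ProcessStep("form gate electrode adjacent to the gate oxide"),
    ProcessStep("pattern rows into isolated channel regions",
                {"channel_width_nm": (5, 40)}),
]

for step in CLAIMED_FLOW:
    print(step.name, step.parameters)
```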
METHODS OF FORMING SEMICONDUCTOR STRUCTURES COMPRISING THIN FILM TRANSISTORS INCLUDING OXIDE SEMICONDUCTORS
PRIORITY CLAIM
This application claims the benefit of the filing date of United States Patent Application Serial No. 16/114,614, filed August 28, 2018, and titled "METHODS OF FORMING SEMICONDUCTOR STRUCTURES COMPRISING THIN FILM TRANSISTORS INCLUDING OXIDE SEMICONDUCTORS," which claims the benefit of United States Provisional Patent Application No. 62/522,159, filed August 30, 2017, and titled "METHODS OF FORMING SEMICONDUCTOR STRUCTURES COMPRISING THIN FILM TRANSISTORS INCLUDING OXIDE SEMICONDUCTORS."
TECHNICAL FIELD
Embodiments disclosed herein relate to methods of forming semiconductor structures including vertical thin film transistors comprising oxide semiconductors, and to related semiconductor structures. More particularly, embodiments of the disclosure relate to methods of forming semiconductor structures comprising an array of vertical thin film transistors having a channel region which, in some embodiments, may comprise an oxide, methods of patterning the vertical thin film transistors, and to related semiconductor structures.
BACKGROUND
Conventional volatile memory cells, such as dynamic random access memory (DRAM) cells, may include a storage element and a transistor. The storage element may, for example, include a capacitor (e.g., sometimes referred to as a "cell capacitor" or a "storage capacitor") configured to store a logical state (e.g., a binary value of either a "0" or a "1") defined by the storage charge in the capacitor. The transistor may be referred to in the art as an "access transistor." The transistor conventionally includes a channel region between a pair of source/drain regions and further includes a gate configured to electrically connect the source/drain regions to one another through the channel region. The channel region conventionally includes a semiconductor material, such as silicon. To charge, discharge, read, or recharge the capacitor, the transistor may be selectively turned to an "on" state, in which current flows between the source and drain regions through the channel region of the transistor. The transistor may be selectively turned to an "off" state, in which the flow of current is substantially stopped. In the off state, it is desired for the capacitor to retain the charge, without change. However, capacitors of conventional volatile memory cells may exhibit discharges of current over time and a resulting loss in stored charge. Therefore, even in the "off" state when the memory cell is unselected, current may flow from the capacitor. This off-state leakage current is referred to in the art as a subthreshold leakage current. Due to sub-threshold leakage current, conventional volatile memory cells are frequently refreshed. The sub-threshold leakage current may also impact the fabrication and configuration of an array of memory cells within a memory device. Sub-threshold leakage current rates, refresh rates, cell size, and thermal budgets of memory cells are often important considerations in the design, fabrication, and use of volatile memory cells and arrays of cells incorporated in memory devices. Methods of forming channel regions often include etching the channel regions of such structures with a wet etchant such as oxalic acid. However, the use of such etchants often forms a residue on such structures. The residue left behind by such etchants may change the material properties of the channel regions. 
In addition, wet etchants are often unable to achieve a desirably high packing density of the structures being patterned.
SUMMARY
Embodiments disclosed herein relate to methods of forming semiconductor devices including vertical thin film transistors comprising oxide semiconductors, and to related semiconductor devices. For example, in accordance with some embodiments, a method of forming a semiconductor structure comprises forming an array of vertical thin film transistors. Forming the array of thin film transistors comprises forming a source region, forming a channel material comprising an oxide semiconductor material over the source region, exposing the channel material to a dry etchant comprising hydrogen bromide to pattern the channel material into channel regions of adjacent vertical thin film transistor structures, forming a gate dielectric material on sidewalls of the channel regions, forming a gate electrode material adjacent to the gate dielectric material, and forming a drain region over the channel regions. In additional embodiments, a method of forming a semiconductor structure comprises forming conductive source regions, patterning a channel material comprising an oxide semiconductor material over the conductive source regions to form rows of the channel material extending in a first direction, wherein patterning the channel material comprises exposing the channel material to a dry etchant comprising a hydrogen bromide-containing gas, forming a gate oxide on sidewalls of the rows of the channel material, forming a gate electrode adjacent to the gate oxide, and patterning the rows of the channel material to form isolated channel regions comprising vertical thin film transistors. In further embodiments, a method of forming an array of memory cells comprises forming an array of vertical thin film transistors. Forming the array of vertical thin film transistors comprises exposing a channel material comprising an oxide semiconductor material over a conductive source material to a dry etchant comprising hydrogen bromide to form rows of the channel material, forming a gate oxide on sidewalls of the rows of the channel material, forming a gate electrode adjacent to the gate oxide, exposing the rows of the channel material to the dry etchant to form isolated vertical thin film transistor structures, and forming a drain region over the channel material of each of the isolated vertical thin film transistors. The method further comprises forming a capacitor structure over and in contact with the drain region of each of the isolated vertical thin film transistors.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A is a simplified cross-sectional view of a semiconductor structure, in accordance with some embodiments of the disclosure;
FIG. 2A is a simplified cross-sectional view of an array of vertical thin film transistors, in accordance with some embodiments of the disclosure;
FIG. 2B is a simplified cross-sectional view of the array of vertical thin film transistors of FIG. 2A, taken along section line B-B of FIG. 2A; and
FIG. 3A through FIG. 3K illustrate a method of forming an array of vertical thin film transistors, in accordance with some embodiments of the disclosure.
MODE(S) FOR CARRYING OUT THE INVENTION
The illustrations included herewith are not meant to be actual views of any particular systems or semiconductor structures, but are merely idealized representations that are employed to describe embodiments herein. 
Elements and features common between figures may retain the same numerical designation except that, for ease of following the description, for the most part, reference numerals begin with the number of the drawing on which the elements are introduced or most fully described. The following description provides specific details, such as material types, material thicknesses, and processing conditions in order to provide a thorough description of embodiments described herein. However, a person of ordinary skill in the art will understand that the embodiments disclosed herein may be practiced without employing these specific details. Indeed, the embodiments may be practiced in conjunction with conventional fabrication techniques employed in the semiconductor industry. In addition, the description provided herein does not form a complete description of a semiconductor structure including vertical thin film transistors comprising an oxide semiconductor, or a complete description of a process flow for manufacturing such semiconductor structures. The structures described below do not form a complete vertical thin film transistor or semiconductor structure. Only those process acts and structures necessary to understand the embodiments described herein are described in detail below. Additional acts to form a complete semiconductor structure or vertical thin film transistors including an oxide semiconductor described herein may be performed by conventional techniques. According to embodiments disclosed herein, a semiconductor structure includes an array of memory cells and vertical thin film transistors. The vertical thin film transistors may include a channel region formed between a source region and a drain region. In some embodiments, the channel region may include an oxide semiconductor material. Such channel regions including the oxide semiconductor material may exhibit a reduced amount of off-state leakage and may exhibit a lower off-state current than conventional materials. The vertical thin film transistors may be formed by etching the material of the channel region with a dry etchant, such as is employed in reactive ion etching (RIE). The dry etchant may include hydrogen bromide and a carrier gas. In some embodiments, the dry etchant further includes one or more of an alkane (e.g., methane), hydrogen, nitrogen trifluoride, and oxygen. Such a dry etchant facilitates formation of the vertical thin film transistors comprising the oxide semiconductor material with a packing density greater than that achievable with other etching methods. For example, the array of thin film transistors may be formed having a pitch between about 10 nm and about 40 nm in a first direction, and a pitch of between about 20 nm and about 50 nm in a second direction, substantially perpendicular to the first direction. Thus, the vertical thin film transistors formed according to embodiments of the disclosure may be used in ultra large-scale integration or high density memory circuits. The channel regions of the vertical thin film transistors may have sidewalls that are substantially perpendicular (i.e., sidewalls oriented at an angle of about 90° with respect to a major surface of the semiconductor structure). The vertical thin film transistors formed according to such methods may be substantially free of any residue materials formed during etching of the channel regions and the material properties of the channel region materials may be substantially unaffected by the etchants used to etch such materials. 
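For example, taking a pitch of about 24 nm in the first direction and about 36 nm in the second direction (values chosen from within the ranges above solely for illustration), the corresponding packing density may be estimated with the short arithmetic sketch below; the specific values and names are assumptions.

```python
pitch_x_nm = 24.0    # within the 10 nm - 40 nm range in the first direction
pitch_y_nm = 36.0    # within the 20 nm - 50 nm range in the second direction

cell_area_nm2 = pitch_x_nm * pitch_y_nm     # area occupied per vertical thin film transistor
transistors_per_um2 = 1e6 / cell_area_nm2   # 1 square micrometer = 1e6 square nanometers

print(f"{cell_area_nm2:.0f} nm^2 per transistor, "
      f"~{transistors_per_um2:.0f} transistors per square micrometer")
# 864 nm^2 per transistor, ~1157 transistors per square micrometer
```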
The method of etching the channel material facilitates formation of smooth lines and clean surfaces of the channel regions, and facilitates formation of the array of vertical thin film transistors with a narrow pitch and spacing. FIG. 1 is a simplified cross-sectional view of a semiconductor device 100, in accordance with some embodiments of the disclosure. The semiconductor device 100 may include a logic circuitry region 102, a transistor region 104 connected to the logic circuitry region 102 with, for example, a conductive interconnect 106, a capacitor region 108 in communication with the transistor region 104, and an interconnect region 110 over the capacitor region 108. The capacitor region 108 may include storage capacitors associated with memory cells and may be configured to store a logic value of the memory cell with which it is associated. The interconnect region 110 may include interconnect circuitry for electrically coupling the semiconductor structure and components thereof to one or more other components of the semiconductor device 100. The interconnect region 110 may include one or more conductive materials. The logic circuitry region 102 may be formed on or in a substrate 101. The substrate 101 may be a base material or a construction upon which additional materials are formed. The substrate 101 may be a semiconductor substrate, a base semiconductor layer on a supporting structure, a metal electrode, or a semiconductor substrate having one or more layers, structures or regions formed thereon. The substrate 101 may be a conventional silicon substrate or other bulk substrate comprising a layer of semiconductive material. As used herein, the term "bulk substrate" means and includes not only silicon wafers, but also silicon-on-insulator ("SOI") substrates, such as silicon-on-sapphire ("SOS") substrates and silicon-on-glass ("SOG") substrates, epitaxial layers of silicon on a base semiconductor foundation, and other semiconductor or optoelectronic materials, such as silicon-germanium, germanium, gallium arsenide, gallium nitride, and indium phosphide. The substrate 101 may be doped or undoped. The logic circuitry region 102 may include complementary metal oxide semiconductor (CMOS) circuitry at the substrate level. By way of nonlimiting example, planar transistor structures 112 (e.g., NMOS transistor structures, PMOS transistor structures, etc.) are arranged over the substrate 101. The planar transistor structures 112 may include one or more gate dielectric materials 114 extending between source/drain regions 116. A gate electrode material 118 may overlie the gate dielectric material 114 and may be configured to be in electrical communication with one or more components of the semiconductor device 100. The gate electrode material 118 may include a conductive material, such as titanium nitride (TiN), copper, tungsten, tungsten nitride (WN), molybdenum, polysilicon, other conductive materials, or combinations thereof. A cap material 120 may overlie the gate electrode material 118. The cap material 120 may include an insulative material, such as, for example, silicon dioxide, silicon nitride, or a combination thereof. Sidewall spacers 122 may be on sidewalls of the planar transistor structures 112. 
The sidewall spacers 122 may include an insulative material, such as, for example, silicon dioxide, silicon nitride, or a combination thereof. A conductive interconnect 124 may electrically couple one of the source/drain regions 116 of at least some of the planar transistor structures 112 to the conductive interconnect 106, which, in turn, may be coupled to the transistor region 104. The conductive interconnect 124 may be in electrical communication with the conductive interconnect 106 through, for example, a conductive line 126. The capacitor region 108 may include capacitor structures 130, each including a first electrode 132 in contact with the transistor region 104, a dielectric material 134 in contact with the first electrode 132, and a second electrode 136 in contact with the dielectric material 134. Accordingly, the dielectric material 134 may be disposed between the first electrode 132 and the second electrode 136. The second electrode 136 may be in electrical communication with the interconnect region 110 through one or more conductive interconnects, such as conductive interconnect 138. The first electrode 132 and the second electrode 136 may include a conductive material, such as a metal, a metal alloy, a conductive metal oxide, a conductive metal nitride, a conductive metal silicide, a conductively doped semiconductor material, or combinations thereof. The first electrode 132 and the second electrode 136 may independently comprise, for example, at least one of W, WN, Ni, Ta, TaN, TaSi, Pt, Cu, Ag, Au, Al, Mo, Ti, TiN, TiSi, TiSiN, TiAlN, MoN, Ir, IrOx, Ru, RuOx, and conductively doped silicon. The dielectric material 134 may include suitable dielectric materials for retaining a charge of the capacitor. In some embodiments, the dielectric material 134 comprises a ferroelectric material, such as ferroelectric hafnium oxide, ferroelectric zirconium oxide, lead zirconate titanate (PZT), barium strontium titanate, a high-k dielectric material, or combinations thereof. In some embodiments, the dielectric material 134 may include a dopant, such as one or more of silicon, aluminum, lanthanum, yttrium, erbium, calcium, magnesium, strontium, a rare earth element, or combinations thereof. The dielectric material 134 may be configured to store a charge or other property associated with a logic state of a memory cell associated with the capacitor structure 130. Accordingly, the capacitor structure 130 may be referred to as a "cell capacitor" or a "storage capacitor." Although FIG. 1 illustrates the capacitor structure 130 as comprising a trench capacitor, the disclosure is not so limited. In other embodiments, the capacitor structure 130 may comprise a capacitor other than a trench capacitor. By way of nonlimiting example, the capacitor structure 130 may comprise stacked capacitors. Conductive interconnects 138 may electrically connect the capacitor structures 130 to the interconnect region 110. The interconnect region 110 may include conductive contacts 170, 174, 178 and conductive lines 172, 176 for electrically connecting the capacitor structures 130 to external circuitry or other components of the semiconductor structure 100. The conductive contacts 170, 174, 178 and conductive lines 172, 176 may include conductive materials such as titanium nitride, copper, tungsten, tungsten nitride, molybdenum, polysilicon, other conductive materials, or combinations thereof. With reference to FIG. 1 and FIG. 
2A, the transistor region 104 may include vertical thin film transistors 140, details of which are illustrated in FIG. 2A. The vertical thin film transistors 140 may be arranged in an array 200. Each vertical thin film transistor 140 may include a source region (e.g., a source line) 142 in electrical communication with the conductive interconnect 106 for forming an electrical connection between the vertical thin film transistor 140 and the logic circuitry region 102. The source region 142 may include a metal, a combination of metals, or regions of different metals. For example, and without limitation, the source region 142 may include titanium nitride, copper, tungsten, tungsten nitride, molybdenum, polysilicon, other conductive materials, or combinations thereof. In some embodiments, the source region 142 comprises tungsten. A channel material 144 may overlie and be in direct contact with the source region 142. The channel material 144 may include an oxide semiconductor material. By way of nonlimiting example, the oxide semiconductor material may include indium gallium zinc oxide, an amorphous oxide semiconductor material, ZnOx, InOx, In2O3, SnO2, TiOx, ZnxOyNz, InxZnyOz, InxGayZnzOa, ZrxInyZnzOa, HfxInyZnzOa, SnxInyZnzOa, AlxSnyInzZnaOd, SixInyZnzOa, ZnxSnyOz, AlxZnySnzOa, GaxZnySnzOa, ZrxZnySnzOa, InGaSiO, and combinations thereof, wherein each of x, y, z, a, and d are independently real numbers between about 1 and about 10. In other words, each of x, y, z, a, and d may be equal to any value between about 1 and about 10 and may be different than the others of x, y, z, a, and d. In some embodiments, the channel material 144 comprises indium gallium zinc oxide. Indium gallium zinc oxide may include any composition of indium (In), gallium (Ga), zinc (Zn), and oxygen (O). For example, without limitation, indium gallium zinc oxide may have an In:Ga:Zn:O ratio of 1:1:1:4, may have an In2O3:Ga2O3:ZnO ratio of 2:2:1, or may be represented by the formula InGaO3(ZnO)5. In some embodiments, indium may constitute from about 20 atomic percent to about 60 atomic percent, such as from about 20 atomic percent to about 40 atomic percent, of the semiconductor material 114, based on the other metal atoms of the semiconductor material 114. Gallium may constitute from about 20 atomic percent to about 60 atomic percent, such as from about 35 atomic percent to about 55 atomic percent, of the semiconductor material 114, based on the other metal atoms of the semiconductor material 114 (i.e., not including oxygen atoms). Zinc may constitute from about 20 atomic percent to about 60 atomic percent, such as from about 20 atomic percent to about 40 atomic percent, of the semiconductor material 114, based on the other metal atoms of the semiconductor material 114. In embodiments where the channel material 144 comprises indium gallium zinc oxide, the channel material 144 may exhibit a high ratio of "on" state current to "off" state leakage current. For example, the channel material 144 may exhibit an off-state current leakage of approximately 1 × 10⁻²⁴ A and an on-to-off current ratio of about 1,000,000,000 to 1. The low off-state leakage current may be conducive for use of the channel material 144 in a memory cell that does not necessitate refreshing more than about once every hour (e.g., once every ten hours, once every twenty-four hours, etc.). A drain region (e.g., a drain line) 146 may overlie and directly contact the channel material 144. 
The drain region 146 may include any suitable conductive material formulated and configured to facilitate a flow of current between the drain region 146 and the source region 142 through the channel material 144. The drain region 146 may include a metal, a combination of metals, or regions of different metals. For example, and without limitation, the drain region 146 may include titanium nitride, copper, tungsten, tungsten nitride, molybdenum, polysilicon, other conductive materials, or combinations thereof. In some embodiments, the drain region 146 comprises the same material as the source region 142. In some embodiments, the drain region 146 comprises tungsten. The drain region 146 may be in electrical communication with the first electrode 132 of the capacitor structure 130 of the capacitor region 108. Accordingly, the vertical thin film transistor 140 may be configured to provide access to the memory material (e.g., the dielectric material 134) of the storage capacitor structure 130. A gate dielectric material 148 (FIG. 2A) may be on sidewalls of the channel material 144. A gate electrode 150 (FIG. 2A) may be on sidewalls of the gate dielectric material 148 and may be adjacent to the gate dielectric material 148. The gate dielectric material 148 may include a gate insulator material, such as an oxide (e.g., silicon dioxide (SiO2)). In other embodiments, the gate dielectric material 148 may include phosphosilicate glass, borosilicate glass, borophosphosilicate glass (BPSG), fluorosilicate glass, titanium dioxide, zirconium dioxide, hafnium dioxide, tantalum oxide, magnesium oxide, aluminum oxide, or a combination thereof, a nitride material (e.g., silicon nitride (Si3N4)), an oxynitride (e.g., silicon oxynitride), or combinations thereof. The gate electrode 150 may include a conductive material, such as a metal, a metal alloy, a conductive metal oxide, a conductive metal nitride, a conductive metal silicide, a conductively doped semiconductor material, or combinations thereof. By way of nonlimiting example, the gate electrode 150 may include polysilicon, conductively doped silicon, tungsten, tungsten nitride, nickel, tantalum, tantalum nitride, tantalum silicide, platinum, copper, silver, gold, aluminum, molybdenum, titanium, titanium nitride, titanium silicide, titanium aluminum nitride, molybdenum nitride, iridium, iridium oxide, ruthenium, ruthenium oxide, and combinations thereof. In some embodiments, the gate electrode comprises titanium nitride. The gate electrode 150 may be in electrical communication with a conductive line, such as a conductive word line 160. The conductive word line 160 may extend in rows and electrically connect each vertical access transistor 140 of a row of vertical access transistors to each other. The conductive word line 160 may include a conductive material, such as polysilicon, conductively doped silicon, tungsten, tungsten nitride, nickel, tantalum, tantalum nitride, tantalum silicide, platinum, copper, silver, gold, aluminum, molybdenum, titanium, titanium nitride, titanium silicide, titanium aluminum nitride, molybdenum nitride, iridium, iridium oxide, ruthenium, ruthenium oxide, and combinations thereof. 
In some embodiments, the conductive word line 160 includes titanium nitride.In use and operation, an individual vertical access transistor 140 may be accessed by applying a voltage through a row associated with the vertical access transistor 140 (via the conductive word line 160), and applying a voltage associated with a column of the vertical access transistor 140 (e.g., via, for example, a source line associated with, for example, the source region 142). To access a particular vertical access transistor 140, a voltage (and a current) may be provided to the gate electrode 150 associated with the vertical access transistor 140. Responsive to a sufficient voltage (e.g., a voltage having a magnitude greater than a threshold voltage), a current may flow in the channel region 144 between the source region 142 and the drain region 146 through the vertical thin film transistor 140. Accordingly, the memory material in the capacitor region 108 may be accessed through the vertical thin film transistor 140 responsive to exposure of the gate electrode 150 to the threshold voltage.A pitch P between adjacent vertical thin film transistors 140 may be between about 10 nm and about 40 nm, such as between about 10 nm and about 15 nm, between about 15 nm and about 20 nm, between about 20 nm and about 25 nm, between about 25 nm and about 30 nm, or between about 30 nm and about 40 nm. The pitch P may be defined as a distance from one feature of a vertical thin film transistor 140 to a similar feature of an adjacent vertical thin film transistor 140. In some embodiments, the pitch P is equal to about 24 nm.A height H of the channel region 144 may be between about 40 nm and about 100 nm, such as between about 40 nm and about 60 nm, between about 60 nm and about 80 nm, or between about 80 nm and about 100 nm. In some embodiments, the height H is equal to about 80 nm.FIG. 2B is a cross-sectional view of a vertical thin film transistor 140 taken through section line B-B of FIG. 2A. Adjacent vertical thin film transistors 140 may be isolated from each other by an insulative material 154. The insulative material 154 may comprise a dielectric material, such as silicon dioxide, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, titanium dioxide, zirconium dioxide, hafnium dioxide, tantalum oxide, magnesium oxide, aluminum oxide, silicon nitride, silicon oxynitride, amorphous carbon, or a combination thereof. In some embodiments, the insulative material 154 comprises silicon dioxide. A width W of the vertical thin film transistor 140 may be between about 5 nm and about 40 nm, such as between about 5 nm and about 10 nm, between about 10 nm and about 15 nm, between about 15 nm and about 20 nm, between about 20 nm and about 30 nm, or between about 30 nm and about 40 nm. In some embodiments, the width W may be equal to about 12 nm. In other embodiments, the width W may be equal to about 26 nm.A distance D between adjacent vertical thin film transistors 140 in the cross-section of FIG. 2B may be between about 20 nm and about 100 nm, such as between about 20 nm and about 30 nm, between about 30 nm and about 40 nm, between about 40 nm and about 50 nm, between about 50 nm and about 75 nm, or between about 75 nm and bout 100 nm. In some embodiments, the distance D may be equal to about 36 nm. 
In other embodiments, the distance D may be between about 70 nm and about 80 nm, such as about 73 nm. In some embodiments, sidewalls 152 of the channel material 144 may be oriented at an angle of about 90° with respect to a major surface of the substrate 101 (FIG. 1). Stated another way, the sidewalls 152 may be oriented substantially perpendicularly to the major surface of the substrate 101 and may not exhibit substantial sloping. In some embodiments, the angle of the sidewalls 152 with respect to the major surface of the substrate 101 may be greater than about 88°, greater than about 89°, or may be substantially perpendicular thereto. The angle may be between about 80° and about 90°, such as between about 80° and about 85°, or between about 85° and about 90°. The angle may be between about 88° and about 89°, or between about 89° and about 90°. In some embodiments, the angle is greater than about 82.8°. As will be described with reference to FIG. 3A through FIG. 3K, the vertical thin film transistors 140 may be formed in an array having a greater packing density (e.g., number of vertical thin film transistors 140 per unit area) than conventional semiconductor devices, and the angle between the sidewalls 152 and the major surface of the substrate 101 may be achieved by forming the channel material 144 using a dry etch process comprising hydrogen bromide. FIG. 3A through FIG. 3K illustrate a method of forming the vertical thin film transistors 140 (FIG. 2A, FIG. 2B). With reference to FIG. 3A, a source material 304 may be formed over a substrate 302. The substrate 302 may include, for example, components of the logic circuitry region 102 described above with reference to FIG. 1. By way of nonlimiting example, the substrate 302 may include the conductive interconnects 106 (FIG. 1) that may be positioned and configured to make contact with the source material 304. The source material 304 may include any material described above with reference to the source region 142 (FIG. 2A, FIG. 2B). By way of nonlimiting example, the source material 304 may include a metal, a combination of metals, or regions of different metals. For example, and without limitation, the source material 304 may include titanium nitride, copper, tungsten, tungsten nitride, molybdenum, polysilicon, other conductive materials, or combinations thereof. Referring to FIG. 3B, the source material 304 may be patterned to form source regions (e.g., source lines) 306 of the source material 304 (FIG. 3A). The source regions 306 may be arranged in, for example, rows extending in a first direction (e.g., a y-direction, perpendicular to the x-direction and into and out of the page in the cross-section illustrated in FIG. 3B) over the substrate 302. The source regions 306 may be patterned by, for example, forming a mask over the source material 304 (FIG. 3A), forming a pattern in the mask, such as by photolithography, and removing portions of the source material 304 through the mask. After forming the source regions 306 of the source material, a dielectric material 308 may be formed over the source regions 306 and in regions between the source regions 306. Dielectric material 308 may be removed from portions over the source regions 306, such as by chemical mechanical polishing (CMP). 
The dielectric material 308 may include silicon dioxide, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, titanium dioxide, zirconium dioxide, hafnium dioxide, tantalum oxide, magnesium oxide, aluminum oxide, silicon nitride, silicon oxynitride, amorphous carbon, or a combination thereof. Referring to FIG. 3C, a channel material 310 may be formed over the dielectric material 308 and the source regions 306. The channel material 310 may include the same materials described above with reference to the channel material 144 of FIG. 2A. The channel material 310 may be formed by, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), low pressure chemical vapor deposition (LPCVD), plasma enhanced chemical vapor deposition (PECVD), physical vapor deposition (PVD), pulsed laser deposition (PLD), another deposition process, or combinations thereof. Referring to FIG. 3D, the channel material 310 may be patterned to form isolated lines of the channel material 310 over the source regions 306. In some embodiments, the channel material 310 may be patterned to form channel regions 312 directly over and in contact with the source regions 306. The channel regions 312 may be patterned such that adjacent portions of the channel regions 312 are separated from each other by a distance between about 10 nm and about 40 nm, such as between about 10 nm and about 15 nm, between about 15 nm and about 20 nm, between about 20 nm and about 25 nm, between about 25 nm and about 30 nm, or between about 30 nm and about 40 nm. In some embodiments, adjacent portions of the channel regions 312 may be separated from each other by a distance of about 24 nm. The channel material 310 may be patterned by dry etching the channel material 310 to remove portions of the channel material 310 and form the channel regions 312 with a desired pitch, width, and spacing. In some embodiments, a mask comprising, for example, a carbon material (e.g., amorphous carbon) is formed in a desired pattern over the channel material 310. Portions of the channel material 310 may be removed through the mask with an anisotropic etch, such as an anisotropic dry etch. By way of nonlimiting example, the channel regions 312 may be patterned by reactive ion etching (RIE), plasma etching, another dry etching method, etc. Suitable etchant gases may include hydrogen bromide (HBr), one or more alkanes or alkenes (e.g., CH4, C2H4, etc.), and one or more halogen-based etchants, such as tetrafluoromethane (CF4), octafluoropropane (C3F8), octafluorocyclobutane (C4F8), hexafluorobutadiene (C4F6), octafluorocyclopentene (C5F8), fluoroform (CHF3), difluoromethane (CH2F2), sulfur hexafluoride (SF6), nitrogen trifluoride (NF3), chlorine trifluoride (ClF3), chlorine (Cl2), boron trichloride (BCl3), and trifluoroiodomethane (CF3I). The etchant gas may further include at least one carrier, such as nitrogen, argon, helium, oxygen, or combinations thereof. In some embodiments, the etchant gas comprises a hydrogen bromide-containing gas. The hydrogen bromide-containing gas may include hydrogen bromide, a carrier gas, and one or more of methane (CH4), hydrogen (H2), oxygen (O2), and nitrogen trifluoride. In some embodiments, the etchant gas comprises hydrogen bromide, argon, methane, and hydrogen. 
In some such embodiments, the etchant gas may comprise between about 0.1 part and about 5.0 parts methane for about every 1.0 part hydrogen bromide, such as between about 0.1 part and about 0.5 part, between about 0.5 part and about 1.0 part, between about 1.0 part and about 2.0 parts, or between about 2.0 parts and about 5.0 parts methane for about every 1.0 part hydrogen bromide. In some embodiments, the etchant gas comprises about 1.0 part hydrogen bromide for about every 1.0 part methane. In other embodiments, the etchant gas comprises about 1.0 part hydrogen bromide for about every 2.0 parts methane. In yet other embodiments, the etchant gas comprises about 1.0 part hydrogen bromide for about every 0.30 part methane.The etchant gas may comprise between about 0.1 part and about 5.0 parts hydrogen for about every 1.0 part hydrogen bromide, such as between about 0.1 part and about 0.5 part, between about 0.5 part and about 1.0 part, between about 1.0 part and about 2.0 parts, or between about 2.0 parts and about 5.0 parts hydrogen for about every 1.0 part hydrogen bromide. In some embodiments, the etchant gas comprises about 1.0 part hydrogen bromide for about every 1.0 part hydrogen.The etchant gas may comprise between about 0.1 part and about 5.0 parts carrier gas (e.g., argon) for about every 1.0 part hydrogen bromide, such as between about 0.1 part and about 0.5 part, between about 0.5 part and about 1.0 part, between about 1.0 part and about 2.0 parts, or between about 2.0 parts and about 5.0 parts of the carrier gas for about every 1.0 part hydrogen bromide. In some embodiments, the etchant gas comprises about 1.0 part hydrogen bromide for about every 0.5 part of the carrier gas. In other embodiments, the etchant gas comprises about 4.0 parts hydrogen bromide for about every 1.0 part of the carrier gas, such as where the carrier gas comprises helium.The etchant gas may comprise between about 0.1 part and about 10.0 parts hydrogen bromide for about every 1.0 part oxygen, such as between about 0.1 part and about 0.5 part, between about 0.5 part and about 1.0 part, between about 1.0 part and about 2.0 parts, between about 2.0 parts and about 5.0 parts, or between about 5.0 parts and about 10.0 parts of the hydrogen bromide for about every 1.0 part oxygen. In some embodiments, the etchant gas comprises about 1.0 part oxygen for about every 5.0 parts hydrogen bromide.In some embodiments, the etchant gas comprises or consists essentially of hydrogen bromide. In other embodiments, the etchant gas comprises or consists essentially of hydrogen bromide and a carrier gas (e.g., nitrogen, argon, helium, oxygen, and combinations thereof). In some embodiments, the etchant gas comprises or consists essentially of hydrogen bromide and argon.In yet other embodiments, the etchant gas comprises or consists essentially of hydrogen bromide, an alkane (e.g., methane), and a carrier gas. In some embodiments, the etchant gas comprises hydrogen bromide, methane, hydrogen, and argon. In some such embodiments, the etchant gas may include a ratio of hydrogenbromide: methane:hydrogen: argon of about 60:60:60:32. In some embodiments, the etchant gas comprises hydrogen bromide, methane, helium, and oxygen. In some such embodiments, the etchant gas comprises a ratio of hydrogen bromide:methane:helium: oxygen of about 50: 100: 12: 10.In further embodiments, the etchant gas comprises hydrogen bromide, argon, hydrogen, methane, and nitrogen trifluoride. 
In some such embodiments, the etchant gas comprises a ratio of hydrogen bromide:argon:hydrogen:methane:nitrogen trifluoride of about 200:200:200:60:20.A bias voltage may be applied during the patterning process. In some embodiments, the bias voltage is between about 200 V and about 2,500 V, such as between about 200 V and about 400 V, between about 400 V and about 600 V, between about 600 V, and about 800 V, between about 800 V and about 1,000 V, between about 1,000 V and about 1,250 V, between about 1,250 V and about 1,500 V, between about 1,500 V and about 2,000 V, or between about 2,000 V and about 2,500 V. In some embodiments, the bias voltage may be pulsed. In some embodiments, the bias voltage may be greater than about 500 V, greater than about 1,000 V, greater than about 1,500 V, or greater than about 2,000 V.During the patterning process, a source radiofrequency (RF) power may be between about 150 W and about 1,500 W, such as between about 150 W and about 250 W, between about 250 W and about 500 W, between about 500 W and about 1,000 W, or between about 1,000 W and about 1,500 W.During the patterning process, a pressure of the etch chamber may be between about1.0 mtorr and about 10.0 mtorr, such as between about 1.0 mtorr and about 2.0 mtorr, between about 2.0 mtorr and about 5.0 mtorr, between about 5.0 mtorr and about 8.0 mtorr, or between about 8.0 mtorr and about 10.0 mtorr.During the patterning process, a temperature of the etch chamber may be greater than a volatilization temperature of etch byproducts, such as Zn(CH3)2. In some suchembodiments, the temperature may be greater than about 46°C, such as greater than about 50°C. The temperature may be between about 20°C and about 250°C, such as between about 20°C and about 50°C, between about 50°C and about 100°C, between about 100°C and about 150°C, between about 150°C and about 200°C, or between about 200°C and about 250°C.In some embodiments, the channel regions 312 may be patterned by exposing the channel material 310 to a mixture of the etchant gas in a so-called "one-step" etch. In other embodiments, the channel material 310 may be exposed to different compositions of the etchant gas. For example, the channel material 310 may be exposed to alternating etch compositions including a first composition comprising hydrogen bromide and a carrier gas and a second composition including one or more gases configured to reduce or prevent polymer formation. In some such embodiments, a first composition comprising hydrogen bromide, methane, and a carrier gas may be cycled to remove the channel material 310. After exposing the channel material 310 to the first composition, the channel material 310 may be exposed to a second gas composition comprising a carrier gas and one or more of hydrogen, oxygen, and nitrogen trifluoride. In some embodiments, a bias voltage may not be applied while exposing the channel material 310 to the second gas composition. Patteming the channel material 310 may include performing multiple cycles of exposure to the first gas composition, followed by exposure to the second gas composition.Without wishing to be bound by any particular theory, it is believed that the combination of methane, hydrogen bromide, hydrogen, and nitrogen trifluoride facilitates patterning of the channel material 310 and maintaining a critical dimension of the channel regions 312. It is believed that the methane etches the channel material 310 and preserves the mask material (e.g., carbon). 
Hydrogen may also etch the channel material 310 and may reduce or substantially prevent an amount of polymer that may be formed by the methane etchant. In addition, nitrogen trifluoride reduces an amount of polymer that may be formed by the methane. It is believed that applying a high bias voltage (e.g., a bias voltage greater than about 400 V) with pulsing may facilitate formation of substantially vertical sidewalls and may substantially reduce or eliminate formation of polymer byproducts that often accompany alkane-based etchants. Patterning the channel regions 312 with the hydrogen bromide-containing etchant may facilitate formation of the channel regions 312 with a higher packing density than that formed in conventional semiconductor devices. Surprisingly, patterning the channel regions 312 with the hydrogen bromide-containing gas forms the channel regions 312 with substantially vertical sidewalls (e.g., sidewalls having an angle between, for example, about 80° and about 90° with respect to the major surface of the substrate 302). Since the channel regions 312 are formed with substantially vertical sidewalls with the hydrogen bromide-containing etchant, a packing density of the channel regions 312 may be increased. The sidewalls of the channel regions 312 may exhibit a smooth and clean surface. By way of comparison, oxide semiconductors etched with other etchants such as boron trichloride (BCl3) often exhibit a relatively high surface roughness, and the etchant is unable to achieve desired packing densities in some semiconductor structures. Etch chemistries that are formed primarily of an alkane may leave a residue on surfaces of the channel regions 312. In some embodiments, such etchants may undesirably alter chemical and electrical properties of the channel region 312. Surprisingly, patterning the channel region 312 with the hydrogen bromide-containing gas including an alkane does not form a residue on surfaces of the channel regions 312. Referring to FIG. 3E, a gate dielectric material 314 may be formed conformally over the semiconductor structure 300, such as over the channel regions 312 and in spaces between the channel regions 312. The gate dielectric material 314 may include any of the gate dielectric materials 148 described above with reference to FIG. 2A and FIG. 2B. In some embodiments, the gate dielectric material 314 comprises an oxide material. The gate dielectric material 314 may be formed by ALD, CVD, LPCVD, PECVD, PVD, another method, or combinations thereof. After forming the gate dielectric material 314, a gate electrode material 316 may be formed adjacent to (e.g., over) the gate dielectric material 314. The gate electrode material 316 may be formed over the gate dielectric material 314. The gate electrode material 316 may include any materials of the gate electrode 150 described above with reference to FIG. 2A and FIG. 2B. In some embodiments, the gate electrode material 316 comprises titanium nitride. The gate electrode material 316 may be formed by ALD, CVD, LPCVD, PECVD, PVD, another method, or combinations thereof. Referring to FIG. 3F, after forming the gate dielectric material 314 and the gate electrode material 316, the gate dielectric material 314 and the gate electrode material 316 may be patterned. In some embodiments, the gate electrode material 316 may be removed from laterally extending surfaces (e.g., surfaces extending substantially parallel with the major surface of the substrate 302). 
The gate electrode material 316 may be removed by methods such as dry etching, wet etching, or a combination thereof. In some embodiments, the gate electrode material 316 is removed by exposing the gate electrode material 316 to a dry etch, such as by reactive ion etching. The gate electrode material 316 may remain on the gate dielectric material 314 on sidewalls of the channel regions 312. In some embodiments, the gate electrode material 316 may extend to a surface of the dielectric material 308 and may be electrically isolated from the source region 306.After patterning the gate electrode material 316, the gate dielectric material 314 may be patterned. Portions of the gate dielectric material 314 may be removed to form the gate dielectric material 314 extending from the dielectric material 308 on sidewalls of the channel material 312 to an upper surface of the channel material 312. The gate dielectric material 314 may electrically isolate the channel region 312 from the gate electrode material 316. A portion of the channel material 312 may remain exposed through the gate dielectric material 312.Referring to FIG. 3G, a conductive line 318 may be formed in electricalcommunication with the gate electrodes 316. In some embodiments, the conductive line 318 comprises a conductive word line and is substantially the same as the conductive word line 160 described above with reference to FIG. 2A. The conductive line 318 may include a conductive material such as polysilicon, conductively doped silicon, tungsten, tungsten nitride, nickel, tantalum, tantalum nitride, tantalum silicide, platinum, copper, silver, gold, aluminum, molybdenum, titanium, titanium nitride, titanium silicide, titanium aluminum nitride, molybdenum nitride, iridium, iridium oxide, ruthenium, ruthenium oxide, and combinations thereof. In some embodiments, the conductive line 318 includes titanium nitride.FIG. 3H is a cross-sectional view of the semiconductor structure 300 taken along section line H-H of FIG. 3G. The semiconductor structure 300 may include a stack of the source regions 306 and the channel material 310. Referring to FIG. 31, portions of the channel regions 312 may be removed in a second direction to pattern the channel regions 312 in a second direction (e.g., in the y-direction). The channel region 312 may be patterned as described above with reference to FIG. 3D. By way of nonlimiting example, the channel material 310 may be exposed to a dry etchant comprising a hydrogen bromide-containing gas.Referring to FIG. 3J, an insulative material 320 may be formed and patterned over the semiconductor structure 300. The insulative material 320 may be patterned to fill spaces between adjacent channel regions 312. The insulative material 320 may include silicon dioxide, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, titanium dioxide, zirconium dioxide, hafnium dioxide, tantalum oxide, magnesium oxide, aluminum oxide, silicon nitride, silicon oxynitride, amorphous carbon, or acombination thereof.With reference to FIG. 3K, a drain material may be formed over the semiconductor structure 300 to form drain regions 322 over and in direct contact with the channel regions 312 and form vertical thin film transistors 324. The drain material may be patterned to form vertical thin film transistor as described above with reference to FIG. 2A and FIG. 2B. 
The drain regions 322 may be formed by forming a conductive material over the semiconductor structure 300, such as over the channel regions 312 and in regions between adjacent channel regions 312. The drain regions 322 may be patterned over the channel regions 312. The drain regions 320 may be patterned by, for example, a wet etch, a dry etch, or a combination thereof. By way of nonlimiting example, portions of the drain region 322 to be removed may be exposed to an etch solution through a mask to remove exposed portions thereof. In other embodiments, portions of the drain region 322 to be removed may be exposed to a reactive ion etching process to remove such portions. With reference to FIG. 3K and FIG. 1, the drain regions 322 may be configured to be in electrical communication with, for example, the first electrode 132 of the capacitor structure 130. Accordingly, the vertical thin film transistors 324 may be in electrical communication with the capacitor region 108 through the drain regions 322.After forming the drain regions 322, additional processing may be performed to form a complete semiconductor structure, such as the semiconductor structure 100 described above with reference to FIG. 1. By way of nonlimiting example, the capacitor structures 130(FIG. 1) may be formed over the drain regions 322 to electrically connect each vertical thin film transistor 324 with a capacitor structure 130. The interconnect region 110 (FIG. 1) may be formed by methods understood by those of ordinary skill in the art.The resulting semiconductor structure may include an array of vertical thin film transistors including an oxide semiconductor channel material. The array of vertical thin film transistors according to embodiments of the disclosure may be more closely packed than in conventional semiconductor structures, may have substantially vertical sidewalls, and may be substantially free of etch residue that is common in conventional semiconductor structures.Accordingly, in some embodiments, a method of forming a semiconductor structure comprises forming an array of vertical thin film transistors. Forming the array of vertical thin film transistors comprises forming a source region, forming a channel material comprising an oxide semiconductor material over the source region, exposing the channel material to a dry etchant comprising hydrogen bromide to pattern the channel material into channel regions of adjacent vertical thin film transistor structures, forming a gate dielectric material on sidewalls of the channel regions, forming a gate electrode material over the gate dielectric material, and forming a drain region over the channel regions.Accordingly, in some embodiments, a method of forming a semiconductor structure comprises forming conductive source lines, patterning a channel material comprising an oxide semiconductor material over the conductive source lines to form rows of the channel material extending in a first direction, wherein patterning the channel material comprises exposing the channel material to a dry etchant comprising a hydrogen bromide-containing gas, forming a gate oxide on sidewalls of the rows of the channel material, forming a gate electrode over the gate oxide, and patterning the rows of the channel material to form isolated channel regions comprising vertical thin film transistors.Accordingly, in other embodiments, a method of forming an array of memory cells comprises forming an array of vertical thin film transistors. 
Forming the array of vertical thin film transistors comprises exposing a channel material comprising an oxide semiconductor material over a conductive source material to a dry etchant comprising hydrogen bromide to form rows of the channel material, forming a gate oxide on sidewalls of the rows of the channel material, forming a gate electrode over the gate oxide, exposing the rows of the channel material to the dry etchant to form isolated vertical thin film transistor structures, and forming a drain region over the channel material of each of the isolated vertical thin film transistors. The method further comprises forming a capacitor structure over and in contact with the drain region of each of the isolated vertical thin film transistors.While certain illustrative embodiments have been described in connection with the figures, those of ordinary skill in the art will recognize and appreciate that embodiments encompassed by the disclosure are not limited to those embodiments explicitly shown and described herein. Rather, many additions, deletions, and modifications to the embodiments described herein may be made without departing from the scope of embodimentsencompassed by the disclosure, such as those hereinafter claimed, including legal equivalents. In addition, features from one disclosed embodiment may be combined with features of another disclosed embodiment while still being encompassed within the scope of the disclosure. |
A system and method for throttling a slave component of a computing system to reduce an overall temperature of the computing system upon receiving a first signal is disclosed. The first signal may be from a master component indicating that a temperature for the master component has exceeded its threshold temperature. The slave component or the master component may be a central processing unit, a graphics memory and controller hub, or a central processing unit memory controller hub. The slave component may send a second signal to indicate that a temperature for the slave component has exceeded its threshold temperature. The master component may then initiate throttling of the master component to reduce the overall temperature of the computing system. The master component may be throttled to a degree less than the slave component. A first component may be designated the master component and a second component may be designated the slave component based on a selection policy. The selection policy may be received from a user through a graphical user interface. The selection policy may be based on an action being performed by the computing system. 
What is claimed is: 1. A method comprising: receiving in a slave component a first signal from a master component indicating that a temperature for the master component has exceeded a master threshold temperature; and throttling the slave component to reduce an overall temperature of the computing system. 2. The method of claim 1, further comprising: sending to the master component a second signal from the slave component indicating that a temperature for the slave component has exceeded a slave threshold temperature to initiate throttling of the master component to reduce the overall temperature of the computing system. 3. The method of claim 2, further comprising throttling the master component to a degree less than the slave component. 4. The method of claim 1, further comprising: selecting a first component to be the master component based on a selection policy; and selecting a second component to be the slave component based on the selection policy. 5. The method of claim 4, further comprising allowing a user to create the selection policy. 6. The method of claim 5, wherein the selection policy is determined by an action being performed by the computing system. 7. A set of instructions residing in a storage medium, said set of instructions to be executed by a processor to implement a method for processing data, the method comprising: receiving at a pin of a slave component a first signal from a master component indicating that a temperature for the master component has exceeded a master threshold temperature; and throttling the slave component to reduce an overall temperature of the computing system. 8. The set of instructions of claim 7, further comprising: sending to the master component a second signal from the slave component indicating that a temperature for the slave component has exceeded a slave threshold temperature to initiate throttling of the master component to reduce the overall temperature of the computing system. 9. The set of instructions of claim 7, further comprising: selecting a first component to be the master component based on a selection policy; and selecting a second component to be the slave component based on the selection policy. 10. The set of instructions of claim 9, wherein the selection policy is determined by an action being performed by the computing system. 11. A slave component of a computing system comprising: a slave throttling hardware to throttle the slave component; and a throttling control logic to activate the slave throttling hardware of the slave component upon receiving a first signal from a master component that shares a cooling system with the slave component, the signal indicating that a temperature for the master component has exceeded a master threshold temperature. 12. The slave component of claim 11, wherein the master component and the slave component are each one of a central processing unit, a graphics memory and controller hub, or a central processing unit memory controller hub. 13. The slave component of claim 11, further comprising a first slave pin to receive the first signal. 14. The slave component of claim 13, further comprising: a thermal sensor to read a temperature for the slave component; and a second slave pin to send a second signal from the slave component indicating that a temperature for the slave component has exceeded a slave threshold temperature to induce throttling of the master component. 15. 
A computing system comprising: a first component including: a first component throttling hardware to throttle the first component; and a first throttling control logic to activate the first throttling hardware of the first component upon receiving a first signal; a second component including: a second component thermal sensor to sense a temperature of the second component; and a second throttling control logic to send the first signal indicating that the temperature of the second component has exceeded a second threshold temperature; and a shared cooling solution to cool the first component and the second component. 16. The computing system of claim 15, wherein the first component and the second component are each one of a central processing unit, a graphics memory and controller hub, or a central processing unit memory controller hub. 17. The computing system of claim 15, wherein the first component further includes a first component thermal sensor to read a temperature for the slave component; the second component further includes a second component throttling hardware to throttle the second component; and the first throttling control logic sends a second signal to the second throttling control logic to throttle the second component. 18. The computing system of claim 17, wherein the first throttling control logic and the second throttling control logic select the first component to be a master component and the second component to be a slave component based on a selection policy. 19. The computing system of claim 18, further comprising a graphical user interface to allow a user to create the selection policy. 20. The computing system of claim 18, wherein the selection policy is determined by an action being performed by the computing system. 21. The computing system of claim 17, wherein the first component further includes a first receiving pin to receive the first signal and a first transmitting pin to send the second signal; and the second component further includes a second receiving pin to receive the second signal and a second transmitting pin to send the first signal. 22. The computing system of claim 17, further comprising a front side bus coupling the first component to the second component to communicate the first signal and the second signal. |
METHOD AND APPARATUS FOR EXTERNAL PROCESSOR THERMAL CONTROLBackground of the Invention [0001] Embodiments of the invention pertain to cooling systems for computersystems. More particularly, embodiments of the invention pertain to throttling a component of a computer system based on a criterion.[0002] The movement of electrons within the electrical components of a computersystem causes a great deal of heat to be generated. Unless the heat is dissipated, it will accumulate, causing damage to the system. Such damage may include the warping of the electrical components and possible fire hazards.[0003] Currently, thermal sensors are attached to a die to read the actual temperature of the die hot spots. When the hot spot temperatures are exceeded on a particular die, that die reduces its temperature independently of the other die using some form of reduction in work per unit time, also called throttling. This throttling prevents a die from reaching its maximum working temperature and damaging the system. Throttling may be performed by clock gating and clock frequency reduction.[0004] The throttling may be triggered if the thermal sensors read a throttling threshold temperature up to some maximum tolerable temperature. To ensure safety, thismaximum temperature may be set well below a temperature that causes actual catastrophic damage.[0005] Usually, different components in a system, such as the central unit and the graphics memory and controller hub (GMCH), may share a cooling system for a more efficient design to the computer system. However, these different components often have different cooling needs. Brief Description of the Drawings[0006] Figure 1 illustrates one embodiment of a computing system according tothe present invention.[0007] Figure 2 illustrates in a diagram one embodiment of the shared cooling system according to the present invention.[0008] Figure 3 illustrates in a flow chart one method for throttling a component to reduce the temperature by using a PROCHOT pin according to an embodiment of the present invention.[0009] Figure 4 illustrates in a flow chart one method for throttling a component to reduce the temperature by using the FSB according to an embodiment of the present invention.[0010] Figure 5 illustrates in a flow chart one method for using a selection policy in throttling a component to reduce the temperature according to an embodiment of the present invention.[0011] Figure 6 illustrates in a flow chart one of a method for using an action- based selection policy according to an embodiment of the present invention. Detailed Description[0012] A system and method for throttling a slave component of a computer system to reduce an overall temperature of the computing system upon receiving a firstsignal is disclosed. The first signal may be from a master component indicating that atemperature for the master component has exceeded its threshold temperature. The slave component or the master component may be a central processing unit (CPU), a graphics memory and controller hub (GMCH), or a CPU memory controller hub. The slavecomponent may send a second signal to indicate that a temperature for the slave component has exceeded its temperature. The master component may then initiate throttling of the master component to reduce the overall temperature of the computing system. The master component may be throttled to a degree less than the slave component. 
A first component may be designated the master component and the second component may be designated the slave component based on a selection policy. Theselection policy may be received from a user through a graphical user interface. The selection policy may be based on an action being performed by the computing system.[0013] Embodiments of the present invention also relate to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, compact disk-read only memories (CD- ROMs), and magnetic-optical disks, read-only memories (ROMs), random accessmemories (RAMs), erasable programmable read only memories (EPROMs), electronically erasable programmable read only memories (EEPROMs),magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. Instructions are executable using one or moredevices (e.g., central processing units, etc.). In other embodiments, steps of the present invention might be performed by specific hardware components that contain reconfigurable or hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components. [0014] Figure 1 illustrates one embodiment of a computing system 100 according to the present invention. A first component, such as a CPU 110, may be coupled to a second component, such as a GMCH 120, by a front side bus (FSB) 130. While thisdescription will refer specifically to a CPU and a GMCH, it is to be understood that other components may also be used. For example, the component may also be a CPU memory controller hub. The CPU 110 and the GMCH 120 share a cooling system 140. This cooling system 140 may take one of any number of forms known in the art, such as air circulation units, heat exchangers, or other methods. While the cooling system 140 should be able to handle the sum of the thermal design power (TDP) of both the CPU 110 and the GMCH 120 in most computing systems, in some computing systems this is not the case for various reasons. The TDP for a component is defined as the steady state power for which a thermal solution for that component should be designed so that the componentwill not exceed any reliability temperature threshold, and is generally quoted at a specific ambient temperature. The maximum power for the CPU 110 and GMCH 120 may be more than the TDP of each device. Since the maximum power is more than the TDP power, physical damage due to overheating may occur when operating beyond the TDP power for a sufficiently long time.[0015] The minimum residual GMCH thermal power budget is the power available to the GMCH 120 when the CPU 110 is at its maximum operating power in steady state.The minimum residual CPU thermal power budget is the power available to the CPU 1 10 when the GMCH 120 is at its maximum operating power in steady state.[0016] The CPU 110 has a microprocessor 111 to process software instructions.The CPU 110 may have a thermal sensor 112 to detect when the CPU 110 is getting too hot. The thermal sensor 112 may alert a CPU throttling arbiter 113, which may contain throttling control logic to control CPU throttling hardware 114. 
The throttling hardware 114 then reduces the amount of processing being performed by the microprocessor. For a computing system 100 that executes graphics, a graphics driver 115 may be used to interact with the GMCH 120 via the FSB 130. Messages may be transmitted via the FSB 130 using the inband message protocol 116. [0017] The GMCH 120 may have a graphics engine 121 to execute graphics processing. The GMCH 120 may have a thermal sensor 122 to detect when the GMCH 120 is getting too hot. The thermal sensor 122 may alert a GMCH throttling arbiter 123, which may contain throttling control logic to control GMCH throttling hardware 124. The throttling hardware 124 then reduces the amount of graphics execution being performed by the graphics engine 121. Messages may be transmitted via the FSB 130 using the inband message protocol 125. [0018] The CPU 110 may have a pin 150, such as a PROCHOT pin, which receives a signal from the GMCH 120. Upon receiving the signal, the CPU throttling arbiter 113 may cause the CPU throttling hardware 114 to throttle the microprocessor 111. Additionally, the GMCH 120 may also have a PROCHOT pin 160, which receives a signal from the CPU 110. Upon receiving the signal, the GMCH throttling arbiter 123 may cause the graphics throttling hardware 124 to throttle the graphics engine 121. [0019] Figure 2 illustrates in a simplified diagram one embodiment of the shared cooling system 140. A first junction 210 may couple the CPU 110 to a shared thermal solution 220. The first junction 210 has a heat capacity 212 and a thermal conductivity 214, and the shared thermal solution 220 has a heat capacity 222 and a thermal conductivity 224. A second junction 230 may couple the GMCH 120 to the shared thermal solution 220. The second junction 230 also has a heat capacity 232 and a thermal conductivity 234. The shared cooling system may reduce the entire system to the ambient temperature 240 of the surroundings. [0020] The heat capacity 222 and the thermal conductivity 224 of the shared thermal solution 220 create a heat reduction factor θsa. The heat capacity 212 and the thermal conductivity 214 of the first junction 210 create a heat reduction factor θjs1. The heat capacity 232 and the thermal conductivity 234 of the second junction 230 create a heat reduction factor θjs2. The temperature for the CPU 110 and the GMCH 120 may be governed by the equations: Tcpu = (Pcpu + Pgmch)*θsa + Ta + Pcpu*θjs1 and Tgmch = (Pcpu + Pgmch)*θsa + Ta + Pgmch*θjs2, where Pcpu is the power from the CPU 110, Pgmch is the power from the GMCH 120, and Ta is the ambient temperature 240. If the temperature of the CPU 110 is greater than its maximum allowed die junction temperature, then the temperature of the CPU 110 must be reduced. If the temperature of the GMCH 120 is greater than its maximum allowed die junction temperature, then the temperature of the GMCH 120 must be reduced. [0021] The temperatures of the CPU 110 and the GMCH 120 may be reduced in a number of ways. Figure 3 illustrates in a flow chart one embodiment of a method 300 for throttling a component to reduce the temperature by using a PROCHOT pin. The process starts (Block 302) when a first component, designated the slave component (SCOMP), receives a first signal via the first PROCHOT pin from a second component, designated the master component (MCOMP) (Block 304). SCOMP and MCOMP may be either the CPU 110 or the GMCH 120, depending on the circumstances. 
Further, the CPU 110 or the GMCH 120 may be a master component at one moment and a slave component at the next moment. Additionally, the master-slave relationship of the components need not extend past the cooling situation described herein. MCOMP is indicating with the first signal that the temperature of MCOMP (MCT) has exceeded the threshold temperature of MCOMP (MCTT). The throttling arbiter then has the throttling hardware throttle the performance of SCOMP (Block 306). SCOMP may also receive a temperature reading of SCOMP (SCT) from its thermal sensor (Block 308). If SCT is not greater than the threshold temperature of SCOMP (SCTT) (Block 310), then the process ends (Block 312). If SCT is greater than SCTT (Block 310), then a second signal may optionally be sent to the PROCHOT pin of MCOMP (Block 314), ending the process (Block 312). This second signal indicates to the throttling arbiter of MCOMP to throttle MCOMP. [0022] Figure 4 illustrates in a flow chart one embodiment of a method 400 for throttling a component to reduce the temperature by using the FSB 130. The process starts (Block 402) when SCOMP receives a first signal via the FSB from MCOMP (Block 404). Again, MCOMP is indicating with the first signal that MCT has exceeded MCTT. The throttling arbiter then has the throttling hardware throttle the performance of SCOMP (Block 406). SCOMP receives SCT from its thermal sensor (Block 408). If SCT is not greater than SCTT (Block 410), then the process ends (Block 412). If SCT is greater than SCTT (Block 410), then a second signal is sent to MCOMP via the FSB (Block 414), ending the process (Block 412). This second signal indicates to the throttling arbiter of MCOMP to throttle MCOMP. [0023] In a further embodiment, a selection policy may be used to designate which component is throttled. Figure 5 illustrates in a flow chart one embodiment of a method 500 for using a selection policy in throttling a component to reduce the temperature. The selection policy may be devised in a number of ways. In one embodiment, the process starts (Block 502) when the computing system 100 receives a selection policy from a user through a graphical user interface (GUI) or other method (Block 504). The selection policy may also be already present in the system or received by some other method. The throttling arbiter of a first component (COMP1) registers a first component temperature (CT1) received from the thermal sensor exceeding a first threshold temperature for that component (CTT1) (Block 506). The throttling arbiter refers to the selection policy (Block 508). If the selection policy indicates COMP1 is the slave component and should be throttled (Block 510), then the throttling arbiter has the throttling hardware throttle COMP1 (Block 512). At the same time, a second component (COMP2) receives a second component temperature (CT2) from its thermal sensor. If CT2 is not greater than the second component threshold temperature (CTT2) at this point (Block 514), the process ends (Block 516). If CT2 is still greater than CTT2 (Block 514), then the throttling arbiter of COMP2 has the throttling hardware of COMP2 throttle COMP2 (Block 518), ending the process (Block 516). If the selection policy indicates COMP2 is the slave component and should be throttled (Block 510), then the throttling arbiter of COMP2 has the throttling hardware of COMP2 throttle COMP2 (Block 520). The throttling arbiters of COMP1 and COMP2 may communicate using the methods described in Figure 3 and Figure 4. 
If CT1 is not greater than CTT1 at this point (Block 522), the process ends (Block 516). If CT1 is still greater than CTT1 (Block 522), then the throttling arbiter of COMP1 has the throttling hardware of COMP1 throttle COMP1 (Block 524), ending the process (Block 516). The second throttling may be to a lesser degree than the first throttling. [0024] In a further embodiment, a selection policy may be based on the actions being performed by the computing system at that time. Figure 6 illustrates in a flow chart one embodiment of a method 600 for using an action-based selection policy. In one embodiment, the process starts (Block 602) when the throttling arbiter of COMP1 registers a temperature received from the thermal sensor exceeding CTT1 (Block 604). The throttling arbiter refers to the selection policy (Block 606). If a processing intensive action is being performed (Block 608), then the GMCH throttling arbiter 123 has the graphics throttling hardware 124 throttle the graphics engine 121 (Block 610), ending the process (Block 612). If a graphics intensive action is being performed (Block 608), then the CPU throttling arbiter 113 has the CPU throttling hardware 114 throttle the microprocessor 111 (Block 610), ending the process (Block 612). [0025] In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention can be practiced without these specific details. 
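The shared-heatsink temperature relationships of paragraph [0020] and the master/slave flows of Figure 3 through Figure 6 can be summarized in a short software model. The sketch below is only an illustrative approximation: the power numbers, thermal resistances, thresholds, and throttling factors are hypothetical, and the disclosure implements this behavior in hardware throttling arbiters signaling over a PROCHOT pin or the front side bus rather than in software.

```python
# Minimal illustrative model of the shared-cooling throttling scheme above.
# All numeric values and the simple selection policy are assumptions; the
# disclosure realizes this behavior in hardware throttling arbiters that
# signal over a PROCHOT pin or the front side bus, not in software.

from dataclasses import dataclass


@dataclass
class Component:
    name: str             # e.g., "CPU" or "GMCH"
    power_w: float        # current power dissipation
    theta_js: float       # junction-to-heatsink thermal resistance (C/W)
    t_threshold_c: float  # throttling threshold temperature
    throttled: bool = False


def junction_temp_c(comp, other, theta_sa, t_ambient_c):
    """T = (Pcpu + Pgmch)*theta_sa + Ta + P*theta_js, per paragraph [0020]."""
    shared_rise = (comp.power_w + other.power_w) * theta_sa
    return shared_rise + t_ambient_c + comp.power_w * comp.theta_js


def arbitrate(first, second, theta_sa, t_ambient_c, slave_name):
    """Throttle the policy-selected slave first; throttle the master too,
    to a lesser degree, only if it is still over its own threshold."""
    t_first = junction_temp_c(first, second, theta_sa, t_ambient_c)
    t_second = junction_temp_c(second, first, theta_sa, t_ambient_c)
    if t_first <= first.t_threshold_c and t_second <= second.t_threshold_c:
        return  # nothing exceeded its threshold; no throttling signal
    slave, master = (first, second) if first.name == slave_name else (second, first)
    slave.throttled = True
    slave.power_w *= 0.6        # illustrative reduction for the slave
    if junction_temp_c(master, slave, theta_sa, t_ambient_c) > master.t_threshold_c:
        master.throttled = True
        master.power_w *= 0.85  # lesser degree than the slave


if __name__ == "__main__":
    cpu = Component("CPU", power_w=35.0, theta_js=0.8, t_threshold_c=100.0)
    gmch = Component("GMCH", power_w=12.0, theta_js=1.2, t_threshold_c=95.0)
    # Action-based selection policy: a processing-intensive workload selects
    # the GMCH as the slave; a graphics-intensive one would select the CPU.
    workload = "processing"   # assumed workload type
    slave = "GMCH" if workload == "processing" else "CPU"
    arbitrate(cpu, gmch, theta_sa=0.9, t_ambient_c=45.0, slave_name=slave)
    print(cpu)
    print(gmch)
```

Consistent with the description above, the policy-selected slave is throttled first, and the master is throttled only if it remains above its threshold, and then to a lesser degree.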
A low-profile passive-on-package is provided that includes a plurality of recesses that receive corresponding interconnects. Because of the receipt of the interconnects in the recesses, the passive-on-package has a height that is less than a sum of a thickness for the substrate and an interconnect height or diameter. |
1. A device that includes: a substrate; a first recess on a first surface of the substrate; a plurality of first through substrate vias extending through the substrate; a first interconnect, wherein the first interconnect is received by the first recess; and a redistribution layer on the first surface of the substrate, wherein the redistribution layer is configured to electrically couple the first interconnect to a corresponding first through substrate via of the first through substrate vias. 2. The device of claim 1 further comprising: a second recess on the first surface of the substrate; and a second through substrate via extending through the substrate from the second recess. 3. The device of claim 2 further comprising a capacitor adjacent the opposite second surface of the substrate, wherein the second through substrate via is electrically coupled to the capacitor. 4. The device of claim 1 further comprising an embedded inductor, wherein said embedded inductor includes at least two of said first through substrate vias. 5. The device of claim 4 wherein said embedded inductor comprises a plurality of embedded inductors. 6. The device of claim 5 wherein each of the embedded inductors includes two first through substrate vias electrically coupled together by conductors adjacent the opposing second surface of the substrate. 7. The device of claim 1 wherein the substrate comprises a glass substrate, and wherein the first interconnect comprises a solder ball. 8. The device of claim 1 wherein the substrate comprises a semiconductor substrate, and wherein the first interconnect comprises a metal post. 9. The device of claim 1 wherein the substrate comprises an organic substrate, and wherein the first interconnect comprises a solder ball. 10. The device of claim 1 further comprising: a second recess on the first surface of the substrate; and a second interconnect received by the second recess, wherein the first and second interconnects comprise solder balls, and wherein the second solder ball has only a mechanical function for securing the device to a circuit board. 11. A method comprising: forming a first recess on a first surface of a substrate; forming a plurality of first through substrate vias extending through the substrate; forming a redistribution layer adjacent to the first surface of the substrate; and coupling a first interconnect into the first recess, wherein forming the redistribution layer forms a conductor that couples the first interconnect to a corresponding first through substrate via of the first through substrate vias. 12. The method of claim 11 wherein forming the redistribution layer comprises patterning a metal layer on the first surface. 13. The method of claim 12 wherein patterning the metal layer further comprises patterning a pad in the first recess. 14. The method of claim 13 wherein patterning the metal layer comprises patterning a copper metal layer. 15. The method of claim 11 wherein forming the first recess further comprises forming a second recess on the first surface of the substrate, the method further comprising: forming a second through substrate via extending from the second recess through the substrate. 16. The method of claim 15 wherein forming the second recess comprises forming a plurality of second recesses, and wherein forming the second through substrate via comprises forming a plurality of second through substrate vias, each of the second through substrate vias extending through the substrate from a corresponding second recess. 17. The method of claim 15 further comprising 
forming at least one capacitor coupled to said second through substrate via on said opposite second surface of said substrate. 18. The method of claim 14 further comprising depositing a passivation layer on the first surface of the substrate and on the opposite second surface of the substrate. 19. The method of claim 14 wherein forming the recess comprises etching a first side of the glass substrate. 20. The method of claim 14 wherein attaching the interconnect to each recess comprises dropping solder balls into each recess. 21. A device that includes: a substrate; an embedded inductor extending through the substrate; a first recess on a first surface of the substrate; a second recess on the first surface of the substrate; a first interconnect that is received by the first recess; a second interconnect received by the second recess; and means for electrically coupling the first interconnect to the embedded inductor and for electrically coupling the second interconnect to the embedded inductor. 22. The device of claim 21 wherein said first interconnect and said second interconnect comprise solder balls. 23. The device of claim 21 wherein said substrate comprises a glass substrate having a thickness of at least 100 microns. 24. The device of claim 23 wherein said glass substrate has a thickness of at least 150 microns. 25. The device of claim 21 wherein said device comprises at least one patterned metal layer. 26. A package that includes: a substrate having a first side separated by a thickness of the substrate from an opposite second side; a plurality of recesses on the first side of the substrate; a plurality of solder balls corresponding to the plurality of recesses, each solder ball having a solder ball diameter, and each recess accommodating a corresponding solder ball such that a package height of the package is smaller than a sum of the substrate thickness and the solder ball diameter; a plurality of through substrate vias extending from the first side, each through substrate via having a length substantially equal to the thickness of the substrate; and a redistribution layer configured to electrically couple certain ones of the solder balls to corresponding ones of the through substrate vias. 27. The package of claim 26, wherein the package is incorporated into at least one of: a cellular phone, a laptop device, a tablet device, a music player, a communication device, a computer, and a video player. 28. The package of claim 26 wherein said substrate is a glass substrate. 29. The package of claim 26, further comprising an embedded inductor, wherein said embedded inductor comprises a through substrate via pair. 30. The package of claim 26 wherein said substrate is a semiconductor substrate. 
Low profile package with passive componentsCross-reference to related applicationsThe present application claims priority to the filing date of U.S. Provisional Patent Application Serial No. 61/941, 308, filed on Feb. The priority of the day, the entire contents of these two applications are hereby incorporated by reference.Technical fieldThis application relates to integrated circuit package substrates, and more particularly to low profile packages having passive components.backgroundIn a glass overlying passive device (PoG) package, passive components such as inductors and capacitors are integrated on a glass substrate. The PoG package can then be coupled to the circuit board along with the semiconductor package to form a complete working device, such as a radio frequency (RF) front end. The PoG package is much more compact to use than conventional coupling of discrete passive components to a circuit board. In addition, PoG packaging is less expensive than integrating a passive device into a die containing an active device of an electronic system because the glass substrate is relatively inexpensive compared to a crystalline semiconductor substrate.While PoG packaging is thus an attractive alternative to providing passive components for electronic systems, PoG design faces multiple challenges. In particular, there is an ever-increasing need to reduce the size of electronic devices incorporated into mobile devices. Since users require more compact devices, the electronics contained within these devices must be correspondingly reduced in size. One of the dimensions that must be reduced for a PoG package is its height relative to the underlying board. A simple and straightforward way to reduce the height of a PoG package is to reduce the thickness of its glass substrate. But glass is inherently brittle. If the thickness of the glass substrate is excessively reduced (such as less than 150 or 100 microns), the glass substrate is thus prone to chipping. This problem is not solved if the passive components are instead integrated on a semiconductor substrate because such substrates are also brittle and can become too fragile if over-thinned. Since such problems are largely the same regardless of the type of substrate used to support the passive components, the term "passive-on-package" is used herein to indicate that the inclusion is integrated. Packaging of passive components on glass, semiconductor or organic substrates.Another problem with reducing the thickness of the glass substrate is the inductance of the embedded inductor formed in the glass substrate by the through-substrate via. The coil or loop of each embedded inductor is formed by a pair (or more) through substrate vias. For example, a first through substrate via in the embedded inductor can extend from a first surface of the substrate to a lead or conductor formed on a second opposing surface of the substrate. The conductor is also coupled to a second through substrate via in the embedded inductor, the second through substrate extending back from the second surface to the first surface. Current drawn from the first surface into the first through substrate via will thus flow through the conductor on the second surface and back down to the first surface in the second through substrate via. This current loop provides the inductance of the resulting embedded inductor. This inductance depends on the area covered by the current loop (as well as other factors). 
If the length of the through-substrate via is reduced by thinning the substrate, the inductance of the resulting embedded inductor will also be reduced. Since the thickness of the substrate is reduced, the height or length of the through-substrate through-hole of the substrate reduced by such thickness is of course also correspondingly reduced. For example, a 200 micron thick substrate can have through-substrate vias that extend through this thickness and thus also have a corresponding 200 micron length. However, if the substrate is only 100 microns thick, the through substrate vias will also have a length of only 100 microns. Reducing the package height of the PoG package will therefore tend to reduce the inductance of its inductor. The necessary inductance is thus also a barrier to reducing the height of the PoG package.Solder balls or other types of interconnects that couple the overlying passive device package to the underlying circuit board are another factor limiting the height reduction of the overlying passive device package. To better illustrate these challenges in overlying passive device package designs, a conventional overlying passive device package 100 is shown in FIG. The package 100 has a thickness or height H1 relative to a bottom circuit board (not illustrated) that is dependent on the thickness T of the substrate 104 and the diameter d1 of each of the plurality of solder balls 112. The substrate 104 includes a plurality of through substrate vias 102 coupled from the circuit-facing surface 108 of the substrate 104 to the opposing surface 106. The vias 102 may form a 3-dimensional passive structure, such as an embedded inductor 103. As discussed above, the inductance of the embedded inductor 103 decreases as the thickness T of the substrate 104 is lowered. Solder balls 112 are coupled to corresponding pads 110 on surface 108. Since the solder balls 112 protrude from the pads 110 on the surface 108, it can be immediately appreciated that if the diameter d1 of the solder balls 112 is reduced, the height H1 of the package 100 will be correspondingly reduced. However, if the diameter d1 is excessively reduced, the solder ball 112 is liable to be broken. In particular, lead-free solders are required in modern systems due to environmental problems caused by the use of conventional lead-containing solders. However, lead-free solders are generally more brittle than conventional solders, so that their use requires solder balls 112 to have a certain minimum diameter. Both the thickness T of the substrate 104 and the diameter d1 of the solder balls 112 cannot be excessively reduced without sacrificing strength and board level reliability (BLR) and the inductance required by the inductor 103. The height H1 must therefore meet these minimum values for conventional overlying passive device packages. This minimum height requirement reduces the density of the system incorporated into package 100.Accordingly, there is a need in the art for a more compact package design with passive components.OverviewIn order to provide a low profile package substrate comprising passive components, the first side of the substrate includes a plurality of recesses. As used herein, a low profile package substrate including passive devices can also be labeled as an overlying passive device package. Each recess accommodates a corresponding interconnect, such as a solder ball or a metal post. 
A redistribution layer on the first side of the substrate is electrically coupled to at least a subset of the interconnects. The substrate includes a plurality of through-substrate vias. In one embodiment, a pair of the through-substrate vias forms an embedded inductor. The redistribution layer can include leads or conductors extending from a first one of the recesses to one of the through-substrate vias forming the inductor. In this manner, the interconnect received in the first recess is electrically coupled to the first through-substrate via in the embedded inductor through the conductor in the redistribution layer. The substrate can include additional embedded inductors having through-substrate vias that are coupled to corresponding interconnects through the redistribution layer in this manner.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a cross-sectional view of a conventional overlying passive device package.FIG. 2 is a cross-sectional view of a low profile overlying passive device package, in accordance with an embodiment of the present disclosure.FIG. 3A is a cross-sectional view of a low profile overlying passive device package, in accordance with an embodiment of the present disclosure.FIG. 3B is a plan view of the recessed side of the low profile overlying passive device package of FIG. 3A.FIG. 4A is a cross-sectional view of the substrate after forming a through-substrate via.FIG. 4B is a cross-sectional view of the substrate of FIG. 4A after a redistribution layer is deposited on the surface of the substrate facing the die and a passivation layer is deposited on the redistribution layer.FIG. 4C is a cross-sectional view of the substrate of FIG. 4B after formation of a recess on the surface of the substrate facing the board.FIG. 4D is a cross-sectional view of the substrate of FIG. 4C after a redistribution layer is deposited on the surface of the substrate facing the board and a passivation layer is deposited on the redistribution layer.FIG. 4E is a cross-sectional view of the substrate of FIG. 4D after placement of the solder balls in the recesses to complete the fabrication of the low profile overlying passive device package.FIG. 5 is a flow chart of a method of fabrication in accordance with an embodiment of the present disclosure.The embodiments of the present disclosure and their advantages are best understood by referring to the following detailed description. It should be appreciated that the same reference numerals are used to identify the same elements in the one or more drawings.Detailed DescriptionA low profile overlying passive device package including a first side having a plurality of recesses is provided. Each recess can accommodate a corresponding interconnect, such as a solder ball or a metal post. The following discussion will be directed to solder ball interconnect embodiments, but it will be appreciated that other suitable types of interconnects can be used in alternative embodiments. The substrate further includes a plurality of through-substrate vias extending from the first surface of the substrate to the opposite second surface. A redistribution layer on the first side of the substrate is electrically coupled to one or more solder balls in the recesses. For example, the redistribution layer can include a patterned metal layer that forms a lead or conductor that is coupled to a corresponding solder ball received in the recess. The redistribution layer conductor is thus coupled between the corresponding solder ball and an end of the corresponding through-substrate via.
Since the redistribution layer is adjacent to the first surface of the substrate, the end of the through-substrate via to which the redistribution layer conductor is coupled is also adjacent to the first surface.A pair (or more) of through-substrate vias may be coupled together through conductors on the second surface of the substrate to form an embedded inductor. For example, the redistribution layer can include a first conductor extending from the interconnect in a first recess of the recesses to one through-substrate via in the embedded inductor. Similarly, the redistribution layer can include a second conductor extending from the interconnect in a second recess of the recesses to another through-substrate via in the embedded inductor. The interconnect in the first recess is thus electrically coupled to the interconnect in the second recess through the embedded inductor. In this manner, current drawn from an interconnect, such as a solder ball in the first recess, is conducted through the embedded inductor to, for example, a solder ball in the second recess. This is very advantageous because the embedded inductor can have a relatively robust inductance: each of its through-substrate vias is relatively long because it extends from the first side of the substrate to the opposing second side. The resulting overlying passive device package has an advantageously low profile because the solder balls are housed in the recesses. The portion of each solder ball that is received in the corresponding recess does not contribute to the package height.Additionally, the substrate can include a through-substrate via extending from a corresponding recess to the opposing second surface of the substrate. In order to distinguish between the individual through-substrate vias, a through-substrate via extending from the first side of the substrate to the opposite second side is designated herein as a first through-substrate via. In contrast, a through-substrate via extending from a recess to the opposite second side of the substrate is referred to herein as a second through-substrate via. The second through-substrate via is shorter than the first through-substrate via by a length corresponding to the depth of the recess. This reduced length is advantageous when an integrated capacitor, such as a metal-insulator-metal (MIM) capacitor, on the second surface of the substrate is driven through the second through-substrate via: because of its reduced length, the second through-substrate via coupled to the capacitor presents less parasitic resistance and inductance than a coupling through a first through-substrate via would. This is highly advantageous because the substrate may be thick enough to be robust against breakage and warpage and to support relatively long first through-substrate vias that provide increased inductance for the embedded inductor, while the same substrate also supports second through-substrate vias that can drive an integrated capacitor with reduced parasitic resistance and inductance.Because the interconnects (such as solder balls) are housed in the recesses of the substrate, the substrate need not be excessively thinned and the solder balls can still have a sufficiently robust diameter to resist cracking, yet the resulting overlying passive device package has a reduced thickness or height because the solder balls are accommodated in blind vias or recesses.
Since the substrate need not be excessively thinned, the substrate can have a thickness large enough to be robust against cracking and warping. In addition, it is noted that the embedded inductors formed using the through-substrate vias extending through the substrate benefit from the relatively robust substrate thickness, even though the resulting overlying passive device package, which accommodates the solder balls in recesses, has a reduced height. As previously discussed, the inductance of the inductor depends on the area of the loop enclosed by the windings or coils that form the inductor. With regard to the embedded inductor disclosed herein, the inductor coil can be formed from a pair (or more) of first through-substrate vias. The substrate can then have a thickness of sufficient magnitude to achieve a robust inductance from the inductor, while the package height is reduced due to the solder balls being received in the corresponding recesses.In addition, the thickness of the substrate can be sufficiently robust to reduce substrate fragility, warpage, and breakage, while the package height is reduced due to the solder balls being received in the corresponding recesses. Similarly, the solder balls can each have a sufficiently robust diameter to reduce cracking and increase board level reliability. Although the solder balls can have such a robust diameter, these diameters contribute only partially to the package height since the solder balls are housed within the recesses. These and other advantages are better appreciated by the following discussion of example embodiments.Example embodimentsFIG. 2 illustrates an example overlying passive device package 200 that includes a substrate 204 having a minimum thickness T as discussed with respect to the conventional overlying passive device package 100. For example, if the substrate 204 comprises glass, the thickness T can be, for example, at least 100 microns to make the substrate 204 robust enough to provide the desired board level reliability (BLR). In general, the minimum thickness T depends on the properties of the substrate 204. For example, a more robust type of glass may be thinned to approximately 100 microns, whereas a glass that is not as robust may require the thickness T to be 150 microns or more. If substrate 204 is a semiconductor substrate such as silicon, similar limitations on the thickness T will also be present. Alternatively, the substrate 204 may include an organic substrate. A plurality of interconnects (such as solder balls 212) for interconnecting to a circuit board or another package substrate may also have the same minimum diameter d1 as discussed with respect to the conventional overlying passive device package 100. The minimum diameter d1 of the solder balls 212 depends on their composition. For example, if the solder balls 212 include lead-free solder balls, they are more brittle and thus would require a greater minimum diameter d1 compared to a lead-containing embodiment. Although these minimum dimensions are satisfied, the overlying passive device package 200 has a reduced height H2 compared to the height H1 of the overlying passive device package 100 because the solder balls 212 are received in corresponding blind vias or recesses 214 formed in the first side 208 of the substrate 204.
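To make this height benefit concrete, the following arithmetic sketch compares the two package heights. The 200 micron substrate, 250 micron ball diameter, and 100 micron recess depth are example values only, and pad and passivation thicknesses are ignored.

```c
#include <stdio.h>

/* Illustrative height comparison.  All dimensions are assumed example values;
 * pad and passivation thicknesses are ignored for simplicity. */
int main(void)
{
    const double t_substrate  = 200.0;  /* substrate thickness T, microns */
    const double d_ball       = 250.0;  /* solder ball diameter d1, microns */
    const double recess_depth = 100.0;  /* depth of the blind via or recess 214 */

    double h1 = t_substrate + d_ball;                 /* conventional package 100 */
    double h2 = t_substrate + d_ball - recess_depth;  /* package 200 with recessed balls */

    printf("H1 is about %.0f microns; H2 is about %.0f microns, a reduction equal to the recess depth\n",
           h1, h2);
    return 0;
}
```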
The height H2 of the overlying passive device package 200 is thus reduced by approximately the depth of the blind vias or recesses 214.The overlying passive device package 200 can include one or more first through-substrate vias extending from the first surface 208 of the substrate 204 to the opposite second surface 206 of the substrate 204, such as first through-substrate vias 202a, 202b, 202c, and 202d. The first through-substrate via 202a is coupled to the first through-substrate via 202b through a lead or conductor 203a on the second surface 206 of the substrate 204 to form an embedded inductor 215. Similarly, the first through-substrate via 202d is coupled to the first through-substrate via 202c via the conductor 203b to form the embedded inductor 217. Each of the embedded inductors 215 and 217 has an advantageously robust inductance because the thickness T of the substrate 204 is not excessively reduced. For example, the current loop area enclosed by inductor 215 depends on the length of each of the first through-substrate vias 202a and 202b (as well as other factors). Further, the length of each first through-substrate via corresponds to the thickness T of the substrate 204. Since the thickness T does not have to be excessively reduced to achieve an advantageously low package height H2 for the overlying passive device package 200, the first through-substrate vias (such as vias 202a and 202b) may be relatively long so as to provide enhanced inductance for inductor 215.Coupling to inductors 215 and 217 can occur through redistribution layer 220. For example, a solder ball 212 received in recess 214a is coupled to the first through-substrate via 202b in inductor 215 by a redistribution layer conductor 216a and a recess pad 210 that form part of redistribution layer 220. Another solder ball can be coupled to the first through-substrate via 202a by a similar redistribution layer conductor and pad (not illustrated) to complete the coupling to the inductor 215. A similar coupling can be provided with respect to the embedded inductor 217. For example, a solder ball 212 received in recess 214c is coupled to a first through-substrate via in inductor 217 by a redistribution layer conductor 216b and a recess pad 210. In one embodiment, redistribution layer 220 may be considered to include means for electrically coupling certain of the interconnects received by the recesses to corresponding ones of the first through-substrate vias.In contrast to the first through-substrate vias, the second through-substrate via has a reduced length. For example, the second through-substrate via 202e extends from the recess 214b to the second surface 206 of the substrate 204. The second through-substrate via 202e has a length that is shortened by the depth or height of the recess 214b, as compared to the length of a first through-substrate via, which is substantially equal to the thickness T of the substrate 204. This reduced length reduces the parasitic inductance and resistance in the coupling of the second through-substrate via 202e to the capacitor 207 integrated on the surface 206 of the substrate 204. In one embodiment, capacitor 207 can include a metal-insulator-metal (MIM) capacitor.The recesses 214 can also include an adhesive (not illustrated) that helps hold the solder balls 212. The first and second through-substrate vias 202 can be used both for electrical coupling functions and for heat transfer roles.
Because the second through-substrate via is of a reduced length compared to the first through-substrate vias, the second through-substrate via is particularly effective at transferring heat from the second surface 206 to the solder ball received in the corresponding recess. A passivation layer or solder resist layer 230 may cover the second surface 206. Similarly, a passivation layer or solder resist layer 225 can cover the first surface 208 of the substrate 204. Passivation layers 230 and 225 can broadly include a variety of different suitable materials, such as silicon nitride, dielectric polymers such as polyimide, or organic polymers.The overlying passive device package 300 illustrated in FIG. 3A is one of many alternative embodiments. In this embodiment, surface 208 includes a recess 214d that receives a solder ball 212 that is not coupled to any through-substrate vias or other structures. The solder ball 212 in the recess 214d is thus only used to mechanically couple the overlying passive device package 300 to a corresponding circuit board or additional substrate (not illustrated), as opposed to having an electrical function. The remaining components in the overlying passive device package substrate 300 are as discussed with respect to the overlying passive device package 200.A plan view of the surface 208 of an exemplary overlying passive device package substrate 360 is shown in FIG. 3B to better illustrate the layout of the redistribution layer pads 210 and the redistribution layer conductors 216 relative to the first through-substrate vias 202. The example recess 214f includes a redistribution layer pad 210 coupled to a first through-substrate via 202 by a redistribution layer conductor 216. In contrast, recess 214e includes a redistribution layer pad 210 that is not coupled to any redistribution layer conductor. The pad 210 in the recess 214e may in turn be coupled to a second through-substrate via (not illustrated). Alternatively, the recess 214e may serve only the purpose of mechanical coupling, as discussed with respect to the recess 214d of FIG. 3A.The enhanced thickness T of the disclosed substrate, such as the substrate 204 shown in FIG. 2, enables the elimination of temporary carriers that would otherwise be required during manufacture if the substrate were thinned. Additionally, the length of the first through-substrate vias 202 can be increased, which results in increased inductance and a better quality factor for inductors such as embedded inductors 215 and 217. In addition, better heat flow through the substrate 204 can be achieved using a second through-substrate via that is shortened compared to the substrate thickness T, such as the second through-substrate via 202e. This same shortening of the via 202e also reduces its resistance, which increases the quality factor of the capacitors (such as capacitor 207) that are driven through such vias. The resulting reduced signal path length through such second through-substrate vias is also beneficial in enhancing signal integrity. In addition, since the portion of each solder ball 212 that is received in the corresponding recess 214 does not contribute to the package height, the solder balls 212 can maintain the required minimum diameter, which also improves board level reliability (BLR) and resistance to solder ball cracking. Blind vias or recesses 214 also accommodate the use of adhesives, which further improves the BLR.
Finally, the blind vias or recesses 214 act as a stencil or screen during the ball-drop phase of manufacture such that the solder balls 212 can be received in the corresponding recesses 214 with fewer errors. An example manufacturing process will now be discussed.Sample manufacturing processThe following discussion will be directed to a wafer level process (WLP) embodiment in which a substrate used to support passive components in an overlying passive device package is processed as part of a wafer (or panel) before being singulated into individual packages. It will be appreciated, however, that the processes discussed herein can also be applied individually to substrates that have already been singulated from a wafer (as compared to processing wafers (or panels) as a unit). Regardless of whether a WLP process is used to fabricate the overlying passive device packages, the reduced-height overlying passive device packages disclosed herein all receive their interconnects (such as solder balls) in corresponding blind vias or recesses.An example manufacturing process flow is shown in FIGS. 4A through 4E. As illustrated in FIG. 4A, a substrate 204, such as a glass panel or wafer (or a semiconductor wafer), is processed to form through-substrate vias 202. Alternatively, substrate 204 can include a stacked organic panel. To form the through-substrate vias, the substrate 204 can be laser drilled, mechanically drilled, or etched to form vias that are subsequently plated with copper, nickel, or other suitable metal to form through-substrate vias 202. Alternatively, an electroless plating process can be used instead of electroplating. After depositing metal to form through-substrate vias 202, the first surface 208 of substrate 204 and the opposing second surface 206 may then be polished. Since the recesses have not yet been formed on the first surface 208 (which may be the surface facing the board), the difference in length between the first and second through-substrate vias has not yet arisen at this stage.As shown in FIG. 4B, the second surface 206 of the substrate 204 can be processed, for example using photolithographic techniques, with a patterned metal layer, such as a copper or nickel metal layer, to form conductors 203 that connect corresponding through-substrate vias to form inductors. Additionally, depositing the MIM structure on surface 206 to form any desired capacitor (not illustrated) may also be performed at this time. Additionally, passivation layer 230 can be deposited on surface 206 at this stage of fabrication. If subsequent contact to some of the through-substrate vias is required (such as for heat transfer or signal conduction to a die), the patterned metal layer forming the conductors 203 can also be patterned to form pads, such as pad 219. In such embodiments, passivation layer 230 can include a pad opening, such as pad opening 218, for exposing pad 219.Surface 208 can then be etched or drilled to form blind vias or recesses 214, as shown in FIG. 4C. Regarding the etching of the recesses 214, a wet etching or dry etching technique can be used. Alternatively, reactive ion etching can be used to etch the recesses 214. Regarding drilling, laser or mechanical drilling techniques are suitable. In the cross-sectional view of FIG. 4C, the recesses 214 do not intersect any of the through-substrate vias 202, such that all of the illustrated vias are first through-substrate vias. Alternatively, a recess may intersect a through-substrate via, such as previously discussed with respect to recess 214b of FIG.
2, to form a second through-substrate via 202e (shown only in FIG. 2).As illustrated in FIG. 4D, backside redistribution layer pads 210 and conductors 216 may then be deposited on surface 208 of substrate 204. For example, a masking layer (not illustrated) can be patterned to include openings for plating of copper, nickel, or other suitable metal to form the pads 210 and conductors 216. Finally, the solder balls 212 are dropped into the recesses 214 and reflowed, as shown in FIG. 4E. Substrate 204 can then be singulated from its panel or wafer (not illustrated) to complete the fabrication process. The manufacturing process will now be summarized in the following flow chart.Sample manufacturing process flow chartA flow chart of an example fabrication method is shown in FIG. 5. The method includes a step of forming a first recess on a first surface of the substrate. Step 505 includes forming a plurality of first through-substrate vias extending through the substrate. The vias 202a through 202d of FIG. 2 are examples of such first through-substrate vias. Step 510 includes forming a redistribution layer on the first surface. Finally, step 515 includes coupling an interconnect into the first recess, wherein forming the redistribution layer forms a conductor that couples the interconnect to the corresponding first through-substrate via. For example, forming the redistribution layer 220 of FIG. 2 forms the conductor 216a that couples the solder ball 212 in the recess 214a to the first through-substrate via 202b. In this regard, it is noted that some of the recesses (such as recess 214d of FIG. 3A) accommodate solder balls 212 that are not coupled to any through-substrate vias through any redistribution layer conductors. Some example electronic systems into which a low profile overlying passive device package can advantageously be incorporated are now discussed.Example electronic systemThe overlying passive device package disclosed herein can be incorporated into a wide variety of electronic systems. For example, as shown in FIG. 6, a cellular telephone 600, a laptop 605, and a tablet PC 610 may each include a low profile overlying passive device package constructed in accordance with the present disclosure. Other exemplary electronic systems, such as music players, video players, communication devices, and personal computers, can also be configured with overlying passive device packages constructed in accordance with the present disclosure.Many modifications, substitutions, and changes can be made in the materials, devices, arrangements, and methods of use of the devices of the present disclosure without departing from the spirit and scope of the present disclosure, as will be appreciated by those of ordinary skill in the art, depending on the particular application at hand. In view of the above, the scope of the present disclosure should not be limited to the specific embodiments illustrated and described herein (as they are merely some examples of the present disclosure), but should instead be commensurate with the appended claims and their functional equivalents.
Techniques are described that can be used to support integrity validation of protocol data units. An iSCSI compatible logic may establish a memory region to store a header portion of the protocol data unit. In some implementations, the iSCSI compatible logic may read the header and determine a size of a second memory region to store a payload portion of the protocol data unit. In some implementations, the iSCSI compatible logic may set the second memory region to the maximum possible size of the payload portion. TCP compatible logic may include the capability to validate the integrity of the header or data portions of the protocol data unit. TCP compatible logic may request data mover logic to determine an integrity validation value for a header and/or data portion of the protocol data unit in the process of copying the protocol data unit into the memory region or the second memory region. TCP compatible logic may compare the determined integrity validation value with an integrity validation value included with the protocol data unit.
Claims What is claimed is: 1. A method comprising: allocating a first memory region to store a header of a protocol data unit received using at least one network protocol unit; determining a size of a payload portion of the protocol data unit based on the header; allocating a second memory region to store the payload portion based on the determined size, wherein the payload portion is received using at least one network protocol unit; identifying a location of an integrity validation value within at least one of the first or second memory region; and requesting determination of an integrity validation value using a data mover logic. 2. The method of Claim 1 , wherein the requesting determination of an integrity validation value comprises requesting determination of an integrity validation value for the payload portion and further comprising: requesting determination of an integrity validation value over the payload portion of the protocol data unit using the data mover logic; identifying a seed value in a third memory region; requesting the data mover logic to copy the data portion of the protocol data unit to the second memory region; and requesting the data mover logic to determine the integrity validation value in part using the third region. 3. The method of Claim 2, further comprising: the data mover logic retrieving an integrity validation seed value from the third memory region; the data mover logic writing the determined integrity validation value to the third memory region; comparing the determined integrity validation value with the integrity validation value in the second memory region; and indicating pass or fail based on the comparing. 4. The method of Claim 1 , wherein the requesting determination of an integrity validation value comprises requesting determination of an integrity validation value for the header portion and further comprising: requesting determination of an integrity validation value of the header portion using the data mover logic; identifying a seed value in a third memory region; and requesting the data mover logic to copy the header portion to the first memory region and requesting the data mover logic to determine the integrity validation value in part using the third region. 5. The method of Claim 4, further comprising: the data mover logic retrieving an integrity validation seed value from the third memory region; the data mover logic writing the determined integrity validation value to the third memory region; comparing the determined integrity validation value with the integrity validation value in the first memory region; and indicating pass or fail based on the comparing. 6. The method of Claim 1 , wherein the first region comprises a maximum size of a header portion of the protocol data unit under applicable protocol. 7. The method of Claim 1 , wherein the first region includes an integrity validation value for the header portion transmitted with the protocol data unit. 8. The method of Claim 1 , wherein the second region includes an integrity validation value for the payload portion transmitted with the protocol data unit. 9. The method of Claim 1 , wherein the integrity validation value comprises a cyclical redundancy checking (CRC) value. 10. 
A system comprising : a host system comprising a memory and a data mover logic; a network component communicatively coupled to the host system to provide received network protocol units for storage in the memory, wherein the host system includes: logic to allocate a first region in the memory to store a header of a protocol data unit received using at least one network protocol unit, logic to determine a size of a payload portion of the protocol data unit based on the header, logic to allocate a second region in the memory to store the payload portion based on the determined size, wherein the payload portion is received using at least one network protocol unit, logic to identify a location of an integrity validation value within at least one of the first or second region, and logic to request determination of an integrity validation value using the data mover logic; and a network medium communicatively coupled to the network component. 11. The system of Claim 10, wherein the data mover logic is to retrieve an integrity validation seed value from a third memory region and the data mover logic is to write determined integrity validation value to the third memory region and further comprising: logic to compare the determined integrity validation value with the integrity validation value in the first memory region; and logic to indicate pass or fail based on the comparison. 12. The system of Claim 10, wherein the data mover logic is to retrieve an integrity validation seed value from a third memory region and the data mover logic is to write determined integrity validation value to the third memory region and further comprising: logic to compare the determined integrity validation value with the integrity validation value in the second memory region; and logic to indicate pass or fail based on the comparison. 13. The system of Claim 10, wherein the data mover logic comprises a direct memory access engine. 14. A method comprising : allocating a first memory region to store a header portion of a protocol data unit received using at least one network protocol unit; allocating a second memory region to store a payload portion of the protocol data unit, wherein the payload portion is received using at least one network protocol unit and wherein the second region comprises a maximum size of a payload portion of a protocol data unit under an applicable protocol; and requesting a stack logic to determine integrity validation values for at least one of the header and data portion. 15. The method of Claim 14, further comprising: the stack logic requesting a data mover logic to determine an integrity validation value over one or more portion of the protocol data unit in a course of copying portions of the protocol data unit into any of the first and second regions, wherein the stack logic requesting includes the stack logic identifying a location of one or more integrity validation value included with the protocol data unit 16. The method of Claim 15, wherein identifying a location of one or more integrity validation value included with the protocol data unit comprises: the stack logic inspecting the protocol data unit to identify a location of one or more integrity validation value included with the protocol data unit based at least on any of: a predetermined header size, information in the header indicating the size of the data, and one or more integrity validation value being located at a set location relative to the header and data. 17. 
The method of Claim 14, wherein the requesting a stack logic to determine integrity validation values for at least one of the header and data portion comprises requesting determination of an integrity validation value for the header portion and further comprising: requesting determination of an integrity validation value of the header portion using a data mover logic; identifying a seed value in a third memory region; and requesting the data mover logic to copy the header portion to the first memory region and requesting the data mover logic to determine the integrity validation value in part using the third region. 18. The method of Claim 17, further comprising: the data mover logic retrieving an integrity validation seed value from the third memory region; the data mover logic writing the determined integrity validation value to the third memory region; comparing the determined integrity validation value with the integrity validation value in the first memory region; and indicating pass or fail based on the comparing. 19. The method of Claim 14, wherein the requesting a stack logic to determine integrity validation values for at least one of the header and data portion comprises requesting determination of an integrity validation value for the data portion and further comprising: requesting determination of an integrity validation value over the data portion of the protocol data unit using the data mover logic; identifying a seed value in a third memory region; and requesting the data mover logic to copy the data portion of the protocol data unit to the second memory region and requesting the data mover logic to determine the integrity validation value in part using the third region. 20. The method of Claim 19, further comprising: the data mover logic retrieving an integrity validation seed value from the third memory region; the data mover logic writing the determined integrity validation value to the third memory region; comparing the determined integrity validation value with the integrity validation value in the second memory region; and indicating pass or fail based on the comparing. 21. The method of Claim 14, wherein the first region includes an integrity validation value for the header portion transmitted with the protocol data unit. 22. The method of Claim 14, wherein the second region includes an integrity validation value for the data portion transmitted with the protocol data unit. 23. The method of Claim 14, wherein the integrity validation value comprises a cyclical redundancy checking (CRC) value. 24. A system comprising: a host system comprising a memory and a data mover logic; a network component communicatively coupled to the host system to provide received network protocol units for storage in the memory, wherein the host system includes : logic to allocate a first memory region to store a header portion of a protocol data unit received using at least one network protocol unit, logic to allocate a second memory region to store a data portion of the protocol data unit, wherein the data portion is received using at least one network protocol unit and wherein the second region comprises a maximum size of a data portion of a protocol data unit under an applicable protocol, and logic to request a stack logic to determine integrity validation values for at least one of the header and data portion; and a network medium communicatively coupled to the network component. 25. 
The system of Claim 24, wherein the stack logic is to request the data mover logic to determine an integrity validation value over one or more portion of the protocol data unit in a course of copying portions of the protocol data unit into any of the first and second regions, wherein to request, the stack logic is to identify a location of one or more integrity validation value included with the protocol data unit. 26. The system of Claim 25, wherein to identify a location of one or more integrity validation value included with the protocol data unit, the stack logic is to inspect the protocol data unit to identify a location of one or more integrity validation value included with the protocol data unit based at least on any of: a predetermined header size, information in the header indicating the size of the data, and one or more integrity validation value being located at a set location relative to the header and data. 27. The system of Claim 24, wherein the data mover logic is to retrieve an integrity validation seed value from a third memory region and the data mover logic is to write determined integrity validation value to the third memory region and further comprising: logic to compare the determined integrity validation value with the integrity validation value in the first memory region; and logic to indicate pass or fail based on the comparison. 28. The system of Claim 24, wherein the data mover logic is to retrieve an integrity validation seed value from a third memory region and the data mover logic is to write determined integrity validation value to the third memory region and further comprising: logic to compare the determined integrity validation value with the integrity validation value in the second memory region; and logic to indicate pass or fail based on the comparison. 29. The system of Claim 24, wherein the data mover logic comprises a direct memory access engine. |
TECHNIQUES TO PROCESS RECEIVED NETWORK PROTOCOL UNITSFieldThe subject matter disclosed herein relates to techniques to process received network protocol units.Related ArtData communications systems typically utilize techniques to verify the integrity of received information. For example, to verify integrity of received packets, various protocols such as Remote Direct Memory Access (RDMA), Internet SmallComputer System Interface (iSCSI), and Stream Control Transmission Protocol (SCTP) may involve calculation of cyclical redundancy checking (CRC) values over received packets and a comparison of the calculated CRC values with CRC values provided with the packets. For example, RDMA is described for example at www.rdmaconsortium.com as well as in An RDMA Protocol Specification, Version 1.0 (Oct. 2002). iSCSI is described for example at RFC 3720: Internet Small Computer Systems Interface (iSCSI) (Apr. 2004). SCTP is described for example at The Internet Society RFC-3286, An Introduction to the Stream Control Transmission Protocol (SCTP) (May 2002). Brief Description of the Drawings [0003] Embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the drawings and in which like reference numerals refer to similar elements.[ 0004 ] FIG. 1 depicts an example system embodiment in accordance with some embodiments of the present invention. [0005] FIGs. 2 and 3 depict an example of interactive elements that can be used to process received network protocol units in accordance with some embodiments of the present invention.FIG. 4 depicts an example of a flow diagram that can be used to process received network protocol units in accordance with some embodiments of the present invention.Detailed Description[0007 ] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase "in one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in one or more embodiments.[ 0008 ] FIG. 1 depicts in computer system 100 a suitable system in which some embodiments of the present invention may be used. Computer system 100 may include host system 102, bus 116, and network component 118. [0009] Host system 102 may include chipset 105 , processor 110, host memory112, and storage 114. Chipset 105 may provide intercommunication among processor 110, host memory 112, storage 114, bus 116, as well as a graphics adapter that can be used for transmission of graphics and information for display on a display device (both not depicted). For example, chipset 105 may include a storage adapter (not depicted) capable of providing intercommunication with storage 114. For example, the storage adapter may be capable of communicating with storage 114 in conformance at least with any of the following protocols: Small Computer Systems Interface (SCSI), Fibre Channel (FC), and/or Serial Advanced Technology Attachment (S-ATA). [ 0010 ] In some embodiments, chipset 105 may include data mover logic capable to perform transfers of information within host system 102 or between host system 102 and network component 118. 
As used herein, a "data mover" refers to a module for moving data from a source to a destination without using the core processing module of a host processor, such as processor 110, or otherwise does not use cycles of a processor to perform data copy or move operations. By using the data mover for transfer of data, the processor may be freed from the overhead of performing data movements, which may result in the host processor running at much slower speeds. A data mover may include, for example, a direct memory access (DMA) engine. In some embodiments, data mover may be implemented as part of processor 110, although other components of computer system 100 may include the data mover. In some embodiments, data mover may be implemented as part of chipset 105.In some embodiments, the data mover may include a capability to determine an integrity validation value or at least have the capability to access logic to determine an integrity validation value. The data mover may be capable to determine integrity validation values such as but not limited to CRC values and checksum. The data mover may be used to determine an integrity validation value over a header and/or data portion of an iSCSI protocol data unit (PDU) based on a seed value in host memory and write the integrity validation value result to the seed value memory location. [0012] Processor 110 may be implemented as Complex Instruction Set Computer(CISC) or Reduced Instruction Set Computer (RISC) processors, multi-core, or any other microprocessor or central processing unit. Host memory 112 may be implemented as a volatile memory device such as but not limited to a Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), or Static RAM (SRAM). Storage 114 may be implemented as a non- volatile storage device such as but not limited to a magnetic disk drive, optical disk drive, tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up synchronous DRAM (SDRAM), and/or a network accessible storage device. [0013] Bus 116 may provide intercommunication among at least host system 102 and network component 118 as well as other peripheral devices (not depicted). Bus 116 may support serial or parallel communications. Bus 116 may support node-to-node or node-to-multi-node communications. Bus 116 may at least be compatible with Peripheral Component Interconnect (PCI) described for example at Peripheral Component Interconnect (PCI) Local Bus Specification, Revision 3.0, February 2, 2004 available from the PCI Special Interest Group, Portland, Oregon, U.S.A. (as well as revisions thereof); PCI Express described in The PCI Express Base Specification of the PCI Special Interest Group, Revision 1.0a (as well as revisions thereof); PCI-x described in the PCI-X Specification Rev. 1.1, March 28, 2005, available from the aforesaid PCI Special Interest Group, Portland, Oregon, U.S.A. (as well as revisions thereof); and/or Universal Serial Bus (USB) (and related standards) as well as other interconnection standards.[0014 ] Network component 118 may be capable of providing intercommunication between host system 102 and network 120 in compliance at least with any applicable protocols. Network component 118 may intercommunicate with host system 102 using bus 116. In one embodiment, network component 118 may be integrated into chipset 105. 
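As an illustration of the seeded digest accumulation described above for the data mover, the following sketch shows a software model of a CRC accumulator whose running value is kept at a seed location and stored back after each segment is processed. CRC32C is shown because it is the digest commonly used for iSCSI header and data digests; the function name and the software implementation are illustrative assumptions, not a description of the data mover hardware.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative software model of a seeded digest accumulator.  A caller keeps
 * the running value at a seed location in memory, passes it in for each
 * segment, and writes the returned value back, mirroring the seed-value
 * behavior described for the data mover.  Use a seed of 0 for the first
 * segment of a header or data portion. */
uint32_t crc32c_accumulate(uint32_t seed, const uint8_t *buf, size_t len)
{
    uint32_t crc = ~seed;                       /* resume from the prior state */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
    }
    return ~crc;                                /* value to write back to the seed location */
}
```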
"Network component" may include any combination of digital and/or analog hardware and/or software on an I/O (input/output) subsystem that may process one or more packets to be transmitted and/or received over a network. In one embodiment, the I/O subsystem may include, for example, a network component card (NIC), and network component may include, for example, a MAC (media access control) layer of the Data Link Layer as defined in the Open System Interconnection (OSI) model for networking protocols. The OSI model is defined by the International Organization for Standardization (ISO) located at 1 rue de Varembe, Case postale 56 CH-1211 Geneva 20, Switzerland. [0015] Network 120 may be any network such as the Internet, an intranet, a local area network (LAN), storage area network (SAN), a wide area network (WAN), or wireless network. Network 120 may exchange traffic with network component 118 using the Ethernet standard (described in IEEE 802.3 and related standards) or any communications standard. As used herein, a "network protocol unit" may include any packet or frame or other format of information with a header and payload portions formed in accordance with any protocol specification.FIG. 2 depicts an example of elements that can be used to process received network protocol units, in accordance with some embodiments of the present invention. In some embodiments, host 200 may include at least host memory 201, iSCSI logic 202, stack 204, network component driver 206, and data mover 210. Host 200 may include other logic (not depicted) such as but not limited to a processor, memory device, and storage device.[ 0017 ] Based on the Internet Small Computer System Interface (iSCSI) protocol, iSCSI logic 202 expects that a header of a received iSCSI PDU is 48 bytes. The PDU may be received among one or more network protocol unit (NPU). Accordingly, iSCSI logic 202 posts header buffer 208A (i.e., starting location and size) within host memory 201 that is just large enough to store the iSCSI header and an integrity validation value for the header. Because the last four bytes of the header is an integrity validation value, iSCSI logic 202 knows that the last four bytes of header buffer 208 A stores an integrity validation value for the header. In some embodiments, iSCSI logic 202 may post header buffer 208A prior to network component 250 receiving an iSCSI PDU by way of one or more NPU. iSCSI logic 202 may indicate to stack 204 the location of header buffer 208 A. [0018] Stack 204 may request data mover 210 to transfer a header portion of aPDU among one or more NPU stored in host memory 201 to header buffer 208 A. The one or more NPU may have been received by network component 250 and transferred into host memory 201. After the header portion is written into header buffer 208 A, stack 204 may return use of header buffer 208A and indicate storage of a header to iSCSI logic 202. [0019] In some embodiments, stack 204 may request data mover 210 to copy header of a PDU stored among one or more NPU in host memory 201 to header buffer 208 A and request data mover 210 to determine an integrity validation value over the header using a seed value stored at a specified location. The seed value is shown as seed value 207. Seed value 207 may be used to store a seed for a single PDU and to accumulate integrity validation values determined over a single PDU that spans multiple NPUs. In some embodiments, data mover 210 may write the determined value to the same location which contained the seed value. 
An integrity validation value may be a CRC value, checksum, or any other value determined over a portion of a PDU. [0020] iSCSI logic 202 may process the iSCSI header stored in header buffer 208A in accordance with iSCSI protocol processing and determine the length of the data portion of the PDU based on content in the iSCSI header. For example, iSCSI is described at least in IP Storage Working Group Internet Draft RFC 3720 entitled "Internet Small Computer Systems Interface (iSCSI)" (April 2004) and revisions thereof. Accordingly, rather than post a data buffer large enough to store the largest-size scenario of the data portion of a PDU, iSCSI logic 202 may post data buffer 208B in host memory 201 that is just large enough to store the data and an integrity validation value for the data. Accordingly, by knowing the size of the data portion of a PDU, iSCSI logic 202 may know the location of an integrity validation value included with the data because, under iSCSI, the last four bytes of the data portion are an integrity validation value. [0021] iSCSI logic 202 may pass a descriptor to stack 204 for each buffer (e.g., header buffer 208A and/or data buffer 208B) indicating the buffer location and the starting location in memory of the integrity validation value associated with the header and/or data. In some embodiments, determination of header and/or data integrity validation may be offloaded to other logic. An integrity validation value will be located in the last four bytes of each of header buffer 208A and data buffer 208B.Stack 204 may request data mover 210 to copy the data portion of a PDU from one or more NPU in host memory 201 into data buffer 208B. In some embodiments, stack 204 may request data mover 210 to copy a data portion of a PDU from one or more NPU in host memory 201 to data buffer 208B and request data mover 210 to determine an integrity validation value over the data portion using a seed value stored at seed value 207. In some embodiments, data mover 210 may write the determined value to seed value 207. If multiple NPUs transfer a single iSCSI PDU, data mover 210 may accumulate the integrity validation value in seed value 207 until it reaches the end of the PDU. If there are multiple NPUs which transfer a single iSCSI PDU, the determined integrity validation value may be accumulated in seed value 207 until the end of the header or data portion of the PDU. In order to handle the case where header buffer 208A and/or data buffer 208B is posted in more than one post call to stack 204, descriptor semantics can be defined that allow the data over which the integrity validation value is to be calculated to span multiple NPUs.In some embodiments, stack 204 may be capable to determine TCP/IP protocol compliance for one or more received NPU in accordance with TCP/IP. For example, the TCP/IP protocol is described at least in the publication entitled "Transmission Control Protocol: DARPA Internet Program Protocol Specification," prepared for the Defense Advanced Research Projects Agency (RFC 793, published September 1981). [0024] After data mover 210 determines an integrity validation value over the entire header, stack 204 may compare the determined integrity validation value with the value included with the header using the integrity validation value from header buffer 208A. Under iSCSI, integrity validation values in a PDU may be located among the last 4 bytes of the header portion and among the last 4 bytes of the data portion.
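To illustrate the sizing described above, the following sketch reads the data segment length from the 48-byte iSCSI basic header (bytes 5 through 7, big-endian, per RFC 3720) and sizes the two regions under the simplified model used here, in which each region holds its portion followed by a 4-byte digest in its last four bytes. Additional header segments and pad bytes are ignored, and the structure and function names are illustrative only.

```c
#include <stddef.h>
#include <stdint.h>

#define ISCSI_BHS_LEN 48u   /* iSCSI basic header segment size */
#define DIGEST_LEN     4u   /* size of a header or data digest */

/* Read the 24-bit DataSegmentLength from the basic header (bytes 5..7,
 * big-endian); this is the content the iSCSI logic uses to size data
 * buffer 208B. */
static uint32_t iscsi_data_segment_length(const uint8_t *bhs)
{
    return ((uint32_t)bhs[5] << 16) | ((uint32_t)bhs[6] << 8) | (uint32_t)bhs[7];
}

/* Region sizes and digest offsets under the simplified layout used in the
 * text: each posted region ends with its 4-byte digest. */
struct pdu_regions {
    size_t header_region_len;  /* e.g., header buffer 208A */
    size_t data_region_len;    /* e.g., data buffer 208B   */
    size_t header_digest_off;  /* offset of the header digest within its region */
    size_t data_digest_off;    /* offset of the data digest within its region   */
};

static struct pdu_regions size_pdu_regions(const uint8_t *bhs)
{
    struct pdu_regions r;
    uint32_t data_len = iscsi_data_segment_length(bhs);

    r.header_region_len = ISCSI_BHS_LEN + DIGEST_LEN;
    r.data_region_len   = (size_t)data_len + DIGEST_LEN;
    r.header_digest_off = r.header_region_len - DIGEST_LEN;
    r.data_digest_off   = r.data_region_len - DIGEST_LEN;
    return r;
}
```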
If the determined integrity validation value matches the integrity validation value included with the header, then the header passes. If the determined integrity validation value does not match the integrity validation value included with the header, then the PDU fails. Similar techniques can be used to determine pass or fail for the data portion of the PDU using an integrity validation value for the data portion from data buffer 208B. In some embodiments, stack 204 may indicate pass/fail of the validation to iSCSI logic 202, for example, when it returns the posted buffer(s) associated with the PDU to iSCSI logic 202. [0025] In some embodiments, instead of stack 204 comparing the determined integrity validation value for the header and/or data portion with the integrity validation value(s) for the header and/or data portion received with the PDU, data mover 210 may include or have access to logic capable to compare the determined integrity validation value with an integrity validation value(s) received in a network protocol unit. Stack 204 may instruct data mover 210 to compare the determined integrity validation value(s) with the integrity validation value(s) received in a PDU. When instructed, data mover 210 may compare the determined integrity validation value with the integrity validation value received with a PDU. Data mover 210 may indicate a pass/fail indication for header and/or data portions of a PDU to stack 204. In some embodiments, determination of whether data mover 210 has completed determination and/or validation of an integrity validation value can be made by polling or interrupts.When instructed, data mover 210 may determine an integrity validation value over a header and/or data using an integrity validation seed value in host memory and prior to writing the header or data to respective header buffer 208A or data buffer 208B. In some embodiments, stack 204 provides the location of the integrity validation seed value to data mover 210. In some embodiments, stack 204 provides the integrity validation seed value to data mover 210. In some embodiments, determination of an integrity validation value may use table lookup and/or arithmetic-logic-unit operations. In some embodiments, determination of an integrity validation value may include calculations and/or uses of look-up-tables.[ 0027 ] Network component driver 206 may receive descriptors from network component 250 that indicate pointer(s) to locations within host memory that are capable to store a one or more received network protocol unit (NPU). Host memory 201 may store one or more NPU received by network component 250.[ 0028 ] In some embodiments, network component 250 may include a transceiver logic 252, network component data mover 254, and descriptor manager 256. Transceiver logic 252 may be capable to receive network protocol units through a physical medium and transmit network protocol units through a physical medium. The physical medium may be a coaxial cable, wire-line, fiber optic cable, or other signal propagation medium. Alternatively or in addition, transceiver logic 252 may be capable to receive and transmit signals using wireless techniques. For example, transceiver logic 252 may receive and transmit network protocol units in conformance with applicable protocols such as Ethernet as described in IEEE Standard 802.3 (2002) and revisions thereof, although other protocols may be used. 
Transceiver logic 252 may be used to perform media access control operations as prescribed by applicable protocols such as Ethernet, although other protocols may be used, as well as other protocol-related processing. [0029] Network component data mover 254 may transfer one or more NPU received by network component 250 to host memory 201. Descriptor manager 256 may receive one or more descriptor 216 from host 200. Descriptor manager 256 may modify one or more descriptor 216 to describe a storage location of a received network protocol unit in host memory 201. Descriptor manager 256 may provide the modified one or more descriptor 216 to host 200 to indicate storage of an NPU into memory 201. [0030] FIG. 3 depicts an example of interactive elements that can be used to process received network protocol units in accordance with some embodiments of the present invention. In some embodiments, host 300 may include at least host memory 301, iSCSI logic 302, stack 304, network component driver 306, and data mover 310. Host 300 may include other logic (not depicted) such as but not limited to a processor, memory device, and storage device. [0031] In some embodiments, iSCSI logic 302 may set a buffer region (e.g., starting location and size) for a PDU (shown as iSCSI PDU 308) prior to arrival of the PDU by way of one or more NPU at network component 350. In some embodiments, iSCSI logic 302 may set a location to store the data portion of the PDU for iSCSI PDU 308 at any time before completion of copying of a header of the PDU into iSCSI PDU 308. In some embodiments, iSCSI logic 302 does not predict the size of the data portion of a PDU, and the region for the data portion of the PDU in iSCSI PDU 308 is set to the maximum size permitted under applicable protocols. [0032] iSCSI logic 302 may provide an indication to stack 304 that a given connection is an iSCSI connection, along with a request to offload integrity validation value determination to data mover 310 as well as an indication of whether determination of the header and/or data integrity validation value is to be offloaded to data mover 310. iSCSI logic 302 may further indicate that the next iSCSI PDU buffer posted on the connection is to contain the start of an iSCSI header.In some embodiments, stack 304 determines locations of integrity validation value(s) within a PDU. Under iSCSI, the last 4 bytes of the header and data portions of a PDU are integrity validation values. By inspecting a TCP byte stream, stack 304 may determine where an iSCSI PDU starts. Because the header is a fixed size and the last 4 bytes of the header are an integrity validation value, the location of the integrity validation value of the header can be determined. The header may indicate the size of the data portion of the PDU. Stack 304 may determine the location of the integrity validation value for the data portion by inspecting the header, because the integrity validation value for the data portion is located in the last 4 bytes of the data portion. [0034] Stack 304 may request determination by data mover 310 of an integrity validation value over the header and/or data portions of a received PDU. Stack 304 may identify location(s) of an integrity validation value for the header and/or data portion of a PDU in received NPUs, which are used to validate the integrity validation values determined by the data mover. The identified locations of integrity validation values may be in iSCSI PDU 308.
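Under the simplified single-buffer layout assumed above (basic header, 4-byte header digest, data segment, 4-byte data digest, with additional header segments and pad bytes ignored), the digest locations within iSCSI PDU 308 might be computed as in the following sketch; the names and the layout simplification are assumptions for illustration.

```c
#include <stddef.h>
#include <stdint.h>

#define ISCSI_BHS_LEN 48u
#define DIGEST_LEN     4u

/* Offsets of the digests within one contiguous posted PDU buffer
 * (cf. iSCSI PDU 308), under the simplified layout: basic header,
 * header digest, data segment, data digest. */
struct pdu_digest_offsets {
    size_t header_digest;  /* where the received header digest sits    */
    size_t data_digest;    /* where the received data digest sits      */
    size_t pdu_len;        /* bytes of the buffer occupied by this PDU */
};

static struct pdu_digest_offsets locate_digests(const uint8_t *pdu_buf)
{
    /* DataSegmentLength: bytes 5..7 of the basic header, big-endian. */
    uint32_t data_len = ((uint32_t)pdu_buf[5] << 16) |
                        ((uint32_t)pdu_buf[6] << 8)  |
                        (uint32_t)pdu_buf[7];
    struct pdu_digest_offsets o;

    o.header_digest = ISCSI_BHS_LEN;
    o.data_digest   = ISCSI_BHS_LEN + DIGEST_LEN + data_len;
    o.pdu_len       = o.data_digest + DIGEST_LEN;
    return o;
}
```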
Stack 304 may further indicate to data mover 310 location(s) in which to store header and/or data portions of a PDU in iSCSI PDU 308. [0035] In some embodiments, stack 304 may be capable to determine TCP/IP protocol compliance for one or more received NPU in accordance with TCP/IP. [0036] In some embodiments, stack 304 may allocate a memory location to be used to accumulate the integrity validation value determined by data mover 310 (shown as "seed value 307"). In some embodiments, stack 304 may allocate seed value 307 and provide the location of seed value 307 to a driver for data mover 310. In some embodiments, the integrity validation seed value location is indicated by stack 304. In some embodiments, the integrity validation seed value is provided by stack 304. Stack 304 may provide one or more descriptor to data mover 310 to request data mover 310 to copy a portion of one or more PDU stored in one or more NPU into PDU buffer 308 and instruct data mover 310 to determine an integrity validation value for the header and/or data using a seed value stored in seed value 307. In some embodiments, data mover 310 may write the determined value to the same location which contained the seed value (i.e., seed value 307). [0037] Stack 304 may compare the determined integrity validation value for the header and/or data with the value included with the respective header and/or data portion of the PDU. If the determined integrity validation value for the header and/or data portion matches the integrity validation value received in the respective header and/or data portion of the PDU, then the received PDU passes. If the determined integrity validation value for the header and/or data portion does not match the integrity validation value received in the respective header and/or data portion of the PDU, then the received PDU fails. In some embodiments, stack 304 may indicate a pass/fail indication to iSCSI logic 302, for example, when it returns use of iSCSI PDU 308 to iSCSI logic 302. [0038] In some embodiments, instead of stack 304 comparing the determined integrity validation value with the integrity validation value included with a PDU, data mover 310 may include logic capable (or have access to such logic) to compare the determined integrity validation value with an integrity validation value received in a PDU. Stack 304 may instruct data mover 310, via a descriptor, to compare the determined integrity validation value with the integrity validation value received in the PDU. Stack 304 may identify to data mover 310 location(s) of an integrity validation value for the header and/or data portion of a PDU in received NPUs used to validate the integrity validation values determined by data mover 310. The identified locations of integrity validation values may be in iSCSI PDU 308. When instructed via a descriptor field, data mover 310 may compare determined integrity validation values for header and/or data portions of a PDU with respective header and/or data integrity validation values received in a PDU. Data mover 310 indicates a pass/fail indication to stack 304. Determination of whether data mover 310 has completed determination and/or validation of an integrity validation value can be made by polling or interrupts.
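The seed-based offload pattern of paragraphs [0036]-[0038] can be sketched as follows. This is only an illustration, not the disclosed implementation: the structure layout and function names (dm_descriptor, crc32c_accumulate, dm_process) are hypothetical, and the digest is shown as a CRC32C-style value on the assumption that an iSCSI-style 4-byte digest is being validated.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical descriptor a stack could hand to a data mover: copy a portion of a
 * PDU out of received NPU buffer(s), accumulate a digest from a seed kept in host
 * memory (e.g., seed value 307), and optionally compare against the value received
 * with the PDU. */
struct dm_descriptor {
    const uint8_t *src;        /* portion of the PDU in received NPU buffer(s)   */
    uint8_t       *dst;        /* destination in the posted iSCSI PDU buffer     */
    size_t         len;        /* bytes to copy and to cover with the digest     */
    uint32_t      *seed;       /* location of the integrity validation seed      */
    uint32_t       received;   /* integrity validation value received in the PDU */
    int            do_compare; /* 1: data mover compares; 0: stack compares      */
    int            pass;       /* pass/fail result written by the data mover     */
};

/* CRC32C-style accumulation (reflected polynomial 0x82F63B78); the exact seeding
 * and final inversion used by a given protocol are omitted from this sketch. */
static uint32_t crc32c_accumulate(uint32_t crc, const uint8_t *p, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        crc ^= p[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
    }
    return crc;
}

/* One work item: determine the value from the seed, write it back to the seed
 * location, copy the portion into the PDU buffer, and compare if instructed. */
static void dm_process(struct dm_descriptor *d)
{
    *d->seed = crc32c_accumulate(*d->seed, d->src, d->len);
    for (size_t i = 0; i < d->len; i++)
        d->dst[i] = d->src[i];   /* digest is determined prior to the write */
    if (d->do_compare)
        d->pass = (*d->seed == d->received);
}
```

As in the description, the determined value is written back to the seed location, so a digest can be accumulated across several copy operations before the final comparison is made.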
[0039] Data mover 310 may determine an integrity validation value using an integrity validation seed value in host memory and prior to writing header or data portions of the PDU to iSCSI PDU 308. [0040] Network component driver 306 may receive descriptors from network component 350 that indicate pointer(s) to locations within host memory that are capable to store one or more received network protocol units (NPU). Host memory 301 may store one or more NPU received by network component 350. [0041] In some embodiments, network component 350 may include a transceiver logic 352, data mover 354, and descriptor manager 356. Transceiver logic 352 may be capable to receive network protocol units through a physical medium and transmit network protocol units through a physical medium. The physical medium may be a coaxial cable, wire-line, fiber optic cable, or other signal propagation medium. Alternatively or in addition, transceiver logic 352 may be capable to receive and transmit signals using wireless techniques. For example, transceiver logic 352 may receive and transmit network protocol units in conformance with applicable protocols such as Ethernet as described in IEEE Standard 802.3 (2002) and revisions thereof, although other protocols may be used. Transceiver logic 352 may be used to perform media access control operations as prescribed by applicable protocols such as Ethernet, although other protocols may be used, as well as other protocol-related processing. [0042] Data mover 354 may transfer one or more NPU received by network component 350 to host memory 301. Descriptor manager 356 may receive one or more descriptor 316 from host 300. Descriptor manager 356 may modify one or more descriptor 316 to describe a storage location of a received network protocol unit in host memory 301. Descriptor manager 356 may provide the modified one or more descriptor 316 to host 300 to indicate storage of a NPU into memory 301. [0043] FIG. 4 depicts an example flow diagram that can be used to determine and/or validate one or more integrity validation value included with a received PDU. [0044] Block 402 may include allocating region(s) in memory to store header and/or data portions of a protocol data unit (PDU). The header and/or data portions of a PDU may have been received in multiple TCP compliant network protocol units. In some embodiments, the region to store the header portion may be set at the maximum size for a PDU under the iSCSI protocol. The final four bytes of the region to store the header portion may store an integrity validation value for the header portion. In some embodiments, a second region in memory to store the data portion of the PDU may be determined based on inspection of the header portion. The size of the data portion of the PDU may be indicated in the header. The size of the second region in memory to store the data portion of the PDU may be set to the indicated size of the data portion. In some embodiments, the second region is set to a maximum size of the data portion of the PDU. The final four bytes of the second region may store an integrity validation value for the data portion. [0045] Block 404 may include an iSCSI stack requesting validation of integrity validation value(s) associated with header and/or data portions of a PDU. Determination of integrity validation value(s) for header and/or data portions of a PDU may take place in logic accessible to a data mover. A stack logic may identify a location of a seed useful for determination of the integrity validation value.
The location can be used to accumulate the integrity validation value. [0046] Block 406 may include determining an integrity validation value prior to storing header and/or data portion(s) of a PDU into allocated region(s). For example, data mover logic may determine integrity validation value(s) for the header and/or data portions of a PDU before storing the header and/or data portions into allocated regions. [0047] Block 408 may include indicating pass/fail of a portion of the PDU over which integrity validation took place. In some embodiments, a stack logic or data mover can determine pass/fail status of a PDU based on comparing an integrity validation value received with a PDU with a determined integrity validation value. The header and/or data integrity validation values received with a PDU may be identified by the stack logic or indicated by an iSCSI logic as stored in one or more regions of memory. [0048] Embodiments of the present invention are not limited for use with iSCSI or TCP/IP and can be used in environments compliant with other protocols. [0049] Embodiments of the present invention may be implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a motherboard, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA). The term "logic" may include, by way of example, software or hardware and/or combinations of software and hardware. [0050] Embodiments of the present invention may be provided, for example, as a computer program product which may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines such as a computer, network of computers, or other electronic devices, may result in the one or more machines carrying out operations in accordance with embodiments of the present invention. A machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (Compact Disc-Read Only Memories), magneto-optical disks, ROMs (Read Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read Only Memories), EEPROMs (Electrically Erasable Programmable Read Only Memories), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing machine-executable instructions. [0051] Moreover, embodiments of the present invention may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of one or more data signals embodied in and/or modulated by a carrier wave or other propagation medium via a communication link (e.g., a modem and/or network connection). Accordingly, as used herein, a machine-readable medium may, but is not required to, comprise such a carrier wave. [0052] The drawings and the foregoing description gave examples of the present invention. Although depicted as a number of disparate functional items, those skilled in the art will appreciate that one or more of such elements may well be combined into single functional elements. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein.
Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of the present invention, however, is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of the invention is at least as broad as given by the following claims.
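As a closing illustration of the receive-side flow of FIG. 4 (blocks 402 through 408), the minimal sketch below lays out the two allocated regions and where the integrity validation values sit within them. All names are hypothetical, HEADER_REGION_SIZE is a placeholder rather than a protocol-defined constant, and host byte order is assumed when reading a digest; the only properties carried over from the description are that each digest occupies the final four bytes of its region and that the data length is indicated in the header.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define HEADER_REGION_SIZE 52u   /* placeholder; last 4 bytes hold the header digest */

struct pdu_regions {
    uint8_t *header;   /* block 402: region for the header portion                    */
    uint8_t *data;     /* block 402: region for the data portion (plus 4-byte digest) */
    size_t   data_len; /* size of the data portion, as indicated in the header        */
};

/* Hypothetical accessor: reads the data-portion length field out of the header. */
extern size_t read_data_length_from_header(const uint8_t *header);

/* Block 402: allocate the regions; the data region is sized from the header. */
static int allocate_regions(struct pdu_regions *r, const uint8_t *header)
{
    r->header   = malloc(HEADER_REGION_SIZE);
    r->data_len = read_data_length_from_header(header);
    r->data     = malloc(r->data_len + 4u);
    return r->header != NULL && r->data != NULL;
}

/* Blocks 404-408: the received integrity validation values are read from the final
 * four bytes of each region and handed to whichever logic performs the comparison. */
static uint32_t received_header_digest(const struct pdu_regions *r)
{
    uint32_t v;
    memcpy(&v, r->header + HEADER_REGION_SIZE - 4u, sizeof v);
    return v;
}

static uint32_t received_data_digest(const struct pdu_regions *r)
{
    uint32_t v;
    memcpy(&v, r->data + r->data_len, sizeof v);
    return v;
}
```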
An apparatus, method, machine-readable medium, and system are disclosed. In one embodiment the apparatus is a micro-page table engine that includes logic that is capable of receiving a memory page request for a page in global memory address space. The apparatus also includes a translation lookaside buffer (TLB) that is capable of storing one or more memory page address translations. Additionally, the apparatus has a page miss handler capable of performing a micro physical address lookup in a page miss handler tag table in response to the TLB not storing the memory page address translation for the page of memory referenced by the memory page request. The apparatus also includes memory management logic that is capable of managing the page miss handler tag table entries. The micro-page table engine allows the TLB to be an agent that determines whether data in a two-level memory hierarchy is in a hot region of memory or in a cold region of memory. When data is in the cold region of memory, the micro-page table engine fetches the data to the hot memory and a hot memory block is then pushed out to the cold memory area.
CLAIMS We claim: 1. A micro page table engine apparatus comprising: logic to receive a memory page request for a page in global memory address space; a translation lookaside buffer (TLB) to store one or more memory page address translations; a page miss handler logic to perform a micro physical address lookup in a page miss handler tag table in response to the TLB not storing the memory page address translation for the page of memory referenced by the memory page request; and a memory management logic to manage entries in the page miss handler tag table. 2. The apparatus of claim 1, wherein the page miss handler tag table is located in a hidden area of a system memory accessible by the micro page table engine. 3. The apparatus of claim 2, wherein the page miss handler tag table is fully associative and has a separate entry for a plurality of pages of the system memory. 4. The apparatus of claim 3, wherein micro physical address space is divided into a cold region and a hot region. 5. The apparatus of claim 4, wherein the memory management logic is operable to free a first memory page in the hot region by transferring data from the first memory page to a second memory page in the cold region. 6. The apparatus of claim 5, further comprising logic to transfer data from a third memory page in the cold region to the first memory page in the hot region in response to the received memory request targeting the third memory page in the cold region. 7. The apparatus of claim 6, wherein the micro physical address space spans two portions of physical memory, a physical memory A and a physical memory B, wherein the micro physical address space of physical memory A comprises the hot region and the micro physical address space of physical memory B comprises the cold region. 8. The apparatus of claim 3, wherein the micro physical address space comprises a shared portion of micro physical address space that is shared between a plurality of processors, and a plurality of non-shared portions of micro physical address space, wherein each of the plurality of processors receives one of the non-shared portions. 9. A method comprising: receiving a memory page request for a page in global memory address space; storing one or more memory page address translations in a translation lookaside buffer (TLB); performing a micro physical address lookup in a page miss handler tag table in response to the TLB not storing the memory page address translation for the page of memory referenced by the memory page request; and managing entries in the page miss handler tag table. 10. The method of claim 9, wherein the page miss handler tag table is located in a hidden area of a system memory accessible by the micro page table engine. 11. The method of claim 10, wherein the page miss handler tag table is fully associative and has a separate entry for every page of the system memory. 12. The method of claim 11, wherein micro physical address space is divided into a cold region and a hot region. 13. The method of claim 12, further comprising: freeing a first memory page in the hot region by transferring data from the first memory page to a second memory page in the cold region. 14. The method of claim 13, further comprising: transferring data from a third memory page in the cold region to the first memory page in the hot region in response to the received memory request targeting the third memory page in the cold region. 15. 
The method of claim 14, wherein the micro physical address space spans two portions of physical memory, a physical memory A and a physical memory B, wherein physical memory A and physical memory B comprise different memory technologies, and wherein the micro physical address space of physical memory A comprises the hot region and the micro physical address space of physical memory B comprises the cold region. 16. The method of claim 11, wherein the micro physical address space comprises a shared portion of micro physical address space that is shared between a plurality of processors, and a plurality of non-shared portions of micro physical address space, wherein each of the plurality of processors receives one of the non-shared portions. 17. A machine-readable medium having stored thereon instructions, which if executed by a machine causes the machine to perform a method comprising: receiving a memory page request for a page in global memory address space; storing one or more memory page address translations in a translation lookaside buffer (TLB); performing a micro physical address lookup in a page miss handler tag table in response to the TLB not storing the memory page address translation for the page of memory referenced by the memory page request; and managing entries in the page miss handler tag table. 18. The machine-readable medium of claim 17, wherein the page miss handler tag table is located in a hidden area of a system memory accessible by the micro page table engine. 19. The machine-readable medium of claim 18, wherein the page miss handler tag table is fully associative and has a separate entry for every page of the system memory. 20. The machine-readable medium of claim 19, wherein micro physical address space is divided into a cold region and a hot region. 21. The machine-readable medium of claim 20, wherein the performed method further comprises: freeing a first memory page in the hot region by transferring data from the first memory page to a second memory page in the cold region. 22. The machine-readable medium of claim 21, wherein the performed method further comprises: transferring data from a third memory page in the cold region to the first memory page in the hot region in response to the received memory request targeting the third memory page in the cold region. 23. The machine-readable medium of claim 22, wherein the micro physical address space spans two portions of physical memory, a physical memory A and a physical memory B, wherein physical memory A and physical memory B comprise different memory technologies, and wherein the micro physical address space of physical memory A comprises the hot region and the micro physical address space of physical memory B comprises the cold region. 24. The machine-readable medium of claim 19, wherein the micro physical address space comprises a shared portion of micro physical address space that is shared between a plurality of processors, and a plurality of non-shared portions of micro physical address space, wherein each of the plurality of processors receives one of the non-shared portions. 25. 
A system, comprising: a first memory, wherein the first memory includes a hidden portion for storing at least a page miss handler tag table; and a first processor, the first processor including: logic to receive a memory page request for a page in global memory address space; a translation lookaside buffer (TLB) to store one or more memory page address translations; a page miss handler logic to perform a micro physical address lookup in a page miss handler tag table in response to the TLB not storing the memory page address translation for the page of memory referenced by the memory page request; and a memory management logic to manage entries in the page miss handler tag table.
APPARATUS, METHOD, AND SYSTEM FOR IMPLEMENTING MICRO PAGE TABLES FIELD OF THE INVENTION The invention relates to memory page tables implemented in a computer system. BACKGROUND OF THE INVENTION A modern computer system incorporates complex memory management schemes to handle the sharing of system memory among components in the system. The computer system may include several multi-core processors, where each core (i.e., each hardware thread) requires access to memory. For example, the operating system running on the system as well as potentially a virtual machine monitor may both include logic to help manage the sharing of system memory among all the hardware threads. This memory management many times does not take into account the physical constraints of how the memory is actually laid out in the system. For example, there may be a memory power savings ability which allows ranks of memory to be powered down into low power states to save platform power. In another example, there may be multiple physical types of memory in the system (i.e., a heterogeneous memory system rather than a homogenous one). These varied physical implementations of the memory subsystem of a computer system may not benefit as much from standard memory management currently available through the means discussed. BRIEF DESCRIPTION OF THE DRAWINGS The present invention is illustrated by way of example and is not limited by the drawings, in which like references indicate similar elements, and in which: FIG. 1 describes one embodiment of a computer system implementing micro-page tables. FIG. 2 describes an additional embodiment of a computer system implementing micro-page tables. FIG. 3 describes another embodiment of a computer system implementing micro-page tables. FIG. 4 illustrates an embodiment of the page miss handler tag table. FIG. 5 illustrates an embodiment of a computer system implementing micro page tables for rank shedding. FIG. 6 illustrates an embodiment of a computer system implementing micro page tables utilized at least in part for rank shedding. FIG. 7 illustrates an embodiment of the page miss handler tag table when implemented at least in part for rank shedding. FIG. 8 is a flow diagram of an embodiment of a process used in handling a hot page miss. FIG. 9 illustrates an embodiment of some of the additional micro page table data structures utilized by the micro page table engine when reacting to a hot page miss. FIG. 10 is a flow diagram of an embodiment of a maintenance process to provide a number of memory pages with the capability of being utilized as hot pages during a cold-to-hot memory page data transfer. FIG. 11 illustrates an embodiment of some of the additional micro page table data structures utilized by the micro page table engine during the maintenance process. FIGS. 12A-12D illustrate several embodiments of flow diagrams that the micro page table engine processing logic may utilize to determine when to recover memory pages for use. FIG. 13 describes an embodiment of a micro page table managed two-level memory subsystem within a computer system. FIG. 14 describes an embodiment of a phase change memory-specific memory subsystem. DETAILED DESCRIPTION OF THE INVENTION Embodiments of an apparatus, method, system, and machine readable medium to implement micro page tables are described. 
A computer system may implement additional hardware and firmware logic to manage memory in an efficient manner through the use of micro page tables that map the software view of memory to the physical implementation of memory. A micro page table-implemented architecture may comprise certain logic in a processor core and uncore to manage an additional set of hidden data structures. These hidden data structures are transparent to the overlying operating system and applications running on the computer. Historically, when an operational CPU receives a memory request from the operating system, the request comprises a linear memory address. This linear memory address is not the actual physical address of the requested memory location, but rather an address utilized by an operating system in the computer system. To get to the actual physical address, logic within the CPU takes the linear address and performs a walk through the standard memory page tables to find the physical page. In many embodiments, a processor implementing micro page tables requires an extra step in the walk. What would normally be the physical address at the end of a page walk lookup process (a Platform Physical Address - PPA) is actually one level removed from the true physical address, which may now be referred to as a micro physical address (MPA). Micro page table engine logic implemented in the CPU utilizes a page miss handler tag table, which has the PPA as an index, to find the MPA address. By increasing the level of indirection by one additional level, logic within the micro page table engine can perform a great deal of memory management of physical memory completely unbeknownst to any other hardware or software in the system. What follows is an in-depth description of the conceptual layout of micro page tables in several different general purpose computer systems as well as several different implementations of micro page tables and how they may be utilized to provide additional benefits to computer memory management. Creating micro page tables may allow for a potentially transparent way of splitting memory between near (e.g., high performance/high power consumption memory) and far (e.g., low performance/low power consumption memory) portions of memory. This extra layer of memory management may allow an optimization of memory subsystem cost, memory power consumption, and memory performance. Micro Page Table General Implementation FIG. 1 describes one embodiment of a computer system implementing micro-page tables. Computer system 100 is shown. The computer system may be a desktop, server, workstation, laptop, handheld, television set-top, media center, game console, integrated system (such as in a car), or other type of computer system. In several embodiments the computer system 100 includes one or more central processing units (CPUs). Although in many embodiments there are potentially many CPUs, in the embodiment shown in FIG. 1 only CPU 102 is shown for clarity. CPU 102 may be an Intel® Corporation CPU or a CPU of another brand. CPU 102 includes one or more cores in different embodiments. CPU 102 is shown including a single core (Core 104), again, for sake of clarity. In many embodiments, core 104 includes internal functional blocks such as one or more execution units, retirement units, a set of general purpose and specific registers, etc. If core 104 is multi-threaded or hyper-threaded, then each hardware thread may be considered as a "logical" core as well. CPU 102 may also include one or more caches, such as cache 106. 
In many embodiments that are not shown, additional caches other than cache 106 are implemented so that multiple levels of cache exist between the execution units in the core and memory. In different embodiments cache 106 may be apportioned in different ways. Additionally, cache 106 may be one of many different sizes in different embodiments. For example, cache 106 may be an 8 megabyte (MB) cache, a 16 MB cache, etc. Additionally, in different embodiments the cache may be a direct mapped cache, a fully associative cache, a multi-way set-associative cache, or a cache with another type of mapping. In other embodiments that include multiple cores, cache 106 may include one large portion shared among all cores or may be divided into several separately functional slices (e.g., one slice for each core). Cache 106 may also include one portion shared among all cores and several other portions that are separate functional slices per core. In many embodiments, CPU 102 includes an integrated system memory controller 108 to provide an interface to communicate with system memory 110. In other embodiments that are not shown, memory controller 108 may be located in a discrete chip elsewhere in computer system 100. System memory 110 may comprise dynamic random access memory (DRAM), such as a type of double data rate (DDR) DRAM, non-volatile memory such as flash memory, phase change memory (PCM), or another type of memory technology. System memory 110 may be a general purpose memory to store data and instructions to be operated upon by CPU 102. Additionally, there may be other potential devices within computer system 100 that have the capability to read and write to the system memories, such as a direct memory access (DMA)-capable I/O (input/output) device. The link (i.e., bus, interconnect, etc.) that couples CPU 102 with system memory 110 may include one or more optical, metal, or other wires (i.e. lines) that are capable of transporting data, address, control, and clock information. I/O complex 112 enables communication between the CPU 102 and one or more I/O devices, such as I/O device 114. In the embodiment shown in FIG. 1, I/O device 114 is communicatively coupled to the I/O complex 112 and the rest of CPU 102 through I/O interface 116. I/O complex 112 may be an I/O hub interface that comprises several I/O host adapters and other I/O circuitry to provide access between CPU 102 and much of the I/O subsystem. For example, I/O Complex 112 may comprise a platform controller hub (PCH). Specifically, the I/O complex 112 can provide a general communication interface between a number of I/O devices coupled to one or more I/O interconnects (i.e. I/O busses) and the CPU 102. To accomplish this, the I/O hub complex may have at least one integrated I/O adapter for each I/O protocol utilized. There may be many I/O devices communicatively coupled to I/O interface 116, though only I/O device 114 is shown for clarity. I/O adapter 118, shown in FIG. 1 as an integrated I/O adapter within I/O complex 112, translates a host communication protocol utilized within the CPU 102 to a protocol compatible with a particular I/O device, such as I/O device 114. Some of the protocols that a given I/O adapter may translate include Peripheral Component Interconnect (PCI)-Express, Universal Serial Bus (USB), Serial Advanced Technology Attachment (SATA), Small Computer System Interface (SCSI), Redundant Array of Inexpensive Disks (RAID), and 1394 "Firewire," among others. 
Additionally, there may be one or more wireless protocol I/O adapters. Examples of wireless protocols are Bluetooth, IEEE 802.11-based wireless protocols, and cellular protocols, among others. In many embodiments, the BIOS 120 (basic input/output system) is coupled to the I/O complex 112. The BIOS is firmware stored in the computer system that contains instructions to initialize key computer system components during a boot process. The BIOS 120 generally is stored within a flash memory device, though other such storage devices designed to store information in a non-volatile manner may also be utilized to store the BIOS. Additionally, although not shown in FIG. 1, other firmware beyond simply the BIOS may also be stored in flash devices coupled to the CPU 102 such as extendible firmware. Apart from I/O interface 116, there may be other interfaces integrated into CPU 102 to provide a communicative interface with one or more links external to the CPU 102. High-speed I/O interface 122 may communicatively couple CPU 102 to one or more links to high speed I/O subsystems such as a graphics subsystem and/or a network subsystem. For example, high-speed I/O interface may be a single or multi-lane high-speed, bidirectional, serial interface such as PCI-Express. Inter-CPU high-speed interface 124 may provide an interface to a link coupled to one or more additional CPUs and allow inter-CPU communications to take place. For example, inter-CPU high-speed interface 124 may be a quick path interconnect (QPI) or other similar interface. In many embodiments, computer system 100 includes hardware and software logic capable of providing a virtualized environment with one or more guest operating systems (OSes) running in virtual machine (VM) environments. A virtual machine monitor (VMM) or hypervisor may be implemented in logic within the system to isolate each VM's operating environment (i.e., so that each VM and the OS and applications running within it are isolated from and unaware of other VMs present in the system). One of the areas required to create a seamless virtualized environment is virtualized I/O. I/O virtualization logic 126 provides the ability to virtualize and isolate I/O devices in the I/O subsystem, such as I/O device 114. In some embodiments, I/O virtualization logic includes Intel® VT-d architecture. Device transfers (Direct Memory Accesses - DMAs) and interrupts that are initiated by an I/O device are the key processes that require device isolation to a given VM. In many embodiments, I/O virtualization logic 126 may enable system software to create multiple DMA protection domains. A protection domain is an isolated environment to which a subset of the system memory is allocated. Depending on the software usage model, a DMA protection domain may represent memory allocated to a VM, or the DMA memory allocated by a guest-OS driver running in a VM or as part of the VMM or hypervisor itself. The I/O virtualization logic 126 may enable system software to assign one or more I/O devices to a protection domain. DMA isolation is achieved by restricting access to a protection domain's physical memory from I/O devices not assigned to it. For interrupt handling, I/O virtualization logic 126 may modify the interrupt-message format to be a DMA write request that includes a "message identifier" and not the actual interrupt attributes. The write request, like any DMA request, may specify the requester-id of the I/O device function generating the interrupt. 
Then, when the interrupt request is received by the I/O virtualization logic 126, the interrupt is remapped through a table structure of interrupts. Each entry in the interrupt-remapping table corresponds to a unique interrupt message identifier from a device, including any necessary interrupt attributes (e.g., destination CPU, vector, etc.). In the embodiment shown in FIG. 1, I/O virtualization logic 126 receives requests from one or more I/O devices through the I/O interface 116. The I/O virtualization logic 126 handles these requests, as described above, prior to allowing them to pass through to the memory controller 108. In many embodiments, micro page table (MPT) engine 128 logic is implemented in core 104 to provide a hardware managed memory address space using possibly hidden (from the OS and applications) page table structures. The MPT engine 128 virtualizes the memory address space as seen by OS and application software running on the computer system 100. Specifically, software, which may include an operating system running one or more applications/processes, assumes that it can ask directly for access to a physical address in system memory 110, but the MPT engine 128 provides a hidden level of indirection so physical memory may be managed separately from the layout of physical memory that the kernel in the operating system is aware of. This hardware-implemented possibly hidden level of indirection for memory manages all memory coherency and data movement between regions of system memory 110. The MPT engine 128 includes a modified translation lookaside buffer (TLB) 130 and page miss handler (PMH) logic 132 within each core, such as core 104. In many embodiments, the TLB 130 is generally considered a CPU cache that memory management hardware uses to improve linear address translation speed. The TLB 130 includes a fixed number of slots that contain page table entries, which generally map a linear address to a platform physical address (PPA). Furthermore, the TLB 130 is a content-addressable memory (CAM), in which the search key is the linear address and the search result is the platform physical address. If the requested linear address, from a core memory request 134, is present in the TLB, the CAM search yields a match, which is a TLB hit. Otherwise, if the requested address is not in the TLB the CAM search results in a TLB miss. If there is a TLB hit, the linear address to platform physical address translation has already taken place and the translation is stored in the TLB 130. If there is a TLB miss, the translation is accordingly not stored and MPT engine 128 logic is then required to perform a page walk using OS-established page tables to retrieve the platform physical address. Once the page walk has been completed, the correct platform physical address is found and the translation can then be stored into the TLB. In computer system 100, however, the platform physical address is not the actual physical memory address used for accessing the memory devices that comprise system memory 110. Rather, once the MPT engine 128 has received the platform physical address of the translation, the platform physical address is then used as an index (i.e., a search key) into a PMH tag table (TT) 136 stored in a hidden area of memory 138 to retrieve the true address into physical memory. 
In other embodiments that may be utilized throughout this document, the memory area 138, and potentially some or all of the structures that it stores, is not hidden but rather visible to an OS and software applications. Specifically, the PMH TT 136 has a number of entries. Each entry stores a micro physical address (MPA), which directly maps into final physical memory space. Thus, it takes at least two address translations to get the MPA from the linear address. The additional table lookup when using the MPT engine 128 is the table lookup utilized to translate the platform physical address into the MPA. In other words, a basic address translation in computer system 100 would be processed in the following order: linear address -> platform physical address -> MPA. The platform physical address to micro physical address translation (using the PMH TT 136) is a fully associative mapping. Thus, any entry in the PMH TT 136 is capable of storing any MPA. Once the MPT engine 128 has retrieved the MPA from the PMH-TT, it stores this translation in the TLB for subsequent accesses, where a translation from linear address to MPA can be done directly. In other embodiments, the PMH TT 136 may be implemented with more restricted set associativity. In many embodiments, the hidden area of memory 138 that stores all necessary MPT data structures, such as the PMH TT 136, is not visible to software (e.g., the OS) and is specifically utilized by the MPT engine 128 to manage physical memory through an extra layer of address indirection (i.e., an extra translation using a table in the hidden area 138). In embodiments where the platform is running in a virtualized environment and the memory address request is received from a guest OS (or more generally, a guest VM), there is a second additional address translation using another address translation table referred to as an extended page table (EPT). Specifically, the linear address received from the guest OS is first translated into a guest physical address (GPA) through a standard page walk using the OS page tables, the guest physical address is then translated into the platform physical address (PPA) by using the EPT, and finally the platform physical address is translated to the corresponding MPA by using the PMH TT 136. Memory management logic (MML) 142 resides in the CPU 102. In some embodiments, MML 142 is implemented in the core 104, outside of the core 104 (e.g., the uncore), or possibly even implemented across both the core and uncore. The uncore generally refers to logic/circuitry within a CPU that is not actually in the core. For example, certain I/O circuitry which allows communication between the cores of a given CPU and other CPUs may be located in the uncore of the CPU. MML 142 is a portion of the MPT engine 128 that assists in managing the MPT data structures, such as the PMH-TT 136. In many embodiments, which will be discussed in detail below, memory pages will be designated in certain ways and the data stored on those pages may need to be swapped out to other pages. The management of these memory page swap transfers may be handled in large part by the MML 142. In the embodiment shown in FIG. 1, MML 142 comprises hardware circuitry in the uncore, microcode stored in the uncore, or a combination of both. In other embodiments that are not shown, the circuitry and/or microcode comprising the MML 142 resides in the core. In yet other embodiments, the MML 142 may span the core and uncore. 
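A minimal sketch of that translation order (linear address -> platform physical address -> MPA, with the optional EPT step when running virtualized) is given below. The function names and the flat-array representation of the PMH-TT are hypothetical stand-ins used only to make the ordering of lookups concrete; they are not the disclosed hardware structures.

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t linear_addr_t;
typedef uint64_t gpa_t;   /* guest physical address    */
typedef uint64_t ppa_t;   /* platform physical address */
typedef uint64_t mpa_t;   /* micro physical address    */

#define PAGE_SHIFT 12u    /* assumes 4 KB pages for the sketch */

/* Hypothetical stand-ins for the page walks described above. */
extern ppa_t os_page_walk(linear_addr_t la);     /* LA  -> PPA (no virtualization)        */
extern gpa_t guest_page_walk(linear_addr_t la);  /* LA  -> GPA (guest OS page tables)     */
extern ppa_t ept_walk(gpa_t gpa);                /* GPA -> PPA (VMM extended page tables) */

/* PMH-TT modeled as a flat array indexed by PPA page number; each element holds
 * the corresponding MPA page number (state bits omitted in this sketch). */
extern mpa_t pmh_tt[];

static mpa_t translate(linear_addr_t la, bool virtualized)
{
    ppa_t ppa = virtualized ? ept_walk(guest_page_walk(la))
                            : os_page_walk(la);
    mpa_t mpa_page = pmh_tt[ppa >> PAGE_SHIFT];   /* the extra level of indirection */
    return (mpa_page << PAGE_SHIFT) | (la & ((1ull << PAGE_SHIFT) - 1ull));
}
```

On the hardware described above, the resulting linear-address-to-MPA translation would then be installed in the TLB so that subsequent accesses to the same page skip both walks.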
Furthermore, the PMH-TT 136 as well as the additional data structures stored in the hidden memory area 138 may be referred to in general as the "micro page tables" that are managed by the MPT engine 128. FIG. 1 describes a computer system with a single CPU with a single core for ease of explanation, though another, more complex, embodiment is illustrated below in FIG. 3. Returning to FIG. 1, CPU 102 is generally coupled to a system board (i.e., a motherboard). The motherboard, though not shown in FIG. 1, may include a socket designed to secure contact between each external power and communication pin originating from CPU 102 with other components in the computer system. This socket will essentially communicatively couple all circuitry integrated into the CPU 102 with components such as system memory (e.g., general storage 140), the I/O complex 112, and other additional components that are not shown. In many instances, the allocation of system resources such as memory may be based on a per-socket layout. Thus, CPU and socket may be utilized interchangeably when referring to system setup, unless otherwise noted. Other embodiments of computer systems implementing micro page tables are shown in FIG. 2 and FIG. 3. FIG. 2 describes an additional embodiment of a computer system implementing micro-page tables. The computer system illustrated in FIG. 2 is similar to the computer system shown in FIG. 1 except that the I/O complex and circuitry integrated into the I/O complex is an external I/O complex 200 discrete from CPU 102. In many embodiments, I/O virtualization logic 202 and I/O adapter 204 are both integrated into the discrete I/O complex 200. The functionality of these components may be the same as described above in FIG. 1 regarding I/O complex 112, I/O adapter 118, and I/O virtualization logic 126, only they are at a different location within the computer system. In yet other embodiments that are not shown in FIG. 1 and FIG. 2, the I/O Complex 200 may partially be implemented on the CPU 102 die and partially implemented external to the CPU 102. FIG. 3 describes another embodiment of a computer system implementing micro-page tables. The computer systems illustrated in FIG. 1 and FIG. 2 are limited to showing a single CPU with a single core. As stated, this is done for illustrative purposes. In many embodiments, a computer system implementing micro-page tables may be a computer with many cores and many CPUs. For example, FIG. 3 illustrates a computer system with four CPUs (A0, A1, B0, and B1). CPUs A0 and A1 reside within node A and CPUs B0 and B1 reside within node B. All four CPUs communicate with each other over a high-speed inter-CPU interface (I/F). Between nodes, the high-speed interface is routed through node controllers for each node (node controllers A and B). In the embodiment shown in FIG. 3, each CPU includes four distinct cores (cores A0a, A0b, A0c, and A0d are in CPU A0; cores A1a, A1b, A1c, and A1d are in CPU A1; cores B0a, B0b, B0c, and B0d are in CPU B0; and cores B1a, B1b, B1c, and B1d are in CPU B1). At least a portion of the logic for a MPT engine resides within each of the 16 cores. In many embodiments, there is a single global system memory 300 that every CPU has access to. Although not shown, there may be a hidden memory area (138 in FIG. 1) within system memory 300. Furthermore, additional components within each CPU (e.g., a cache) are not shown in FIG. 3 for the sake of clarity of the figure. FIG. 
4 illustrates an embodiment of the page miss handler tag table. In many embodiments, the PMH-TT 400 stores an entry 402 for each page in all of physical memory 404. As shown in FIG. 4, physical memory 404 many times is made up of multiple ranks (such as ranks 0-7). Each PMH-TT entry (e.g., 402) includes an MPA that references a specific page in one of the ranks of physical memory 404. In some embodiments, the entry 402 also includes state information to store a current state of the memory at the MPA. For example, FIG. 4 shows some examples of details of the entry 402. In FIG. 4, the state includes 3 bits and certain combinations of these bits may show the entry in different states, such as the listed states of dirty, victim, reclaim, lock, fetching, zero, etc. Many of these example states will be explained in greater detail below with regard to different focused MPT embodiments. In other embodiments, the state may not be a simple encoding where 3 bits are capable of signifying 8 states, but rather there may be one or more separate bits for each of the states (though this embodiment is not shown). Returning to the page look-up illustration shown in FIG. 4, an address request arrives at the MPT engine (128 in FIG. 1) logic. For example, the linear address may be an address request from an OS. If there is no virtualization on the platform, the page walk will produce a platform physical address (PPA), which was discussed above. The PPA may be used as an index into the PMH-TT 400 to retrieve the relevant entry containing the micro physical address (MPA), which refers directly to a location in physical memory (e.g., a physical memory page address). On the other hand, if there is a level of virtualization on the platform, which may include one or more virtual machines as well as a virtual machine manager (VMM), an intermediary page walk through a set of VMM-maintained page tables is additionally taken. Specifically, in this embodiment, the page walk through the OS maintained page tables 406 refers to walking through the page tables known to a VM that the OS supplying the linear address is running on. In this case, the address produced from the linear address page walk refers to a guest OS physical address (GPA). The GPA does not index directly into the PMH-TT because there is a VMM layer below the guest OS that manages its own set of page tables unbeknownst to the guest OS running on one of possibly many virtual machines present in the platform. Generally, VMM page tables are referred to as extended page tables (EPT) and the GPA would then be used as an index for a page walk through the EPT to produce the PPA to index into the PMH-TT 400. This additional page walk step is generally standard when dealing with a virtualized platform. In either case, once the PPA has been produced, it is utilized as the index into the PMH-TT 400 to find the entry containing the MPA that directly references physical memory 404. To allow memory to be virtualized for all software in the manner described above, the memory and the data structures supporting the MPT engine need to be initialized. In many embodiments, during a boot of the computer system, the BIOS provides the computer with a set of instructions to initialize many of the components present in the system. In many computer systems, an important aspect of the BIOS is the Memory Reference Code (MRC). 
The MRC relates to memory initialization and includes information about memory settings, frequency, timing, driving and detailed operations of the memory controller. To support MPT, the BIOS in the computer system may be updated to publish CPU-socket specific memory address regions. This may include publishing the physical address range scope of each memory rank present in the system as well as initially hiding a portion of the system memory (hidden memory area 138 in FIG. 1) from all software to implement the PMH-TT 400 as well as other MPT data structures, which will be detailed below. For the PMH-TT 400 specifically, an identity map may be utilized for initialization purposes. In many embodiments, the mapping of the PPA to the MPA is a fully associative mapping. Thus, a given PPA may index into exactly one location within the entire PMH-TT 400, but the entry at that location may map to any arbitrary MPA location in memory. This extra level of indirection is hidden in hardware so a guest OS, a VM, a VMM, a hypervisor, or potentially any other software construct operating on the platform may be completely unaware of the extra layer of translation. In other words, from a software point of view, the PPA is thought to be the actual physical address indexing a page of memory. Instead, when the PMH-TT 400 is implemented, the MPA is the true physical address into memory. The additional indirection layer can allow for many applications that would otherwise be considerably less efficient or possibly unachievable when limited to software solutions. Micro Page Table Rank Shedding FIG. 5 illustrates an embodiment of a computer system implementing micro page tables for rank shedding. Rank shedding involves allowing ranks of memory to be prioritized in usage. There are multiple reasons why rank prioritizing may be implemented. For example, in many embodiments, it is not always efficient from a power perspective to utilize all system memory ranks in a computer system simultaneously. For example, in a server, there may be many ranks of memory that are available but the server may not currently be in use or at least not in high use. In this scenario, the workload present on the server may not show a performance degradation if a subset of the memory ranks in the system is used. Thus, it may be achievable to prioritize the usage of certain ranks over others and, in lower usage periods, certain low priority ranks present in the memory subsystem can then be disengaged from active use. Disengaging a rank from active use may allow a memory subsystem power management scheme to put the non-used rank into a lower power state for a length of time, thereby lowering the power consumption of the entire computer system. In other embodiments, an interleaved memory access architecture is present instead of a NUMA architecture. Thus, generally there are no constraints on the configuration of memory and configurations will vary according to implementation. For example, in some embodiments, there may be multiple CPUs sharing a single DIMM. In other embodiments, there may be multiple DIMMs in use for each CPU. In yet other embodiments, there may be a 1-to-1 correlation between the number of DIMMs and the number of CPUs. Turning to FIG. 5, an embodiment of a multi-CPU socket computer system implementing rank shedding is shown. CPU A 500 is coupled to CPU B 502 across point-to-point link 504. CPU A 500 is also coupled directly to memory A 506 and indirectly to memory B 508 (through link 504 and CPU B 502). 
CPU B 502 is coupled directly to memory B 508 and indirectly to memory A 506 (through link 504 and CPU A 500). When utilizing memory for a workload, it generally is more efficient if CPU A 500 were to utilize memory A 506 and if CPU B 502 were to utilize memory B 508. This is due to locality of the memory per CPU in the NUMA environment. Thus, in many embodiments, CPU A may prioritize usage of memory ranks located in memory A 506 and CPU B may prioritize usage of memory ranks located in memory B 508. In many embodiments, rank shedding may assist in implementing the prioritization of ranks per CPU (this can also be referred to as "per socket" since each CPU is coupled to the system board by its own socket). The prioritization of ranks per socket is accomplished by utilizing an MPT engine in each CPU to dynamically prioritize the ranks used by that CPU. MPA address space spans all sockets. Thus, the range of address space that all sockets see is the same, but different portions of the address space may be prioritized per socket. For example, in the two socket system shown in FIG. 5, CPU A has the CPU A/Socket 1 MPA address space as shown below it. This shows that the range from address 0 to address 2 gigabytes (GB)-1, socket 1's lower NUMA region, is in the hot region for socket 1. CPU B has the CPU B/Socket 2 MPA address space that shows addresses 2GB to 4GB-1, socket 2's lower NUMA region, is in the hot region for socket 2. Specifically, the first 4GB of memory is the lower NUMA region. The first two gigabytes are mapped to socket 1 and the second two gigabytes are mapped to socket 2. In the embodiment shown in FIG. 5, there is additional address space above 4GB. For an even split of address space, the upper NUMA region may be divided equally among all sockets. Furthermore, the upper NUMA region would equal Top of Memory (ToM) minus Top of lower NUMA region (e.g., 4GB). In the embodiment shown in FIG. 5, there is 16GB of total address space and the lower NUMA region is 4GB. Thus the top 12GB (16GB-4GB) is in the upper NUMA region. Then the 12GB is divided by two to distribute the upper NUMA region among the two sockets, so each socket has 6GB of addressable upper NUMA region address space. Therefore, socket 1 may be allocated address space from 4GB up to 10GB-1 and socket 2 may be allocated addresses from 10GB to the top of memory, which in this example is 16GB-1. The size of the hot and cold regions may differ over time based on usage conditions. In the example shown in FIG. 5, at the time the snapshot of memory is made, the hot region comprises half of the total address space and the cold region comprises the other half. Thus, the upper NUMA region for socket 1 has addresses 4GB to 6GB-1 in the hot region and 6GB to 10GB-1 in the cold region and socket 2 has addresses 10GB to 12GB-1 in the hot region and 12GB to 16GB-1 in the cold region. It should be appreciated that the size of each respective NUMA region, the size of memory, and the number of sockets may change in different embodiments. Additionally, since the hot and cold regions of memory can potentially be dynamically adjustable, the sizes of each socket's hot and cold regions are also variable. Although there are hot and cold regions for each socket, the variability of the sizes per socket may be symmetric or asymmetric. For example, in certain situations the sizes of the hot and cold regions for each socket are always the same across sockets. 
Thus, if the hot region expands from 25% to 50% of addressable memory, this change may be done for all sockets simultaneously (symmetric treatment across sockets). On the other hand, in other situations, the size of the hot and cold regions for each socket are separately maintained (asymmetric treatment across sockets). For example, if socket 1 has a heavy workload and socket 2 has a light workload, the hot region of socket 1 might span a higher percentage of socket 1's addressable memory space than the respective hot region for socket 2's addressable memory space. Generally, the identity map for the PMH-TT is stored in a hot region of memory address space and the MPT data structures themselves, including the PMH-TT, are also stored in a hot region of memory address space. In many embodiments, the data structures span the hot region for each of the two sockets, as shown in FIG. 5. FIG. 6 illustrates an embodiment of a computer system implementing micro page tables utilized at least in part for rank shedding. The computer system in FIG. 6 is similar to the computer system illustrated in FIG. 1. All major components are utilized in a similar manner and may be referenced in the description above. FIG. 6 adds rank shedding implementation-specific partitions in the system memory 110. An active memory 600 partition and an inactive memory partition 602 are shown. The active memory 600 partition includes those ranks of memory that are presently utilized by CPU 102 (i.e., the hot region of address space), whereas the inactive memory 602 partition includes those ranks of memory that are presently not utilized by CPU 102 (i.e., the cold region of address space). As more ranks are brought into use, the active memory 600 partition will increase, and as fewer ranks are used, the inactive memory partition 602 will increase. The granularity of potential change in size between the active and inactive memory portions is implementation specific. For example, if rank shedding is utilized for power management, the granularity of change in active vs. inactive ranks may mirror the number of ranks that can be separately managed for power. If there are 16 ranks in system memory and these ranks are coupled to power planes on a system board in groups of 4, it may mean that at any given time, 0, 4, 8, 12, or 16 ranks may be active and vice versa for the inactive ranks. On the other hand, if there is a finer granularity for power supply within system memory, there may be more options. If each rank is able to be controlled with separate power then there may be 16 different combinations of active versus inactive memory portions. It may also be possible to focus the granularity on a memory module basis, which would allow all the memory coupled to the system board through a single module (e.g., a DIMM) to be power managed as a group. On the other hand, active and inactive ranks may be managed per CPU on the basis of performance in a NUMA-based system. For example, returning to FIG. 5, in a two CPU system, memory A 506 is local to CPU A 500, so the address space representing memory A 506 may initially be designated as active memory for CPU A 500. In contrast, memory B 508 is remote to CPU A 500, so the address space representing memory B 508 may initially be designated as inactive memory for CPU A 500. However, if a workload requires increased memory usage for CPU A 500, memory B 508 may have a portion of its designation switched to active memory. 
The same preference for local memory address space may hold true for CPU B 502 as well, with memory B 508 initially used as active memory and memory A 506 as inactive memory. Turning now to FIG. 7, this figure illustrates an embodiment of the page miss handler tag table when implemented at least in part for rank shedding. The page walking process to get from the initial input linear address to the MPA is similar to the process described and illustrated in FIG. 4. Major steps in the process are completed in a similar manner and may be referenced in the description above. FIG. 7 includes additional rank shedding implementation details. Physical memory, shown at right in FIG. 7 (i.e., ranks 0-7), comprises a micro page address (MPA) space cold region 500 and an MPA space hot region 502. The cold region may be considered the inactive memory region and the hot region may be considered the active memory region. Thus, ranks 0 and 1 are presently active and ranks 2-7 are presently inactive. One possible state utilized in the state information shown in detail at 702 is a "cold" bit. This bit can indicate, for each entry in the PMH-TT 704, whether that entry is indexing into a cold region of MPA space or a hot region of MPA space. During initialization of the system, each entry in the PMH-TT 704 corresponding to ranks in the cold region can be initially set as a cold MPA (cMPA) using the cold bit (e.g., cold bit = "1"), and the rest of the entries may be set as hot (e.g., cold bit = "0" or present bit = "1"). When the system is first booted, there may be an initial state for each rank as to whether that rank (and all MPA entries it comprises) is within the cold region or the hot region of overall system memory. As usage patterns for memory change during operation (e.g., heavy memory workload, light memory workload, idle, etc.), the MPT engine logic may decide to shrink or expand the hot region of memory 702. Thus, the hot and cold regions of memory may be dynamically adjustable during system operation. This could potentially be based on a performance policy (i.e., as performance degrades, the hot region expands to compensate) or a power policy (i.e., during system idle, more ranks are added to the cold region so that a low power mode may potentially be applied to at least a part of the memory subsystem). Apart from these scenarios, there are many other potential uses for shedding ranks of a memory. The hot and cold MPA translations have no bearing on whether the system is using virtualization and requires an additional GPA-to-PPA page walk step; the figure is drawn that way merely to illustrate a single example. In many embodiments, only hot page translations are stored in the TLB. Thus, when there is a request to access a physical page at an MPA address that is in a cold region of memory, a rank shedding miss takes place. Because hot pages are specifically utilized for general access by a CPU or other bus master device, the data stored in the requested page is then moved from the cold page to a hot page of memory. In many embodiments, the hot region of memory always maintains at least a certain percentage of the total hot space as free memory pages to be utilized in a data swap when data from a cold page needs to be accessed. This percentage of free hot pages may range from a very small percentage of total hot pages (e.g., one free hot page) up to a significant percentage of the entire range of hot pages (e.g., 10% of the total number of hot pages are free). 
FIG. 8 is a flow diagram of an embodiment of a process used in handling a hot page miss, and FIG. 9 illustrates an embodiment of some of the additional micro page table data structures utilized by the MPT engine when reacting to a hot page miss. To be clear, a hot page miss is simply a memory page request to a physical page that is in a cold region of memory. FIG. 8 is related to FIG. 9. Specifically, the process illustrated in FIG. 8 shows how the MPT engine logic handles a hot page miss by utilizing the data structures shown in FIG. 9. The process illustrated in FIG. 8 may be performed by processing logic related to the MPT engine. This processing logic may comprise hardware, software, firmware, or any combination of those three forms of logic. Furthermore, throughout FIG. 8 and FIG. 9, there are small circles designated with letters (e.g., A, B, C, etc.). These letter-designated circles are items of interest as to what data structures and data transfers are utilized by processing logic while it is performing the block-by-block process flow in FIG. 8. Turning now to FIG. 8, the process begins by processing logic determining whether a received memory request is targeting a page of memory that is not in the TLB (processing block 800). As described above, the TLB, in many embodiments, does not store cold page translations; thus, if there were a TLB hit, in these embodiments it would be inherent that the requested memory page already resides within the hot region of memory. If there is a TLB hit then this process is finished, since processing logic would not be dealing with a hot page miss. Processing block 800 is utilized because, although processing block 802 will determine whether the requested memory page is in the hot or cold region, processing block 802 requires additional lookup time which would not be necessary if there were a TLB hit. Continuing with the process (assuming there is a TLB miss), processing logic next determines specifically whether the requested memory page is in a hot or cold region by checking the status information associated with the page in the PMH-TT (900 in FIG. 9) (processing block 802). This determination requires a lookup of the physical page of memory in the PMH-TT 900, using the PPA of the requested memory page. The lookup is described in detail with regard to FIG. 7 above. Specifically, processing logic utilizes the PPA of the memory page request to index into the PMH-TT 900 to find the specific entry. The specific entry includes the MPA as well as the state information for that particular memory page. In some embodiments, the state information includes a present bit (P), which may indicate that the memory page is in the hot region if the P bit is set (P=1) or that the memory page is in the cold region if the P bit is cleared (P=0). In many other embodiments, a cold bit (C) is additionally utilized, which may indicate that the memory page is in the cold region if the C bit is set (C=1) or that the memory page is in the hot region if the C bit is cleared (C=0). For example, as shown in the process of FIG. 8, processing logic determines if P=0. Processing logic determines this by looking within the PMH-TT 900 at the PPA index location (item A). If the P bit is cleared (P=0), then processing logic locks the PMH-TT 900 at the PPA index location (processing block 804). The PMH-TT 900 needs to be locked to allow processing logic to initiate a cold-to-hot memory page transfer. 
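A minimal, hypothetical sketch of the lookup just described (processing blocks 800-804) is given below. The field names, the in-memory representation of a PMH-TT entry, and the dictionary-based tables are assumptions made for illustration only, not the actual hardware layout; the V, D, and R bits are included here only because they are used by the housekeeping flows described later.

```python
from dataclasses import dataclass

@dataclass
class PmhTtEntry:
    mpa: int                # micro page address this PPA currently maps to
    present: bool = False   # P bit: page is in the hot region
    cold: bool = False      # C bit: page is in the cold region
    fetching: bool = False  # F bit: entry locked for a cold-to-hot transfer
    victim: bool = False    # V bit: used by the housekeeping flow described later
    dirty: bool = False     # D bit: used by the housekeeping flow described later
    reclaim: bool = False   # R bit: used by the housekeeping flow described later

def handle_request(pmh_tt, tlb, ppa):
    """Blocks 800-804: on a TLB miss, check the PMH-TT state bits and lock
    the entry if the requested page is in the cold region."""
    if ppa in tlb:              # block 800: a TLB hit implies a hot page
        return tlb[ppa]
    entry = pmh_tt[ppa]         # block 802: the PPA indexes the PMH-TT
    if entry.present:           # already hot, just return the MPA
        return entry.mpa
    entry.fetching = True       # block 804: lock the entry (F=1)
    return None                 # caller must run the cold-to-hot transfer
```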
If the PMH-TT 900 is not locked at the PPA index location, corruption of the cold-to-hot transition may ensue, for example due to another entity simultaneously attempting a similar access. In many embodiments, the lock can be accomplished by setting the state information bits at the PPA index location (item B) in the PMH-TT 900 to "Fetching" (F=1). Then, processing logic fetches the cMPA from the PMH-TT 900 (processing block 806). In many embodiments, this fetching procedure fetches the data from the cMPA memory location (item C), and the data may be placed within buffer logic for temporary storage. Next, processing logic loads the cMPA physical memory page address into a cold free list (processing block 808); in other words, the cMPA address in the PMH-TT 900 is copied to the cold free list data structure (902 in FIG. 9), illustrated by data transfer D in FIG. 9. The cold free list stores physical memory page addresses that were in the cold region of memory but have been the target of a memory request, such that the data in the page has required a transfer to a hot memory region page. Once the cold region page is no longer required to hold the data (because a copy of the data has been placed into a buffer), the cold page of memory is free to be overwritten, and its address is therefore placed into the cold free list. Processing logic then fetches a free hMPA physical memory page address (item E in FIG. 9) from the hot free list data structure (904 in FIG. 9) (processing block 810). The hot free list includes hot region memory pages that are available to be written to for this process. FIG. 10 and FIG. 11 below describe how the hot free list 904 is populated. Processing logic then copies the data from the cMPA memory page to the hMPA memory page (processing block 812 and item F in FIG. 9). Once the cold-to-hot memory page data transfer has taken place, processing logic updates and unlocks the PMH-TT 900 (processing block 814 and item G in FIG. 9). In many embodiments, the PMH-TT 900 update sets the state information to present (P=1) and not fetching (F=0) for the memory page at the hMPA address to unlock and update the page. Additionally, prior to this process, the hot region memory page at the hMPA address was in the hot free list because it was available to be used for a cold-to-hot transfer; now that it has been used for the transfer, the page is in use and no longer free. Thus, processing logic removes the hMPA address from the hot free list 904. Turning now to FIG. 10 and FIG. 11, FIG. 10 is a flow diagram of an embodiment of a maintenance process to provide a number of memory pages with the capability of being utilized as hot pages during a cold-to-hot memory page data transfer, and FIG. 11 illustrates an embodiment of some of the additional micro page table data structures utilized by the MPT engine during the maintenance process. FIG. 10 is related to FIG. 11. Specifically, the process illustrated in FIG. 10 shows how the MPT engine logic proceeds through the maintenance methodology by utilizing the data structures shown in FIG. 11. The process illustrated in FIG. 10 may be performed by processing logic related to the MPT engine. This processing logic may comprise hardware, software, firmware, or any combination of those three forms of logic. Furthermore, similar to FIG. 8 and FIG. 9, throughout FIG. 10 and FIG. 11 there are small circles designated with letters (e.g., A, B, C, etc.). 
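Before turning to the maintenance process of FIG. 10, the cold-to-hot transfer just described (processing blocks 806-814) can be summarized in a short sketch that continues the one above; copy_page and the free-list structures are illustrative stand-ins, not the actual MPT engine interfaces.

```python
def cold_to_hot_transfer(pmh_tt, ppa, cold_free_list, hot_free_list, copy_page):
    """Blocks 806-814: move the data for a cold page into a free hot page and
    remap the PMH-TT entry, which was locked (F=1) by the caller."""
    entry = pmh_tt[ppa]
    cmpa = entry.mpa                # block 806: fetch the cMPA from the PMH-TT
    cold_free_list.append(cmpa)     # block 808: the cold page can be reused later
    hmpa = hot_free_list.pop()      # block 810: take a free hot page (item E)
    copy_page(src=cmpa, dst=hmpa)   # block 812: cold-to-hot data copy (item F)
    entry.mpa = hmpa                # block 814: update and unlock the entry
    entry.present, entry.cold, entry.fetching = True, False, False
    return hmpa
```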
The letter-designated circles in FIG. 10 and FIG. 11 are items of interest as to what data structures and data transfers are utilized by processing logic while it is performing the block-by-block process flow in FIG. 10. The process in FIG. 10 begins by processing logic allocating an hMPA for a local descriptor table (LDT) on a hot page miss (processing block 1000 and item A). The LDT (1102 in FIG. 11) contains a subset of the entries in the PMH-TT (1100 in FIG. 11). The specific subset of entries in the LDT comprises those that are actually in use (i.e., the "Present" or "Hot" bit is set). Generally, the LDT 1102 allows a quick lookup for logic to determine if an entry is present. Without the LDT 1102, the PMH-TT would need to be searched to determine whether the entry in question is present, because the PMH-TT references all memory address space locations, which means that in many embodiments a majority of the PMH-TT entries will be empty (i.e., the "Present" or "Hot" bit is cleared). The hot page miss, as described in detail above with regard to FIG. 8 and FIG. 9, is determined once the MPA address from the tag table lookup is found with P=0 in the state information (and/or C=1) at the PPA index location in the PMH-TT (processing block 1000 and item A in both FIG. 10 and FIG. 11). The data at the hMPA that is designated to transition from a hot page to a cold page in the PMH-TT (1100 in FIG. 11) may be allocated on a hot page miss (described above in regard to FIG. 8 and FIG. 9). The LDT (1102 in FIG. 11) is indexed using MPA addresses. Specifically, the allocated hMPA is used as an index into the LDT 1102 to find the PPA. Thus, while the PMH-TT 1100 stores physical MPA addresses and is indexed using a PPA address, the LDT 1102 is the opposite because it stores PPA addresses and is indexed using a physical MPA address. In different embodiments, the allocation of the slot in the LDT 1102 may happen on each hot page miss or in another manner, such as several slots being allocated at once after a certain number of hot page misses. At a certain time after the PPA memory location is stored in the LDT 1102, the MPT engine processing logic will select one or more PPA memory locations in the LDT 1102 for victimization (processing block 1002 and item B in FIG. 10 and FIG. 11). Rank shedding victimization is the process of moving data stored in a hot page of memory into a cold page of memory so the hot page of memory may be freed for a future required cold-to-hot page transfer. The selection process for victimization can follow one of several embodiments. For example, the MPT engine processing logic may track how long it has been since each hot page was accessed by the CPU or another entity, and based on that data, victims may include those hot pages that have been inactive for the longest time. In another example, the MPT engine processing logic may track the locality of data and keep data in hot pages that are actively being utilized and that are clustered together in relatively adjacent physical memory page locations, while victimizing other hot pages that are not near hot page clusters. Although not discussed, many other types of victimization algorithms may be performed by the MPT engine processing logic. In many embodiments, LDT 1102 slots are selected for victimization by processing logic setting the state information victim bit of the page to V=1. 
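One possible victim-selection policy (least-recently-accessed hot pages, which is only one of the strategies mentioned above) together with the MPA-indexed LDT could be sketched as follows; the access-time bookkeeping and the list structures are assumptions for illustration, continuing the earlier sketches.

```python
def select_victims(ldt, last_access, num_victims, victim_list, pmh_tt):
    """Pick the least-recently-accessed hot pages from the LDT (which maps
    hMPA -> PPA) and mark them as victims (V=1) in the PMH-TT."""
    # Sort in-use hot pages so that the least recently touched come first.
    candidates = sorted(ldt.keys(), key=lambda hmpa: last_access.get(hmpa, 0))
    for hmpa in candidates[:num_victims]:
        ppa = ldt[hmpa]                 # LDT lookup: MPA index -> PPA
        pmh_tt[ppa].victim = True       # set V=1 in the entry's state information
        victim_list.append(hmpa)
    return victim_list
```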
When V is set to "1", this may either prompt logic to transfer the hMPA to a victim list 1104 structure for storage, or processing logic may initiate the transfer to the victim list 1104 by other means and set the victim bit to "1" after the transfer is complete. The victim list 1104 is used to store a set of victimized physical memory page addresses designated for hot-to-cold region transitions. Though designated as victims, MPT engine processing logic can reclaim any given victim page in the victim list 1104 by clearing the victim bit (V=0) and setting the reclaim bit (R=1) in the PMH-TT 1100 and removing the victim list 1104 entry for the page. The process of reclaiming a victim page allows a CPU or an I/O DMA access to an active page with a translation currently in the TLB. This allows a page to be claimed without going through a TLB miss lookup process. The victims may sit in the victim list 1104 as long as there is room and/or there is a sufficient number of free hot pages available. At a certain point, the victim list 1104 may grow to the point that the number of victims in the list surpasses a threshold value, or the number of free hot pages may fall below another threshold value. Once one of these thresholds has been passed, MPT engine processing logic may then process a significant number of victim page data transfers to cold space. This hot-to-cold data transfer is done because data in a hot region memory page must first be transferred to a cold region memory page before that hot region memory page can be deemed "free." Thus, returning to FIG. 10, MPT engine processing logic will select one or more victims from the victim list 1104 for a data transfer move to the cold region of memory (processing block 1004 and data transfer item C in FIG. 11). A TLB shootdown is required when moving an entry from the victim list 1104 to the dirty list 1106. Therefore, it is generally more efficient if the TLB shootdown processes a group of pages together rather than each page individually, to limit the times at which the system is slowed by the TLB shootdown process. TLB shootdowns require inter-processor interrupts to flush the affected TLB lookup translations. After the TLB shootdown for a memory page, the entry for that MPA physical memory page in the victim list 1104 can be transferred into the dirty list 1106. This transfer also involves modifying the state information for each entry to clear the victim bit (V=0) and set the dirty bit (D=1). As discussed above, in many embodiments only hot page translations are cached in the TLB; thus, although hot-to-cold region data transfers from an hMPA to a cMPA require a TLB shootdown, a cold-to-hot move does not require a TLB shootdown. These TLB shootdown-processed page entries that are stored in the dirty list can also be reclaimed by a CPU or an I/O DMA access. Again, the reclaiming process simply requires removing the page entry in the dirty list and updating the PMH-TT 1100 entry for that particular page to clear the dirty bit (D=0) and set the reclaim bit (R=1). Thus, both victim and dirty list entries are capable of being reclaimed. An entry that is reclaimed refers to an entry that can be used as if it were never victimized, once the state bits are updated. At a given time, each entry in the dirty list 1106 needs to have its data copied from the hMPA memory page location to a selected free cMPA memory page location from the cold free list 1108 (processing block 1006 and item D in FIG. 11). 
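A hypothetical sketch of the batched victim-to-dirty transition and of reclaiming is given below, continuing the structures assumed earlier; tlb_shootdown stands in for the inter-processor-interrupt mechanism, which this sketch does not model, and the reclaim handling follows the list-removal variant described in the text.

```python
def flush_victims_to_dirty(victims, ldt, pmh_tt, dirty_list, tlb_shootdown):
    """Process a group of victims with a single TLB shootdown, then move their
    entries from the victim list to the dirty list (V=0, D=1)."""
    tlb_shootdown(victims)              # one shootdown for the whole group
    for hmpa in victims:
        entry = pmh_tt[ldt[hmpa]]
        entry.victim, entry.dirty = False, True
        dirty_list.append(hmpa)
    return dirty_list

def reclaim(hmpa, ldt, pmh_tt, victim_list, dirty_list):
    """Reclaim a victimized or dirty page so it can be used as if it were never
    victimized: clear V/D, set R, and drop the page from either list."""
    entry = pmh_tt[ldt[hmpa]]
    entry.victim = entry.dirty = False
    entry.reclaim = True
    for lst in (victim_list, dirty_list):
        if hmpa in lst:
            lst.remove(hmpa)
```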
As mentioned above, copying each dirty list entry creates a copy of the data that had been stored in the hMPA page in a free cMPA page, allowing the hMPA page to be freed for future use. Once the data transfer to the cold region of memory takes place, processing logic updates the PMH-TT 1100 with the new cMPA information (processing block 1008 and item E in FIG. 11). The PMH-TT 1100 update utilizes the LDT 1102 to set the index of the utilized cMPA to the copied hMPA's PPA index. This essentially remaps the PMH-TT 1100 entry so that a lookup of that PPA address will now point to the utilized cMPA that holds the copy of the data, rather than pointing to the old hMPA. Finally, MPT engine processing logic will update the hot free list 1110 with the hMPA information (processing block 1010 and item F in FIG. 11). With the data that was stored in the memory page at the hMPA address now safely stored in the new cMPA page, the hMPA page is free to be used as a free hot memory region page, and the hMPA entry is stored in the hot free list 1110 for this reason. Hot free list 1110 pages are no longer in the TLB. This allows for a reduction in the number of TLB shootdowns because a needed hot page grabbed from the hot free list does not require an additional TLB shootdown. Rather, the TLB shootdown process takes place between the victim and dirty lists, where large groups of page entries can be processed during a single shootdown. The reclaim bit feature, as discussed above, allows discarding victim and dirty list page entries that have that bit set. The copy process block (block 1006 and item D in FIG. 11) does not copy reclaimed pages. In some embodiments, the MPT engine discards pages that are reclaimed at the copy process block. In other embodiments, the reclaimed pages are removed from the victim and dirty lists after the reclaiming takes place. Although the PMH-TT is a global structure, generally the other structures shown in FIG. 9 and FIG. 11 are localized per socket. Therefore, generally, main memory is allocated per socket and the hidden area of memory (138 in FIG. 1) would include the global PMH-TT and the local additional structures (e.g., LDT, victim list, hot free list, etc.). In some embodiments, the computer system stores a single global PMH-TT in a certain location in one of the memory storage areas for one of the CPUs in the system. In other embodiments, a local copy of the global PMH-TT is stored in each hidden memory area per CPU and broadcast update messages are sent between the CPUs to update their local copies of the PMH-TT so that all copies remain identical throughout system operation, though this may be uncommon. Generally, the PMH-TT is divided among the cores/sockets in a system such that a PMH-TT access has an equal probability of hitting memory local to the socket versus memory on another socket. The time at which to move entries from one list to another list may be dictated by one or more threshold values. FIG. 12A-12D illustrate several embodiments of flow diagrams that processing logic may utilize to determine when to recover memory pages for use (i.e., memory page housekeeping). Each process may be performed by processing logic which may be hardware, software, firmware, or a combination of any of those three forms of logic. 
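The final housekeeping steps just described (processing blocks 1006-1010) could be sketched as below, again reusing the illustrative structures assumed earlier rather than the actual MPT engine interfaces.

```python
def flush_dirty_to_cold(dirty_list, ldt, pmh_tt, cold_free_list, hot_free_list,
                        copy_page):
    """Blocks 1006-1010: copy each dirty hot page to a free cold page, remap the
    PMH-TT entry through the LDT, and return the hot page to the hot free list."""
    while dirty_list:
        hmpa = dirty_list.pop()
        ppa = ldt.pop(hmpa)             # LDT is indexed by MPA and stores the PPA
        cmpa = cold_free_list.pop()     # item D: pick a free cold page
        copy_page(src=hmpa, dst=cmpa)
        entry = pmh_tt[ppa]             # item E: remap the entry to the cMPA
        entry.mpa, entry.present, entry.cold, entry.dirty = cmpa, False, True, False
        hot_free_list.append(hmpa)      # item F: the hot page is free again
```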
In each of FIG. 12A-12D, the "threshold" referred to is a value that may be determined at any given time, whether prior to boot (utilizing testing procedures to determine the optimal threshold to use to maintain peak performance) or dynamically at runtime (using algorithmic analysis to determine whether the threshold needs to increase or decrease based on current workloads). In some embodiments, not all threshold tests are utilized; rather, a subset of the threshold tests (e.g., one or more) is utilized. The thresholds in question in FIG. 12A, 12B, 12C, and 12D, as well as any other non-pictured threshold possibilities, may have values that are similar to or different from each other. Many different threshold values for housekeeping the MPT engine-maintained list data structures may be utilized, and each threshold is implementation specific. FIG. 12A-12D simply provide certain examples of how threshold values may be utilized in regard to the MPT engine. In many embodiments discussed, "one or more" entries are copied/moved. In practice, the number of list entries that are copied/moved is often the total number of entries in the list (i.e., the list is entirely flushed). In other embodiments, there is a set maximum number of list entries that are copied/moved in a block; if the total number of entries exceeds the maximum, then for any single process the maximum number allowed to be copied/moved at once is utilized, and one or more follow-up processes may be used to take care of any remaining list entries. FIG. 12A illustrates an embodiment of a process to determine when the victim list has reached a threshold value at which to begin housekeeping. In many embodiments, MPT engine processing logic determines if the total number of entries in the victim list has reached a high threshold value (processing block 1200). If it is determined that this is the case, then processing logic selects one or more entries in the victim list for a move into the cold region of memory (processing block 1202). Processing logic then performs a TLB shootdown on the entries that were selected (processing block 1204), and this housekeeping process is complete. FIG. 12B illustrates an embodiment of a process to determine when the dirty list has reached a high threshold value at which to begin housekeeping. In many embodiments, MPT engine processing logic determines if the total number of entries in the dirty list has reached a high threshold value (processing block 1206). If it is determined that this is the case, then processing logic copies one or more dirty list entries from the hot region of memory to the cold region of memory (processing block 1208) and the process is complete. FIG. 12C illustrates an embodiment of a process to determine when the hot free list has reached a low threshold value at which to begin housekeeping. In many embodiments, MPT engine processing logic determines if the total number of entries in the hot free list has dwindled to a number that falls below a minimum required threshold value (processing block 1210). If it is determined that this is the case, then processing logic selects one or more entries from the LDT for victimization (processing block 1212). Victimization selection begins a process that is described in detail above in regard to FIG. 10. Returning to FIG. 12C, once the one or more entries have been selected, processing logic copies the selected entries into the victim list (processing block 1214). 
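Putting the threshold tests of FIG. 12A-12C together, a hypothetical top-level housekeeping check might look like the following; the threshold names, the state bundle, and the helper functions are assumptions carried over from the earlier sketches, not the actual MPT engine interfaces.

```python
def housekeeping(state, thresholds):
    """Run FIG. 12A-12C style threshold tests over the MPT engine lists.
    `state` is assumed to bundle the lists and tables from the earlier sketches."""
    # FIG. 12A-style check: victim list too long -> shoot down and demote a batch.
    if len(state.victim_list) >= thresholds["victim_high"]:
        batch = [state.victim_list.pop() for _ in range(len(state.victim_list))]
        flush_victims_to_dirty(batch, state.ldt, state.pmh_tt,
                               state.dirty_list, state.tlb_shootdown)
    # FIG. 12B-style check: dirty list too long -> copy dirty pages to cold pages.
    if len(state.dirty_list) >= thresholds["dirty_high"]:
        flush_dirty_to_cold(state.dirty_list, state.ldt, state.pmh_tt,
                            state.cold_free_list, state.hot_free_list,
                            state.copy_page)
    # FIG. 12C-style check: hot free list too short -> victimize more LDT entries.
    if len(state.hot_free_list) < thresholds["hot_free_low"]:
        select_victims(state.ldt, state.last_access, thresholds["victim_batch"],
                       state.victim_list, state.pmh_tt)
```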
In many embodiments, if the hot free list does not have enough entries in it, victims are first gathered from the FIG. 12C process, then once they are gathered, processing logic performs the process in FIG. 12A on the victims to move the victims into the hot free list. Finally, FIG. 12D illustrates another embodiment of a process to determine when the hot free list has reached a low threshold value to begin housekeeping. As in FIG. 12C, MPT engine processing logic in FIG. 12D determines if the total number of entries in the hot free list has dwindled down to a number that falls below a minimum threshold required value (processing block 1216). If it is determined that this is the case, then processing logic copies one or more dirty list entries from the hot region of memory to the cold region of memory (processing block 1218) and the process is complete. In any event, a TLB Shootdown is required before these copied entries can be reused in the Hot free list. Two-Level Memory A micro-page table engine may also be utilized to implement a two-level memory (2LM) memory subsystem. FIG. 13 describes an embodiment of a micro page table managed two-level memory subsystem within a computer system. The computer system illustrated in FIG. 13 is similar to the computer system shown in FIG. 1, many components provide similar functionality. Certain elements shown in other versions of this figure, such as the I/O subsystem, are not shown specifically in FIG. 13 for the purpose of clarity. Although not shown, an I/O subsystem similar to one shown in FIG. 1 and FIG. 2 would generally be implemented in the system shown in FIG. 13. In many embodiments, a discrete memory controller, memory controller B 1300, resides in computer system 100. Memory controller B 1300 is coupled to CPU 102 through a highspeed I/O interface 122 or the memory controller B could be implemented in the CPU die rather than as a discrete chip, though this embodiment is not illustrated. High-speed I/O interface may be one of several types of interconnects, such as PCI-Express, QPI, etc. Memory controller B 1300 in turn, provides control over a second memory, such as memory B 1302. Memory B 1302 may be of a different type of memory than memory A 1 10. For example, while memory A 1 10 may comprise a form of DRAM, memory B 1302 may comprise a form of non- olatile memory. In different embodiments, memory B 1302 may be phase change memory (PCM), another form of non- volatile memory, or standard DRAM or low power DRAM, among others. In many embodiments, memory B 1302, comprising general storage area B 1304, may be considered the main memory within computer system 100 wherein memory A 110, comprising general storage area A 140, may be implemented as a DRAM cache. The cache comprising DRAM and in many embodiments comprising many gigabytes of storage space, may be capable of absorbing most writes during regular operation of the computer system 100. In embodiments where memory B is a non-volatile memory, this absorbtion effect helps minimize the wear of the non-volatile memory B 1302, which helps minimize the effects of the limited write lifetime of PCM or other forms of NVM as well as hiding the long latency of writes to these types of memory. An implementation of 2LM using a MPT engine would generally work in a similar fashion to the rank shedding as described in detail above. 
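As one hypothetical illustration of how the hot-page-miss flow sketched earlier could be redirected in such a 2LM configuration (where, as elaborated in the next paragraph, the hot region maps to the DRAM cache and the cold region maps to memory B behind a second memory controller), consider the following sketch; the controller call and the function names are assumptions for illustration only.

```python
def two_level_memory_access(ppa, tlb, pmh_tt, dram_read, memory_b_controller,
                            cold_free_list, hot_free_list, copy_page):
    """In a 2LM configuration, a hot page lives in the DRAM cache (memory A) and
    a cold page lives in memory B; a miss on a cold page triggers the same
    cold-to-hot swap as rank shedding, but via the second memory controller."""
    mpa = handle_request(pmh_tt, tlb, ppa)
    if mpa is not None:
        return dram_read(mpa)                     # hot: serve from the DRAM cache
    # Cold: request the page through memory controller B, then swap it in.
    memory_b_controller.prefetch(pmh_tt[ppa].mpa) # hypothetical controller call
    hmpa = cold_to_hot_transfer(pmh_tt, ppa, cold_free_list, hot_free_list,
                                copy_page)
    return dram_read(hmpa)
```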
Essentially, the MPT engine 128 can set up the hot region of memory space to map to general storage A 140 and the cold region of memory space to map to general storage B 1304. Thus, in many embodiments the general storage A is the hot region and the general storage B is the cold region. Data from the memory accesses to the cold region is brought into the hot region with memory page transfer swaps. Once a memory request is found to not be in the TLB and also found to not be present in the hot region of memory (general storage A 140) after a page-walk lookup process, the MML 142 can then go out to memory controller B 1300 and request access to a page in general storage B 1304. The standard hot page miss process flow described above in FIG. 8 and FIG. 9 is just reapplied to the FIG. 13 implementation utilizing a second memory controller and a second memory for the cold region of memory. Additionally, the housekeeping of swapping hot and cold pages of memory applies generally as well. In many 2LM embodiments, there are additional processes run by the MML 142. A wear leveling algorithm may be incorporated into the logic within MML 142 when memory B 1302 is non-volatile. During periods of little to no activity, the MML 142 portion of the MPT engine 128 may instruct memory controller B 1300 to redistribute portions of the data stored within memory B 1302 to evenly distribute the wear amongst the PCM devices comprising all of memory B 1302. FIG. 14 describes an embodiment of a PCM-specific memory subsystem. Memory B 1302 is shown in a xl6 PCM device configuration. In many other embodiments that are not shown, the PCM devices may be stacked several devices high to further increase storage with a relatively small increase delay in access times. The memory B controller 1300 is coupled to a CPU by way of link 1400. Requests come into the controller from logic in the CPU. The memory B controller 1300 may comprise several DMA units (units 1-N) that are coupled with an integrated link 1502 (i.e., bus) in memory B 1302. The DMA units may work in tandem sending requests to memory B 1302 and receiving data back. This data is then sent back across link 1400 to the CPU. Elements of embodiments of the present invention may also be provided as a machine- readable medium for storing the machine-executable instructions. The machine-readable medium may include, but is not limited to, flash memory, optical disks, compact disks-read only memory (CD-ROM), digital versatile/video disks (DVD) ROM, random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable readonly memory (EEPROM), magnetic or optical cards, propagation media or other type of machine-readable media suitable for storing electronic instructions. In the description above and in the claims, the terms "include" and "comprise," along with their derivatives, may be used, and are intended to be treated as synonyms for each other. In addition, in the following description and claims, the terms "coupled" and "connected," along with their derivatives may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. 
However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still cooperate, interact, or communicate with each other. In the description above, certain terminology is used to describe embodiments of the invention. For example, the term "logic" is representative of hardware, firmware, software (or any combination thereof) to perform one or more functions. For instance, examples of "hardware" include, but are not limited to, an integrated circuit, a finite state machine, or even combinatorial logic. The integrated circuit may take the form of a processor such as a microprocessor, an application specific integrated circuit, a digital signal processor, a micro- controller, or the like. It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the invention. Similarly, it should be appreciated that in the foregoing description of embodiments of the invention, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description. |
The invention discloses unsupervised learning of scene structure for synthetic data generation. A rule set or scene grammar can be used to generate a scene graph that represents the structure and visual parameters of objects in a scene. A renderer can take the scene graph as input and, with a library of content for assets identified in the scene graph, can generate a synthetic image of a scene that has the desired scene structure without the need for manual placement of any of the objects in the scene. Images or environments synthesized in this way can be used, for example, to generate training data for real-world navigational applications, as well as to generate virtual worlds for games or virtual reality experiences. |
1.A computer-implemented method includes:Generating a scene structure using at least a subset of a plurality of rules from a rule set, the plurality of rules specifying relationships between types of objects in the scene;Determining one or more parameter values of one or more of the objects represented in the scene structure;Generating a scene graph based on the scene structure and including the parameter value; andThe scene graph is provided to a rendering engine to render an image of the scene.2.The computer-implemented method according to claim 1, further comprising:A library of object-related content is provided to the rendering engine, and the library of object-related content will be used to render the object represented in the scene structure using the one or more parameter values of the object.3.The computer-implemented method according to claim 1, further comprising:The rendering engine is made to include at least one of object tags or scene structure information of the image with the scene, wherein the image can be used as training data for training a neural network.4.The computer-implemented method according to claim 1, further comprising:Receiving an input indicating that one of the rules or objects is not used in the scene;Update the scene graph to reflect the input; andThe updated scene graph is provided to the rendering engine to render an updated image of the scene.5.The computer-implemented method according to claim 1, further comprising:Use the rule set to generate various scene graph sets; andThe virtual environment is generated using the various scene graph sets.6.The computer-implemented method according to claim 5, wherein the virtual environment is a game environment, and wherein the rendering engine is configured to use the various scene graphic sets to target one of the games corresponding to the game environment Or more players to render one or more images of the game environment.7.The computer-implemented method according to claim 5, further comprising:The virtual environment is utilized in one or more test simulations of one or more autonomous or semi-autonomous machines.8.The computer-implemented method of claim 1, wherein the scene structure is generated from the rule set in an unsupervised manner without requiring data annotations on one or more input images.9.The computer-implemented method according to claim 1, further comprising:Assign probabilities to the rules in the rule set; andThe subset of the plurality of rules is selected by sampling the rules according to the probability.10.The computer-implemented method according to claim 9, further comprising:The scene mask is determined to indicate which one or more rules in the rule set can be selected through the sampling.11.The computer-implemented method according to claim 1, further comprising:The scene structure is generated using an iterative process, wherein in one or more of a plurality of time steps, it is determined whether to perform expansion for one or more object types in the scene structure, and the object type is located At different levels in the hierarchical scene structure.12.The computer-implemented method of claim 1, wherein a generative model is used to generate the scene structure using the rule set.13.The computer-implemented method according to claim 1, further comprising:A latent vector is generated as an input of the generative model, and the latent vector has a length equal to the number of rules in the rule set.14.A system including:At least one processor; andA memory, which 
contains instructions that, when executed by the at least one processor, cause the system to:Generating a scene structure using at least a subset of a plurality of rules from a rule set, the plurality of rules specifying relationships between types of objects in the scene;Determining one or more parameter values of one or more of the objects represented in the scene structure;Generating a scene graph based on the scene structure and including the parameter value; andThe scene graph is provided to a rendering engine to render an image of the scene.15.The system of claim 14, wherein the instructions, when executed, also cause the system to:The rendering engine is made to include at least one of object tags or scene structure information of the image with the scene, wherein the image can be used as training data for training a neural network.16.The system of claim 14, wherein the instructions, when executed, also cause the system to:Receiving an input indicating that at least one of the rules or objects is not used in the scene;Update the scene graph to reflect the input; andThe updated scene graph is provided to the rendering engine to render an updated image of the scene.17.The system of claim 14, wherein the instructions, when executed, also cause the system to:Use the rule set to generate various scene graph sets; andThe virtual environment is generated using the various scene graph sets.18.The system according to claim 14, wherein the system comprises at least one of the following:A system used to perform graphics rendering operations;A system used to perform simulation operations;Systems used to perform simulation operations to test or verify autonomous machine applications;Systems used to perform deep learning operations;Systems implemented using edge devices;A system containing one or more virtual machines (VM);A system implemented at least partially in a data center; orA system implemented at least partially using cloud computing resources.19.A method for synthesizing training data, including:Obtaining a rule set including a plurality of rules, the plurality of rules specifying the relationship between the types of objects in the scene;Generating multiple scene structures based on the multiple rules;Determining one or more parameter values of each of the objects represented in the multiple scene structures;Generating a plurality of scene graphs based on the plurality of scene structures and including the parameter values; andThe plurality of scene graphs are provided to a rendering engine to render an image set including data of the object represented in the image, the image set representing training data for use in training one or more neural networks .20.The method of claim 19, further comprising:A library of object-related content is provided to the rendering engine, and the library of object-related content will be used to render the object represented in the scene structure using the one or more parameter values of the object.21.The method of claim 19, further comprising:Receiving input indicating that one of the rules or objects is not to be used;Updating the plurality of scene graphs to reflect the input; andThe updated scene graph is provided to the rendering engine to render the updated image set.22.The method of claim 19, wherein the scene structure is generated from the rule set in an unsupervised manner without requiring data annotations on one or more input images.23.The method of claim 19, further comprising:Assign probabilities to the rules in the rule set; andA subset 
of the multiple rules is selected for each scene structure by sampling the rules according to the probability. |
Unsupervised learning of scene structure for synthetic data generationCross-references to related applicationsThis patent application requires the title "Bridging the Sim-to-Real Gap: Unsupervised Learning of Scene Structure for Synthetic Data Generation" to be submitted on March 6, 2020. Synthetic Data Generation)" U.S. Provisional Patent Application Serial No. 62/986,614, which is hereby incorporated in its entirety for all purposes.Background techniqueApplications such as games, animations, and simulations increasingly rely on more detailed and realistic virtual environments. In many cases, process models are used to synthesize scenes in these environments and to create labeled synthetic data sets for machine learning. In order to produce realistic and diverse scenes, experts must carefully adjust many parameters of the control program model. These parameters control both the structure of the generated scene (for example, how many cars are in the scene) and the parameters that place the object in a valid configuration. The complexity and amount of knowledge of manually determining and adjusting these parameters and configuring other aspects of these scenarios may limit wide applicability, and may also limit the authenticity or scope of the generated environment.Description of the drawingsVarious embodiments according to the present disclosure will be described with reference to the accompanying drawings, in which:Figures 1A and 1B show images that can be generated according to at least one embodiment;Figures 2A, 2B, and 2C show rules and graphics of a scene according to at least one embodiment;3A, 3B, and 3C show stages of scene graph generation according to at least one embodiment;Fig. 4 shows a process for generating an image from a scene grammar according to at least one embodiment;Figure 5 shows a process for training a network according to at least one embodiment;Figure 6 shows components of a system for generating scene graphs according to at least one embodiment;Figure 7A shows inference and/or training logic according to at least one embodiment;Figure 7B shows inference and/or training logic according to at least one embodiment;Figure 8 shows an example data center system according to at least one embodiment;Figure 9 shows a computer system according to at least one embodiment;Figure 10 shows a computer system according to at least one embodiment;FIG. 11 shows at least a part of a graphics processor according to one or more embodiments;FIG. 12 shows at least a part of a graphics processor according to one or more embodiments;Figure 13 is an example data flow diagram for an advanced computing pipeline according to at least one embodiment;FIG. 14 is a system diagram of an example system for training, adjusting, instantiating, and deploying machine learning models in an advanced computing pipeline according to at least one embodiment; and15A and 15B show a data flow diagram of a process for training a machine learning model according to at least one embodiment, and a client-server architecture for enhancing an annotation tool using a pre-trained annotation model.Detailed waysThe method according to various embodiments may provide for the generation of composite images and data sets. In particular, various embodiments may generate a virtual scene or environment based at least in part on a set of rules that define the placement and appearance of objects or "assets" within the environment. 
These data sets can be used to generate virtual environments, as well as to generate large training data sets related to the target reality data set. Synthetic data sets provide attractive opportunities for training machine learning models that can be used for tasks such as perception and planning in autonomous and semi-autonomous driving, indoor scene perception, generation of content creation, and robot control. Through the graphics engine, synthetic data sets can provide ground truth data for tasks that are expensive or even unavailable for labeling (for example, segmentation, depth, or material information). As shown in the images 100, 150 of Figures 1A and 1B, this may include ground truth data of objects rendered in those images, such as the bounding boxes and labels of the cars 102, 152 and people 104 rendered in these images. The operation of adding new types of tags to such synthetic datasets can be performed by invoking the renderer, rather than embarking on the time-consuming annotation work, which requires new tools and the hiring, training, and supervision of annotators.Using conventional methods, there are various obstacles to creating synthetic data sets. Although the content that makes up the scene can be obtained from sources such as online asset stores (for example, the three-dimensional computer-aided design (3D CAD) models that make up the scene), artists usually have to write complex procedural models by placing these assets in real-world layout To synthesize the scene. This usually requires browsing a large number of real images to carefully adjust the process model, which can be a very time-consuming task. For scenes such as street scenes, creating a composite scene related to one city may need to be adjusted from scratch to a process model produced by another city. Methods according to various embodiments may attempt to provide automated methods for handling these and other such tasks.In one method, the scene parameters in the synthetically generated scene can be optimized by using the visual similarity between the generated (eg, rendered) synthetic data and the real data. The scene structure and parameters can be represented by scene graphs, and the data is generated by sampling random scene structures (and parameters) from a given probability grammar of the scene, and using the learned model to modify the scene parameters. Since this method only learns scene parameters, there is still a gap from simulation to reality in the scene structure. For example, one might find that the density of cars, people, and buildings in Manhattan is higher than in quaint villages in Italy. Other work on generative models of structured data (for example, graphics and grammatical strings) requires a large amount of ground truth data for training to generate realistic samples. However, the annotation of the scene structure is very cumbersome and therefore not available in most real data sets.The method according to various embodiments may utilize the process of synthesizing a scene to generate a model that is learned unsupervisedly from a real image. In at least one embodiment, one or more scene graphs can be generated object-by-object by learning to extend the sampling rules from a given probabilistic scene grammar and generate scene parameters. At least partly due to the discrete nature of the scene structure to be generated and the presence of indistinguishable renderers during the generation process, learning such tasks without supervision can be challenging. 
To this end, a feature space divergence can be used to compare the generated (for example, rendered) scenes with the real scenes, and this divergence can be determined for each scene. Such a method may allow credit assignment for training through reinforcement learning. Experiments conducted on two synthetic data sets and a real data set show that the method according to at least one embodiment significantly reduces the gap between the distribution of scene structures in the generated data and in the target data. By learning to align closely with the target structure distribution, the method improves upon human priors on the structure of the scene. On the real data set, starting from a minimal human prior, the structure distribution of the real target scenes can be recovered almost exactly. This is worth noting because the model can be trained without any labels. For example, the performance of an object detector trained on the generated data has been shown to be better than that of a detector trained on data generated from the human prior, and improvements have been shown in the distribution similarity measure between the rendered images and the real data. Instead of inferring each scene as in previous methods, the method according to various embodiments can generate new data similar to the target distribution. One approach is to learn to optimize non-differentiable simulators using a variational upper bound of a GAN-style objective, or to optimize simulator parameters for control tasks by directly comparing real and simulated trajectories. The method according to at least one embodiment can learn to generate discrete scene structures constrained by a grammar, while optimizing a distribution-matching objective (using reinforcement learning) instead of using adversarial training. In contrast to images of individual objects or faces, this method can be used to generate large and complex scenes. In at least one embodiment, generative models of graphs and trees can produce graphs with richer structure and offer greater flexibility than grammar-based models, but may not produce grammatically correct graphs in situations where a grammar is already defined, such as for programs and scene graphs. Grammar-based methods have been used for various tasks, such as program translation, conditional program generation, grammar induction, and generative modeling of grammatical structures (e.g., molecules). However, these methods assume access to the ground truth graph structure for learning. The method according to various embodiments can train the model in an unsupervised manner without any ground truth scene graph annotations. In at least one embodiment, a rule set can be generated or obtained for the virtual scene or environment to be generated. For example, an artist can generate or provide a rule set for the scene, or can select from a library of rule sets for various scenes. This may include, for example, browsing through scene type options in a graphical interface and selecting a scene type with an associated rule set, where the scene type may correspond to, for example, a European city, an American village, a dungeon, and so on. For a given virtual or "synthetic" scene, the artist can also create or obtain content for various objects in the scene (for example, "assets"), such as models, images, textures, and other features that can be used to render those objects. The rules provided can indicate how these objects should be related to each other in a given scene. For example, FIG. 
2A shows an example rule set 200 that can be utilized in accordance with various embodiments. The rule set can include any number of rules, or in some embodiments can reach the maximum number. Each rule may define the relationship between at least two types of objects to be represented in the composite scene. This rule set applies to locations where there will be roads and sidewalks. As shown in the rule set 200, the road may have a lane according to the first rule. According to other rules, it can be a single lane or multiple lands, and each lane can be associated with a sidewalk and one or more cars. As shown in the figure, rules can also define whether an object type can have one or more instances of that object type associated with a given object type. Another pair of rules indicates that there can be one or more people on the sidewalk in this scene.Such rules can be used to generate one or more scene structures that represent the scene to be generated. Two example scene structures 230, 260 are shown in Figures 2B and 2C. In each of these structures, the road is shown as the main parent node in the hierarchical tree structure. The rules in the rule set 200 and the relationships defined therein determine potential parent-child relationships that can be used to generate different tree structures that conform to those relationships. In at least one embodiment, the generative model can generate these scene structures from the rule set using an appropriate sampling or selection process. In the structure 230 of FIG. 2B, there is a two-lane road, where each lane has a sidewalk, and one of the lanes has a car. There is a tree and a person on the sidewalk close to the lane with cars. In the structure 260 of FIG. 2C, there is a one-lane road with three cars and a sidewalk, and there are two people and a tree near the sidewalk. It can be seen that these structures represent two different scenarios generated from the same rule set. This method can be used to construct, supplement, augment, generate, or synthesize a virtual environment that includes variants of the object structure that all follow the selected rule set. Using this method, a single rule set can be used to generate an environment as large as required, and its changes can be random or can follow a change strategy without the user having to manually select or place these objects. This method can be used to generate synthetic scenes from real images in an unsupervised manner. In at least one embodiment, this method can learn a generative model of the scene structure, from which samples (with other scene parameters) can be rendered to create composite images and labels. In at least one embodiment, this method can be used to use unlabeled real data to generate synthetic training data with appropriate labels. The rule set and unlabeled real data can be provided as the input of the generative model, and the generative model can generate different scene structure sets.The method according to at least one embodiment can learn such a generative model for synthesizing scenes. In particular, given a data set of real images XR, the problem is to create composite data D(θ) = (X(θ), Y(θ)) representing XR's image X(θ) and label Y(θ), where θ represents the parameters of the generative model. In at least one embodiment, by specifying that the synthetic data D is the output of creating an abstract scene representation and rendering the scene representation with a graphics engine, advances in the graphics engine and rendering can be utilized. 
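Returning to the rule set of FIG. 2A, a minimal sketch of how such a rule set might be expanded into scene structures like those of FIGS. 2B and 2C is shown below; the rule encoding, the particular expansion options, and the uniform random choice are illustrative assumptions rather than the grammar or sampling procedure of any particular embodiment.

```python
import random

# Each rule maps a non-terminal object type to one possible list of children;
# multiple entries per type model the "one or more" choices of FIG. 2A.
RULES = {
    "road":     [["lane"], ["lane", "lane"]],
    "lane":     [["sidewalk"], ["sidewalk", "car"], ["sidewalk", "car", "car"]],
    "sidewalk": [[], ["person"], ["person", "person"], ["tree"], ["person", "tree"]],
}
TERMINALS = {"car", "person", "tree"}

def expand(symbol, rng):
    """Recursively expand a symbol into a (node type, children) tree."""
    if symbol in TERMINALS:
        return (symbol, [])
    children = rng.choice(RULES[symbol])
    return (symbol, [expand(child, rng) for child in children])

# Two different scene structures sampled from the same rule set, in the spirit
# of the structures 230 and 260 of FIGS. 2B and 2C.
rng = random.Random(0)
print(expand("road", rng))
print(expand("road", rng))
```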
The rendering can ensure that there is no need to model the low-level pixel information in X(θ) (and its corresponding annotation Y(θ)). Ensuring the semantic validity of a sampled scene may require at least some constraints on its structure. Scene grammar uses rule sets to greatly reduce the space of scenes that can be sampled, thereby making learning a more structured and tractable problem. For example, the scene grammar can explicitly force cars to be on the road only, and then there is no need for implicit learning. The method according to various embodiments may partially take advantage of this by using probabilistic scene grammar. The scene graph structure can be sampled from a priori imposed on the Probabilistic Context Free Grammar (PCFG), which is referred to as a structural prior in this article. It is possible to sample parameters for each node in the scene graph from the parameter prior and learn to predict new parameters for each node, thereby maintaining the integrity of the structure. Therefore, the resulting generated scene comes from the structure a priori (which is context-independent) and the learned parameter distribution, which may lead to a gap in the scene structure from simulation to reality.The method according to various embodiments can at least alleviate the gap by learning the context-dependent structural distribution of the unsupervised synthetic scene from the image. In at least one embodiment, one or more scene graphs can be used as abstract scene representations, which can be rendered into corresponding images with labels. Figures 3A to 3C show components that can be used at different stages of this process. Figure 3A shows a set of logit 300 generated from regular samples, where a given sample is used to determine the next logit. Figure 3B shows a corresponding mask 330 that can be used to generate a scene graph. In the process of generating scene graphics, the shape of logit and mask is Tmax×K. In FIG. 3, the unpatterned (for example, pure white) area represents a higher value, and the pattern filled area represents a lower value. At each time step, such a process can automatically re-sample the rules and predict the logit of the next rule based on the sampling, thereby capturing contextual relevance. Sampling can be used to generate the scene structure 362, as shown in FIG. 3C, and to determine the parameters of the nodes of the scene structure. These parameters can include information such as position, height, and posture. These and other parameters 366 can be sampled and applied to each node in the scene structure to generate a complete scene graph. Therefore, such a process can take advantage of the sampling rules in the grammar and convert it into a graphical structure. In this example, only the objects that can be rendered are separated from the complete syntax string. The parameters of each node can be sampled from a priori, or can choose to learn. The generated scene graph can be rendered as shown in the figure. Such a generative model can sequentially sample the extended rules from a given probabilistic scene grammar to generate the rendered scene graph. 
The model can be trained in an unsupervised manner through reinforcement learning, using a distribution divergence based on feature matching that is designed specifically for this setting. A scene graph may be advantageous in at least some embodiments because of its ability to describe a scene concisely and hierarchically in fields such as computer graphics and computer vision, where each node describes an object and its parameters. The parameters may involve aspects such as 3D assets or poses. A parent-child relationship can define the parameters of a child node relative to its parent node, enabling simple scene editing and manipulation. In addition, cameras, lighting, weather, and other effects can be encoded into the scene graph. Generating the corresponding pixels and annotations can then be equivalent to placing the objects of the scene in a graphics engine and rendering them with the defined parameters. In at least one embodiment, the rule set can be defined as a vector having a length equal to the number of rules. A network can then be used to determine which of these rules to expand, and these rules can be expanded sequentially at different time steps. For each scene structure to be generated, a categorical distribution can be produced over all relevant rules in the set. The generative network can then sample from this categorical distribution to select the rules for the scene. The categorical distribution can also be masked so that certain rules are forced to have zero probability and cannot be selected. The network can also infer which rules or options to expand for each object. In at least one embodiment, the generative model may be a recurrent neural network (RNN). Based on the determined probabilities, a latent vector defining the scene can be input to this RNN to sequentially generate and expand the rules of the scene. The RNN moves down the tree, or stack, until all rules are processed (or the maximum number of rules is reached). In one or more embodiments, each row in FIG. 3A may correspond to a sampled rule. As shown in FIG. 3B, the masks 330 can then be used to indicate to the model which rules may be expanded at a given time step. This process can be performed iteratively to generate valid scene descriptions. In addition, the relationships between objects in the scene graph provide geometric constraints for the scene, because objects cannot exist outside their specified relationships; for example, cars cannot be placed outside a lane or on a sidewalk. The parameters of the nodes define various visual attributes, so that roads in the New Zealand countryside look different from roads in large cities in Thailand. In at least some embodiments, ranges can be set for these parameters for a specific type of object, so that sidewalks only come in a certain range of widths, roads have only a limited number of lanes, and so on. These data structures can also be used for additional learning. For example, this data can be used downstream to train models, such as models for detecting cars in captured image data. The generated images, which preserve this structure, can help such a model learn more quickly to identify cars based on the locations in which they appear in the scene structure. In at least one embodiment, a context-free grammar G can be defined as a list of symbols (e.g., terminal and non-terminal) and expansion rules. Non-terminal symbols have at least one expansion rule and can be expanded into a new set of symbols.
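To make the scene-graph representation described above concrete, the following sketch attaches per-node parameters to a structure sampled as in the earlier sketch. The SceneNode class, the specific parameter names, and the numeric ranges are illustrative assumptions, not values taken from the disclosure; parameters are stored relative to the parent so that moving a parent moves its subtree.

```python
from dataclasses import dataclass, field
import random

# Illustrative per-type parameter ranges (e.g., widths in meters); the names
# and numbers are assumptions of this sketch only.
PARAM_RANGES = {
    "road":     {"width": (6.0, 14.0)},
    "lane":     {"width": (3.0, 4.0)},
    "sidewalk": {"width": (1.0, 3.0)},
    "car":      {"offset": (-1.0, 1.0)},
    "person":   {"offset": (-0.5, 0.5)},
    "tree":     {"offset": (-0.5, 0.5)},
}

@dataclass
class SceneNode:
    obj_type: str
    # Parameters are interpreted relative to the parent node, enabling the kind
    # of simple scene editing and manipulation described above.
    params: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

def attach_parameters(structure, rng=random):
    """Convert a sampled structure (nested dicts of object types, as in the earlier
    sketch) into a scene graph whose nodes carry parameters sampled from per-type ranges."""
    params = {name: rng.uniform(lo, hi)
              for name, (lo, hi) in PARAM_RANGES.get(structure["type"], {}).items()}
    node = SceneNode(structure["type"], params)
    node.children = [attach_parameters(c, rng) for c in structure["children"]]
    return node
```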
Sampling from the grammar may involve expanding the start symbol (or initial or parent symbol) until only terminal symbols remain. The total number of expansion rules K can be defined in the grammar G. A scene grammar can be defined, and strings sampled from the represented grammar can be captured using one or more scene graphs. For each scene graph, the structure T can be sampled from the grammar G, and then the corresponding parameters α can be sampled for each node in the graph. In at least one embodiment, a convolutional network is used with the scene graph to sample a parameter set for each individual node in the graph. Among various approaches, generative models over graphs that are constrained by the grammar can be utilized. In at least one embodiment, a recurrent neural network can be used to map a latent vector z to the unnormalized probabilities of all possible grammar rules in an autoregressive manner. In such an embodiment, this can continue for at most Tmax steps. In at least one embodiment, a rule rt can be sampled at each time step, and this rule can be used to predict the logits ft+1 of the next rule. Compared with the context-free nature of traditional scene graph methods, this allows the model to readily capture context-dependent relationships. Given a list of at most Tmax sampled rules, the corresponding scene graph is generated by treating each rule expansion as a node expansion in the graph, as shown in FIGS. 3A to 3C. To ensure the validity of the sampled rules at each time step t, a last-in, first-out (LIFO) stack of unexpanded non-terminal nodes can be maintained. A node can be popped from the stack and expanded according to the sampled rule, and the resulting new non-terminal nodes can then be pushed onto the stack. When a non-terminal is popped, a mask mt of size K can be created, where mt is 1 for the rules that are valid expansions of that non-terminal and 0 otherwise. Given the logits ft for the next expansion, the probability of the rule rt,k can be given by qθ(rt,k|z) = mt,k·exp(ft,k) / Σk′ mt,k′·exp(ft,k′). Sampling from this masked multinomial distribution ensures that only valid rules are sampled as rt. Given the logits and the sampled rules, the probability of the corresponding scene structure T for a given z can be given by the product of the per-step rule probabilities, qθ(T|z) = Πt qθ(rt|z). In summary, the scene structure T~qθ(·|z) can be sampled from the model, the parameters of each node can then be sampled as α~q(·|T), and the image v′=R(T, α)~qI can be rendered, in order to generate images. For some v′~qI with parameters α and structure T, the following assumption can be made: qI(v′|z) = q(α|T)·qθ(T|z). Various training methods can be used for this generative model. In at least one embodiment, training can be performed using variational inference or by optimizing a measure of distribution similarity. Variational inference allows the use of a reconstruction-based objective by introducing an approximate, learned posterior. Using variational inference to train such a model can be challenging, however, at least in part because of the complexity arising from discrete sampling and the presence of a renderer in the generation process. In addition, the recognition network here would be equivalent to performing inverse graphics, which is itself a very challenging problem. In at least one embodiment, it is therefore possible to optimize a measure of distribution similarity between the generated data and the target data. Adversarial training of the generative model can be used in conjunction with reinforcement learning (RL), for example by carefully limiting the capacity of the critic.
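The LIFO-stack-based masked sampling described above can be sketched as follows. The grammar interface (num_rules, valid_rules, nonterminals_produced) and the predict_logits callable standing in for the recurrent model are assumptions of this sketch, not an API defined by the disclosure; only the masking and sampling logic is intended to mirror the description.

```python
import numpy as np

def masked_rule_probs(logits, mask):
    """Masked categorical distribution over the K rules: invalid rules receive zero
    probability, valid rules are softmax-normalized, as in the expression above."""
    exp = np.exp(logits - logits.max()) * mask
    return exp / exp.sum()

def sample_scene_rules(grammar, predict_logits, t_max, rng=np.random.default_rng()):
    """Autoregressively sample rule indices using a LIFO stack of unexpanded
    non-terminals. `predict_logits(history)` returns a length-K logit vector."""
    stack, history = ["start"], []
    for _ in range(t_max):
        if not stack:
            break                          # all non-terminals have been expanded
        symbol = stack.pop()               # pop the next non-terminal to expand
        mask = np.zeros(grammar.num_rules)
        mask[grammar.valid_rules(symbol)] = 1.0
        probs = masked_rule_probs(predict_logits(history), mask)
        rule = int(rng.choice(grammar.num_rules, p=probs))
        history.append(rule)
        stack.extend(grammar.nonterminals_produced(rule))   # push new non-terminals
    return history
```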
In at least one embodiment, reinforcement learning can be used to train the discrete generative model over scene graphs. The training metric can be calculated for each individual sample, which can significantly improve the overall training process. The generative model can be trained to match the feature distribution of the real data in the latent space of some feature extractor φ. For v~pI, the true feature distribution pf can be defined as the distribution of the features φ(v). Similarly, for v′~qI, the generated feature distribution qf can be defined as the distribution of φ(v′). In at least one embodiment, distribution matching can be achieved by approximately computing pf and qf from samples and minimizing the KL divergence from pf to qf. In at least one embodiment, the training objective can therefore be expressed as minimizing KL(pf‖qf) with respect to the model parameters θ. Using the above definitions of the feature distributions, an equivalent objective is the expectation, over v~pI, of log pf(φ(v)) − log qf(φ(v)). The true underlying feature distributions qf and pf are difficult to compute. In at least one embodiment, an approximation computed using kernel density estimation (KDE) can be used. Let V={v1,...,vl} and B={v′1,...,v′m} be batches of real and generated images. Performing KDE with B and V to estimate qf and pf produces q̂f(f) = (1/m)·Σi KH(f−φ(v′i)) and p̂f(f) = (1/l)·Σi KH(f−φ(vi)), where KH is the standard multivariate normal kernel with bandwidth matrix H. Here, H=dI can be used, where d is the dimensionality of the feature space. The generative model according to at least one embodiment makes discrete (e.g., non-differentiable) choices at each step, so it may be advantageous to use reinforcement learning techniques to optimize the objective. Specifically, this can include the use of a score-function (REINFORCE-style) estimator with a moving average baseline, whereby the gradient can be estimated from a batch of M generated samples using the density estimates defined above, with the baseline reducing the variance of the estimate. It can be noted that this gradient requires the marginal probability qI(v′) of a generated image v′ rather than the conditional qI(v′|z). Computing the marginal probability of a generated image involves a difficult marginalization over the latent variable z. To avoid this, a fixed and limited number of latent vectors from a set Z can be used, sampled uniformly, making the marginalization straightforward, so that qI(v′) = q(α|T)·qθ(T|Z). Such a method can still provide sufficient modeling capacity, because only finitely many scene graphs with a bounded maximum length Tmax can be sampled from the grammar. Empirically, a single latent vector is sufficient, because the randomness in rule sampling can compensate for the loss of randomness in the latent space. In at least one embodiment, pre-training can be an important step. A hand-crafted prior can be defined on the scene structure; for example, a simple prior may place a car on a road in a driving scene. The model can be pre-trained at least in part by sampling character strings (e.g., scene graphs) from this grammar prior and training to maximize the log likelihood of these scene graphs. Feature extraction may also be an important step in distribution matching, because the features need to capture structural scene information, such as the number of objects and their contextual spatial relationships, for training to be effective. During model training, sampling may result in incomplete character strings being generated within the maximum of Tmax steps. The scene graph T can therefore be repeatedly sampled until a complete scene graph of length at most Tmax is obtained.
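The feature-matching machinery described above can be sketched as follows, under stated assumptions: the bandwidth H = d·I reduces the kernel to an isotropic Gaussian, the log-ratio reward and its sign convention (rewarding generated scenes whose features look more "real" than "synthetic") are choices made for the sketch, and the class and function names are placeholders rather than elements of the figures.

```python
import numpy as np

def log_kde_density(query, samples, bandwidth=None):
    """Log of a kernel density estimate at `query`, using a multivariate normal
    kernel. With H = d*I (d = feature dimensionality), the kernel is an isotropic
    Gaussian, so only a scalar variance is needed."""
    d = samples.shape[1]
    h = float(bandwidth) if bandwidth is not None else float(d)
    diffs = query[None, :] - samples                      # (num_samples, d)
    sq = (diffs ** 2).sum(axis=1) / h
    log_kernels = -0.5 * (sq + d * np.log(2.0 * np.pi * h))
    m = log_kernels.max()
    return m + np.log(np.exp(log_kernels - m).mean())     # numerically stable log-mean-exp

class MovingAverageBaseline:
    """Moving-average baseline used to reduce the variance of a score-function
    (REINFORCE-style) gradient estimate."""
    def __init__(self, momentum=0.9):
        self.value, self.momentum = 0.0, momentum
    def update(self, reward):
        self.value = self.momentum * self.value + (1.0 - self.momentum) * reward
        return self.value

def scene_rewards(gen_feats, real_feats, baseline):
    """Per-scene reward: log ratio of the estimated real-feature density to the
    estimated generated-feature density at each generated scene's feature point,
    minus the moving-average baseline."""
    rewards = []
    for f in gen_feats:
        r = log_kde_density(f, real_feats) - log_kde_density(f, gen_feats)
        rewards.append(r - baseline.update(r))
    return np.array(rewards)
```

In a full training loop, each reward would multiply the log-probability log qθ(T|Z) of the corresponding sampled scene structure (for example through a surrogate loss of the form −reward·log qθ), so that gradients flow through the discrete sampling via the score-function estimator.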
To ensure that this does not require too many attempts, when sampling a single scene graph used to generate a feature F, the rejection rate rreject(F) of the sampled feature F can be recorded as the average number of failed sampling attempts. A threshold f can be set on rreject(F) to represent the maximum allowable rejection rate, along with a weight λ, and a penalty term weighted by λ that penalizes rejection rates exceeding f can then be added to the initial loss. Empirically, it has been found that in at least one embodiment the values λ=10−2 and f=1 work well. This approach can provide unsupervised learning of a generative model of synthetic scene structure by optimizing visual similarity to real data. Even when annotations are provided, it is difficult to infer the structure of a scene. The method according to various embodiments can perform this generation step without any ground truth information. Experiments have verified the ability of this model to learn a reasonable posterior over scene structure, with significant improvement over a hand-designed prior. To produce satisfactory results, the method can jointly optimize the scene structure and the parameters of the synthetic scene generator. As mentioned, this approach to generating various scene graphs can enable the generation of scenes or environments that mimic a real world or a target world or environment. Information about that world or environment can be learned directly from the pixels of example images of the real or target world. Such an approach can be used to attempt an accurate reconstruction, but in many embodiments it allows the generation of a virtually unlimited variety of worlds and environments that can be based at least in part on those real or target worlds. A rule set or scene grammar can be provided to describe the world at a micro level and to define object-specific relationships. For example, a person can specify or select rules that can then be used to generate a scene or image automatically, without having to manually create at least a layout for each scene or image. The scene graph in at least one embodiment can provide a complete description of the layout of a three-dimensional world. In at least one embodiment, the recursive expansion of rules used to generate the scene structure can also be used to generate a character string that provides a complete representation or definition of the layout of the three-dimensional scene. As mentioned earlier, a generative model can be used to perform the expansions and generate the scene structures. In at least one embodiment, the scene structure can be stored as a JSON file or in another such format. This JSON file can then be provided as input to a rendering engine to generate an image or scene. The rendering engine can pull the appropriate asset data for rendering each object. As mentioned, the rendered data can be used to present a virtual environment, such as for games or VR applications. The rendering can also be used to generate training data for these and other applications (for example, to train models for autonomous or semi-autonomous machines, such as models for vehicle navigation or robot simulation). Different asset libraries can be selected for these renderings, so that the environments are appropriate for different geographic locations, points in time, and so on. Each pixel in a rendered image can be labeled to indicate the type of object that the pixel represents.
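The serialization step described above, in which a scene structure is stored (for example as JSON) and handed to a rendering engine, can be sketched as follows. The schema (keys "type", "params", "children") and function names are assumptions of this sketch; a real rendering engine would define its own format.

```python
import json

def scene_graph_to_dict(node):
    """Flatten a SceneNode (as in the earlier sketch) into plain dictionaries so the
    scene graph can be serialized, e.g. to a JSON file handed to a rendering engine."""
    return {
        "type": node.obj_type,
        "params": node.params,
        "children": [scene_graph_to_dict(c) for c in node.children],
    }

def export_scene_graph(node, path):
    # The JSON layout here is an illustrative assumption, not a required schema.
    with open(path, "w") as f:
        json.dump(scene_graph_to_dict(node), f, indent=2)
```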
In at least one embodiment, a bounding box or other position indicator can be generated for each object, along with the depth determined for each pixel in the 3D scene, the normal at each pixel position, and so on. In at least some embodiments, this information can be extracted from the rendering engine using an appropriate rendering function. In at least one embodiment, an artist can provide a set of assets and select a rule set, and the entire virtual environment can be generated without further manual input by the artist. In some embodiments, there may be an asset library from which the artist can choose. For example, an artist can select a scene structure for "Japanese cities" and select assets for "Japanese cities" in order to generate environments that are based on Japanese cities, including appropriate visual objects and layouts, but that do not directly correspond to or represent any actual Japanese city. In some embodiments, the artist may have the ability to adjust the environment by indicating what the artist likes or dislikes. For example, for a given application the artist may not want cars on the street. The artist can therefore indicate that cars should not be included, or at least not included in a specific area or associated with a specific object type, and a new scene graph can be generated in which the cars have been removed and the appropriate relationships updated. In some embodiments, the user may provide, obtain, utilize, or generate two or more subgraphs, such as may indicate things that the user likes and things that the user dislikes. These subgraphs can then be used to generate new scenes that are more in line with user expectations. This approach allows users to easily generate virtual environments with specific aspects and visual appearances without any expertise in the creation process, and without needing to manually place, move, or adjust the scene or the objects in it. This approach allows ordinary people to become 3D artists with minimal effort. FIG. 4 illustrates an example process 400 for generating images of scenes that can be utilized in accordance with various embodiments. It should be understood that, for this and other processes presented herein, there may be additional, fewer, or alternative steps performed in similar or alternative orders, or at least partially in parallel, within the scope of various embodiments unless otherwise specified. In this example, a set of rules to be used to generate at least one scene is determined 402. This can include a user generating these rules or selecting from among a number of rule sets, among other such options. Each rule in the set can define a relationship between types of objects in the scene. The set of rules can be sampled 404, based on determined probabilities, to generate a scene structure including the object relationships defined by the rules. In at least one embodiment, this may be a hierarchical scene structure, where the nodes of the hierarchy correspond to the object types of the scene. The parameters used to render each of these objects can be determined 406, such as by sampling from an appropriate data set. A scene graph can then be generated 408, which can be based on the scene structure but with appropriate parameters for each node or object. The scene graph may be provided 410 to a renderer or other such component, together with an asset library or other source of object content, to generate images or scenes.
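A flow loosely mirroring steps 402 through 410 of process 400 can be sketched by composing the earlier helpers. The injected callables (rule_set_sampler, parameter_sampler, renderer) are placeholders for the components described above; this is an illustrative composition under those assumptions, not a prescribed implementation.

```python
def generate_scene_image(rule_set_sampler, parameter_sampler, renderer, out_path="scene.json"):
    """Sample a structure from the rule set (404), attach parameters to each node
    (406-408), serialize the scene graph (e.g. to JSON), and hand it to a renderer
    together with an asset library (410)."""
    structure = rule_set_sampler()               # e.g. sample_structure from the first sketch
    scene_graph = parameter_sampler(structure)   # e.g. attach_parameters from the earlier sketch
    export_scene_graph(scene_graph, out_path)    # see the JSON sketch above
    return renderer(out_path)                    # the renderer returns a labeled image (412)
```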
A rendered image of a scene, rendered based on the determined scene graph, may be received 412. If the image is to be used as training data as described herein, the rendered image can include object labels and preserve the scene structure. Various approaches can be used to train the neural networks discussed herein. For example, a generative model can be trained to analyze unlabeled images, which may correspond to captured images of real-world settings. The generative network can then be trained to generate scenes with similar appearance, layout, and other such aspects. There can be both real scenes and synthetic scenes. Whenever one of these scenes is passed through a deep neural network, a feature set corresponding to a position in a high-dimensional space (for example, a 1,000-dimensional space) can be extracted for the scene. The scene can then be regarded as being composed of these points in the high-dimensional space rather than of pixels in an image. The network can be trained to align the features corresponding to synthetic scenes in this feature space with the features corresponding to real scenes, so that it becomes difficult to distinguish the features of real scenes from those of synthesized scenes in the feature space. In at least one embodiment, this can be achieved using reinforcement learning. As mentioned earlier, the goal can be to align two complete data sets, but there is no data or correspondence indicating which specific feature points should be aligned, because this is an unsupervised setting with no correspondences. Since the goal in many cases is not to generate an exact copy of a scene but to generate a similar scene, it is sufficient to align the distributions of feature points in this feature space. The training process can therefore compare real scenes and synthetic scenes as a whole. To evaluate a scene, its point in the feature space can be compared with the distribution of feature points of other scenes. In various approaches, it is difficult to determine from such a comparison alone whether a scene is realistic or useful, beyond whether its structure is appropriate. The training method according to at least one embodiment can therefore extract a signal from each individual data point itself, without having to look at the entire data set. In this way, a signal can be evaluated for how well a particular scene aligns with the entire data set. The probability that a particular scene belongs to the set of all synthetic scenes can then be calculated. This provides the likelihood that the specific scene is synthetic. In at least one embodiment, this can be performed using kernel density estimation (KDE). KDE can be used to obtain the probability that the scene belongs to the synthetic scene distribution. KDE can also be used to calculate the probability that the scene belongs to the real scene distribution. In at least one embodiment, the ratio of these values can be analyzed and used to optimize the system. Maximizing this ratio (as a logarithm) as the reward function for a scene provides a signal that can be optimized for each individual scene. FIG. 5 shows an example process 500 for training a network to generate realistic images that can be utilized, according to at least one embodiment. In this example, a scene graph and assets are obtained 502, such as described above with respect to FIG. 4. The scene graph and assets can be used to generate 504 a composite image of the scene.
The position of the feature point in an n-dimensional feature space can be determined 506 for the generated image, where n can be equal to the number of rules in the set used to generate the scene graph. The feature point of the generated image can be compared 508 with the distribution of feature points of composite images in the feature space. A first probability that the generated image is synthetic may be determined 510 based on this comparison. The feature point of the generated image can also be compared 512 with the distribution of feature points of real images in the feature space. A second probability that the generated image is real may be determined 514 based on that comparison. The ratio of these two probabilities can be calculated 516, and one or more weights of the network being trained can be adjusted in order to optimize this ratio. Another embodiment can use the discriminator of a generative adversarial network (GAN). The discriminator can be used, as part of training the GAN, to determine whether a generated scene is real. The network can then be optimized so that the discriminator determines with high probability that the scene is real. However, this approach may be challenging because current renderers generate high-quality images that can nevertheless still be recognized as not being real captured images, so that even if the images are structurally very similar, the discriminator may readily distinguish them. In this case, the GAN may collapse during training because the discriminator cannot provide any valuable information, since the rendered images never confuse the discriminator with real images. In at least one embodiment, an image-to-image translation can be performed before providing the image data to the GAN, in an attempt to improve the appearance of these composite images. The image-to-image translation can help reduce the style gap between the real images and the composite images and help address low-level visual aspects, such as texture or reflection, that can make an image look synthetic rather than real. This may be beneficial for systems that use ray tracing, for example, to produce reflections and other lighting effects. In another embodiment, measures can be taken to ensure termination. A neural network may be limited to running a certain number of steps, for example 150 steps, for computational reasons. This number may be too low to fully generate a scene graph when a large number of rules must be analyzed and expanded. The generated scene graph would then be incomplete and would lead to inaccurate rendering. In at least one embodiment, the network can be allowed to run to its limit. If the limit is not sufficient for a given scene, the remaining features can be determined, and a negative reward can be applied to discourage the model from attempting to generate those features again. Such an approach can result in a scene that does not include all of the originally expected features, but ensures that a scene fitting within the rendering limit can be rendered. In at least one embodiment, the client device 602 can use the components of a content application 604 on the client device 602 and data stored locally on the client device to generate the content of a session.
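Before turning to the content-serving environment, the step-limit handling described above can be illustrated as follows. The sampler, completeness check, and the linear form of the penalty are assumptions of this sketch; only the resample-until-complete behavior, the rejection count, and the threshold/weight values (λ = 10−2, f = 1, discussed earlier) are taken from the description.

```python
def sample_complete_scene(sample_fn, is_complete, max_attempts=20):
    """Re-sample until the model produces a scene graph that terminates within its
    step budget (e.g. a limit such as 150 steps), counting how many draws were rejected.
    `sample_fn` and `is_complete` are placeholder callables."""
    rejected = 0
    scene = sample_fn()
    while not is_complete(scene) and rejected < max_attempts:
        rejected += 1
        scene = sample_fn()
    # If the cap is reached, the last (possibly incomplete) sample is returned so
    # that something renderable within the step limit is still produced.
    return scene, rejected

def rejection_penalty(rejected, threshold=1.0, weight=1e-2):
    """Penalty that can be added to the training loss when too many samples are
    rejected, mirroring the r_reject threshold f and weight lambda discussed earlier;
    a penalty linear in the excess is an assumption of this sketch."""
    return weight * max(0.0, rejected - threshold)
```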
In at least one embodiment, a content application 624 (for example, an image generation or editing application) executing on the content server 620 can initiate a session associated with at least the client device 602, such as by using a session manager and user data stored in the user database 634, and a content manager 626 can determine the content 632, render it using a rendering engine if required for this type of content or platform, and send it to the client device 602 using an appropriate transmission manager 622, such as by download, streaming, or another such transmission channel. In at least one embodiment, the content 632 may include assets that can be rendered by the rendering engine based on a determined scene graph. In at least one embodiment, the client device 602 that receives the content can provide the content to a corresponding content application 604, which may also or alternatively include a rendering engine for rendering at least some of this content for presentation via the client device 602, such as image or video content via a display 606 and audio (e.g., sound and music) via at least one audio playback device 608 (e.g., speakers or headphones). In at least one embodiment, at least some of this content may already have been stored on the client device 602, rendered on the client device 602, or be accessible to the client device 602, such that at least that portion of the content does not need to be transmitted over the network 640, for example where the content may have been previously downloaded or stored locally on a hard drive or optical disc. In at least one embodiment, a transmission mechanism such as data streaming may be used to transfer this content from the server 620, or the content database 634, to the client device 602. In at least one embodiment, at least a portion of this content can be obtained or streamed from another source, such as a third-party content service 660, which may also include a content application 662 for generating or providing content. In at least one embodiment, portions of this functionality may be performed using multiple computing devices, or multiple processors within one or more computing devices, such as may include a combination of CPUs and GPUs. In at least one embodiment, the content application 624 includes a content manager 626 that can determine or analyze content before that content is transmitted to the client device 602. In at least one embodiment, the content manager 626 may also include, or work with, other components capable of generating, modifying, or enhancing the content to be provided. In at least one embodiment, this may include a rendering engine for rendering image or video content. In at least one embodiment, a scene graph generation component 628 can be used to generate scene graphs from rule sets and other such data. In at least one embodiment, an image generation component 630, which may further include a neural network, can generate an image from the scene graph. In at least one embodiment, the content manager 626 may then cause the generated image to be transmitted to the client device 602. In at least one embodiment, the content application 604 on the client device 602 may also include components such as a rendering engine, a scene graph generator 612, and an image generation module 614, such that any or all of this functionality can additionally or alternatively be performed on the client device 602.
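Purely as an illustration of how the server-side components just described might cooperate, a minimal request-handling sketch follows. The request fields, function names, and overall API are hypothetical placeholders and not an actual interface of the described system; only the ordering (scene graph generation, image generation, transmission) mirrors the description.

```python
from dataclasses import dataclass

@dataclass
class ContentRequest:
    client_id: str
    rule_set_name: str      # e.g. a named rule set the user selected
    asset_library: str      # e.g. an asset library such as "japanese_city"

def handle_content_request(request, scene_graph_generator, image_generator, transmit):
    """Server-side flow loosely mirroring FIG. 6: a scene graph generation component
    (such as 628) builds a scene graph for the requested rule set, an image generation
    component (such as 630) renders it with the chosen asset library, and a transmission
    manager streams or downloads the result to the client device."""
    scene_graph = scene_graph_generator(request.rule_set_name)
    image = image_generator(scene_graph, request.asset_library)
    transmit(request.client_id, image)
    return image
```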
In at least one embodiment, the content application 662 on the third-party content service system 660 may also include such a function. In at least one embodiment, the location to perform at least some of the functions may be configurable or may depend on factors such as the type of client device 602 or the availability of a network connection with appropriate bandwidth, among other such factors. factor. In at least one embodiment, the system for content generation may include any suitable combination of hardware and software in one or more locations. In at least one embodiment, the generated image or video content of one or more resolutions can also be provided to or can be used in other client devices 650, for example, to store a copy of the image or video content. The media source is downloaded or streamed. In at least one embodiment, this may include transmitting an image of game content for a multi-player game, where different client devices may display the content at different resolutions including one or more super-resolutions.In this example, these client devices can include any suitable computing devices, such as desktop computers, notebook computers, set-top boxes, streaming devices, game consoles, smartphones, tablets, VR headsets, AR goggles, wearable devices, computers, or Smart TV. Each client device can submit a request on at least one wired or wireless network, which can include the Internet, Ethernet, local area network (LAN), or cellular network, among other such options. In this example, these requests can be submitted to an address associated with the cloud provider, which can operate or control one or more electronic resources in the cloud provider environment, which can include, for example, a data center or server farm . In at least one embodiment, the request may be received or processed by at least one edge server located at the edge of the network and outside of at least one security layer associated with the cloud provider environment. In this way, the delay can be reduced by enabling the client device to interact with a closer server, while also improving the security of resources in the cloud provider environment.In at least one embodiment, such a system can be used to perform graphics rendering operations. In other embodiments, such a system can be used for other purposes, such as for performing simulation operations to test or verify autonomous machine applications, or for performing deep learning operations. In at least one embodiment, an edge device can be used to implement such a system, or one or more virtual machines (VM) can be incorporated. In at least one embodiment, such a system may be implemented at least partially in a data center or at least partially using cloud computing resources.Inference and training logicFigure 7A shows inference and/or training logic 715 for performing inference and/or training operations associated with one or more embodiments. Details regarding the inference and/or training logic 715 are provided below in conjunction with FIG. 7A and/or FIG. 7B.In at least one embodiment, the inference and/or training logic 715 may include, but is not limited to, code and/or data storage 701 to store forward and/or output weights and/or input/output data and/or other parameters to In an aspect of one or more embodiments, neurons or layers of a neural network for training and/or for inference are configured. 
In at least one embodiment, the training logic 715 may include or be coupled to the code and/or data storage 701 to store graphics code or other software to control the timing and/or sequence, where the weights and/or other parameter information will be loaded to configure the The logic of integer and/or floating-point units (collectively referred to as arithmetic logic unit (ALU)). In at least one embodiment, the code (eg, graphic code) loads the weight or other parameter information into the processor ALU based on the architecture of the neural network corresponding to the code. In at least one embodiment, the data store 701 stores input/output data and/or weight parameters during forward propagation during training and/or inference using the aspect of one or more embodiments in conjunction with one or more implementations For example, the weight parameters and/or input/output data of each layer of the neural network used for training or use. In at least one embodiment, any part of the code and/or data storage 701 may be included in other on-chip or off-chip data storage, including the processor's L1, L2, or L3 cache or system memory.In at least one embodiment, any part of the code and/or data storage 701 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, the code and/or code and/or data storage 701 may be cache memory, dynamic random addressable memory ("DRAM"), static random addressable memory ("SRAM"), non-volatile memory Lost memory (for example, flash memory) or other storage. In at least one embodiment, the choice of whether the code and/or code and/or data storage 701 is internal or external to the processor, for example, or composed of DRAM, SRAM, flash memory, or some other storage type, may depend on The available storage space on or off the storage chip, the latency requirements of the training and/or reasoning function being performed, the data batch size used in the reasoning and/or training of the neural network, or some combination of these factors.In at least one embodiment, the inference and/or training logic 715 may include, but is not limited to, code and/or data storage 705 to store and be trained as and/or used for inference in aspects of one or more embodiments The neurons or layers of the neural network correspond to the inverse and/or output weights and/or input/output data of the neural network. In at least one embodiment, during training and/or inference using aspects of one or more embodiments, the code and/or data storage 705 is stored in combination with input/output data and/or weight parameters during back propagation. The weight parameters and/or input/output data of each layer of the neural network trained or used by more embodiments. In at least one embodiment, the training logic 715 may include or be coupled to the code and/or data memory 705 to store graphics code or other software to control the timing and/or sequence, where the weights and/or other parameter information will be loaded to configure the The logic of integer and/or floating-point units (collectively referred to as arithmetic logic unit (ALU)). In at least one embodiment, the code (eg, graphic code) loads the weight or other parameter information into the processor ALU based on the architecture of the neural network to which the code corresponds. 
In at least one embodiment, any portion of the code and/or data storage 705 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of the code and/or data storage 705 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, the data storage 705 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, the choice of whether the code and/or data storage 705 is internal or external to the processor, for example, or is composed of DRAM, SRAM, flash memory, or some other storage type, may depend on the available storage on-chip versus off-chip, the latency requirements of the training and/or inference functions being performed, the batch size of the data used in inference and/or training of the neural network, or some combination of these factors. In at least one embodiment, the code and/or data storage 701 and the code and/or data storage 705 may be separate storage structures. In at least one embodiment, the code and/or data storage 701 and the code and/or data storage 705 may be the same storage structure. In at least one embodiment, the code and/or data storage 701 and the code and/or data storage 705 may be partially the same storage structure and partially separate storage structures. In at least one embodiment, any portion of the code and/or data storage 701 and of the code and/or data storage 705 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. In at least one embodiment, the inference and/or training logic 715 may include, but is not limited to, one or more arithmetic logic units ("ALUs") 710, including integer and/or floating point units, to perform logical and/or mathematical operations based at least in part on training and/or inference code (for example, graphics code) or the instructions of such code, the result of which may produce activations (for example, output values from layers or neurons within a neural network) stored in an activation storage 720 that are functions of the input/output and/or weight parameter data stored in the code and/or data storage 701 and/or the code and/or data storage 705. In at least one embodiment, activations stored in the activation storage 720 are generated according to linear algebra and/or matrix-based mathematics performed by the ALUs 710 in response to executing instructions or other code, wherein weight values stored in the code and/or data storage 705 and/or the code and/or data storage 701 are used as operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in the code and/or data storage 705, the code and/or data storage 701, or other on-chip or off-chip storage. In at least one embodiment, one or more ALUs 710 are included within one or more processors or other hardware logic devices or circuits, while in another embodiment, one or more ALUs 710 may be external to the processor or other hardware logic devices or circuits that use them (for example, a coprocessor). In at least one embodiment, one or more ALUs 710 may be included within a processor's execution units, or may otherwise be included in a bank of ALUs accessible by the processor's execution units.
The units may be within the same processor or distributed between different types of processors (for example, central processing unit, graphics processing unit, fixed function unit, etc.). In at least one embodiment, the code and/or data storage 701, the code and/or data storage 705, and the activation storage 720 may be on the same processor or other hardware logic device or circuit, while in another embodiment, they may In different processors or other hardware logic devices or circuits or some combination of the same and different processors or other hardware logic devices or circuits. In at least one embodiment, any part of the activation store 720 can be included with other on-chip or off-chip data storage, including the processor's L1, L2, or L3 cache or system memory. In addition, the inference and/or training code can be stored together with other codes accessible to the processor or other hardware logic or circuits, and can use the processor’s extraction, decoding, scheduling, execution, exit, and/or other logic circuits to extract and /Or processing.In at least one embodiment, the activation storage 720 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, the activation store 720 may be entirely or partially internal or external to one or more processors or other logic circuits. In at least one embodiment, it may depend on the storage available on-chip or off-chip, the latency requirements for training and/or inference functions, the batch size of the data used in inference and/or training the neural network, or some of these factors. In combination, select whether the active storage 720 is internal or external to the processor, for example, or contains DRAM, SRAM, flash memory, or other storage types. In at least one embodiment, the inference and/or training logic 715 shown in FIG. 7A can be used in conjunction with an application specific integrated circuit ("ASIC"), such as a processing unit from Google, an inference processing unit (IPU) from GraphcoreTM, or (E.g. "Lake Crest") processor from Intel Corp. In at least one embodiment, the inference and/or training logic 715 shown in FIG. 7A can be combined with central processing unit ("CPU") hardware, graphics processing unit ("GPU") hardware, or other hardware (such as field programmable gate arrays). ("FPGA")) used in combination.Figure 7B shows inference and/or training logic 715 in accordance with at least one various embodiments. In at least one embodiment, the inference and/or training logic 715 may include, but is not limited to, hardware logic, in which computing resources are dedicated or otherwise uniquely combined with the weights corresponding to one or more layers of neurons within the neural network Value or other information. In at least one embodiment, the inference and/or training logic 715 shown in FIG. 7B can be used in conjunction with an application specific integrated circuit (ASIC), such as a processing unit from Google, an inference processing unit (IPU) from GraphcoreTM, or an Intel Corp's (eg "Lake Crest") processor. In at least one embodiment, the inference and/or training logic 715 shown in FIG. 7B can be combined with central processing unit (CPU) hardware, graphics processing unit (GPU) hardware, or other hardware (such as a field programmable gate array (FPGA)). ) In conjunction with. 
In at least one embodiment, the inference and/or training logic 715 includes, but is not limited to, code and/or data storage 701 and code and/or data storage 705, which can be used to store codes (eg, graphics codes), weight values, and /Or other information, including bias value, gradient information, momentum value, and/or other parameter or hyperparameter information. In at least one embodiment shown in FIG. 7B, each of the code and/or data storage 701 and the code and/or data storage 705 are respectively associated with dedicated computing resources (eg, computing hardware 702 and computing hardware 706) United. In at least one embodiment, each of the computing hardware 702 and the computing hardware 706 includes one or more ALUs, and these ALUs are only used for those stored in the code and/or data storage 701 and the code and/or data storage 705, respectively. The information performs a mathematical function (for example, a linear algebra function), and the result of the function is stored in the activation storage 720.In at least one embodiment, each of the code and/or data storage 701 and 705 and the corresponding computing hardware 702 and 706 respectively corresponds to a different layer of the neural network, so that the code and/or data storage 701 and the computing hardware 702 The activation obtained from one "storage/computation pair 701/702" of is provided as the input of the next "storage/computation pair 705/706" of the code and/or data storage 705 and computing hardware 706, so as to reflect the conceptual organization of the neural network. In at least one embodiment, each storage/computation pair 701/702 and 705/706 may correspond to more than one neural network layer. In at least one embodiment, additional storage/computation pairs (not shown) may be included in the inference and/or training logic 715 after or in parallel with the storage computation pairs 701/702 and 705/706.data centerFigure 8 shows an example data center 800 in which at least one embodiment may be used. In at least one embodiment, the data center 800 includes a data center infrastructure layer 810, a framework layer 820, a software layer 830, and an application layer 840.In at least one embodiment, as shown in FIG. 8, the data center infrastructure layer 810 may include a resource coordinator 812, a grouped computing resource 814, and a node computing resource ("Node CR") 816(1)-816(N) , Where "N" represents any complete positive integer. In at least one embodiment, nodes CR 816(1)-816(N) may include, but are not limited to, any number of central processing units ("CPUs") or other processors (including accelerators, field programmable gate arrays (FPGA)) , Graphics processor, etc.), memory devices (for example, dynamic read-only memory), storage devices (for example, solid state drives or disk drives), network input/output ("NW I/O") devices, network switches, virtual machines ( "VM"), power supply module and cooling module, etc. In at least one embodiment, one or more of the nodes C.R. 816(1)-816(N) may be a server having one or more of the aforementioned computing resources.In at least one embodiment, the grouped computing resources 814 may include a separate group (not shown) of nodes CR housed in one or more racks, or a number of racks housed in data centers in various geographic locations. (Also not shown). Individual groupings of nodes C.R. 
within grouped computing resources 814 may include computing, network, memory, or storage resources that can be configured or allocated as a group to support one or more workloads. In at least one embodiment, several nodes C.R. including CPUs or processors can be grouped in one or more racks to provide computing resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power supply modules, cooling modules, and network switches, in any combination.In at least one embodiment, the resource coordinator 812 can configure or otherwise control one or more nodes C.R. 816(1)-816(N) and/or grouped computing resources 814. In at least one embodiment, the resource coordinator 812 may include a software design infrastructure ("SDI") management entity for the data center 800. In at least one embodiment, the resource coordinator may include hardware, software, or some combination thereof.In at least one embodiment, as shown in FIG. 8, the framework layer 820 includes a job scheduler 822, a configuration manager 824, a resource manager 826 and a distributed file system 828. In at least one embodiment, the framework layer 820 may include a framework supporting software 832 of the software layer 830 and/or one or more application programs 842 of the application layer 840. In at least one embodiment, the software 832 or the application 842 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud, and Microsoft Azure. In at least one embodiment, the framework layer 820 can be, but is not limited to, a free and open source software network application framework, such as Apache that can use the distributed file system 828 to perform large-scale data processing (for example, "big data") SparkTM (hereinafter referred to as "Spark"). In at least one embodiment, the job scheduler 822 may include a Spark driver to facilitate scheduling of workloads supported by the various layers of the data center 800. In at least one embodiment, the configuration manager 824 may be able to configure different layers, such as a software layer 830 and a framework layer 820 including Spark and a distributed file system 828 for supporting large-scale data processing. In at least one embodiment, the resource manager 826 can manage cluster or group computing resources mapped to or allocated to support the distributed file system 828 and the job scheduler 822. In at least one embodiment, the cluster or grouped computing resources may include grouped computing resources 814 on the data center infrastructure layer 810. In at least one embodiment, the resource manager 826 can coordinate with the resource coordinator 812 to manage these mapped or allocated computing resources.In at least one embodiment, the software 832 included in the software layer 830 may include at least a part of the nodes CR816(1)-816(N), group computing resources 814 and/or the distributed file system 828 of the framework layer 820 Software used. One or more types of software may include, but are not limited to, Internet web search software, email virus scanning software, database software, and streaming video content software.In at least one embodiment, one or more application programs 842 included in the application layer 840 may include at least a part of nodes CR816(1)-816(N), grouped computing resources 814 and/or framework layer 820 The distributed file system 828 uses one or more types of applications. 
One or more types of applications may include, but are not limited to, any number of genomics applications, cognitive computing applications, and machine learning applications, including training or inference software, machine learning framework software (e.g., PyTorch, TensorFlow, Caffe, etc.), or other machine learning applications used in conjunction with one or more embodiments. In at least one embodiment, any of the configuration manager 824, the resource manager 826, and the resource coordinator 812 can implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible manner. In at least one embodiment, the self-modifying actions can relieve a data center operator of the data center 800 from making potentially bad configuration decisions and can help avoid underutilized and/or poorly performing portions of the data center. In at least one embodiment, the data center 800 may include tools, services, software, or other resources to train one or more machine learning models, or to predict or infer information using one or more machine learning models, according to one or more embodiments described herein. For example, in at least one embodiment, a machine learning model can be trained by calculating weight parameters according to a neural network architecture using the software and computing resources described above with respect to the data center 800. In at least one embodiment, information can be inferred or predicted using a trained machine learning model corresponding to one or more neural networks, using the resources described above with respect to the data center 800 and weight parameters calculated through one or more of the training techniques described herein. In at least one embodiment, the data center may use CPUs, application-specific integrated circuits (ASICs), GPUs, FPGAs, or other hardware to perform training and/or inference using the above resources. In addition, one or more of the above-described software and/or hardware resources may be configured as a service to allow users to train models or to perform inference on information, such as for image recognition, speech recognition, or other artificial intelligence services. Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding the inference and/or training logic 715 are provided below in conjunction with FIG. 7A and/or FIG. 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 8 for inference or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. Such components can be used to generate various scene graphs from one or more rule sets, which can be used to generate training data or image content representing one or more scenes of a virtual environment.

Computer System

FIG. 9 is a block diagram showing an exemplary computer system 900 according to at least one embodiment, which may be a system with interconnected devices and components, a system on a chip (SOC), or some combination thereof, formed with a processor that may include execution units to execute instructions.
In at least one embodiment, according to the present disclosure, such as the embodiments described herein, the computer system 900 may include, but is not limited to, components, such as a processor 902, the execution unit of which includes logic to execute an algorithm for process data. In at least one embodiment, the computer system 900 may include a processor, such as the processor family available from Intel Corporation of Santa Clara, California, XeonTM, XScaleTM and/or StrongARMTM, CoreTM Or NervanaTM microprocessor, although other systems (including PCs with other microprocessors, engineering workstations, set-top boxes, etc.) can also be used. In at least one embodiment, the computer system 900 may execute a version of the WINDOWS operating system available from Microsoft Corporation of Redmond, Wash., although other operating systems (e.g., UNIX and Linux) , Embedded software and/or graphical user interface can also be used.The embodiments can be used in other devices, such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants ("PDAs"), and handheld PCs. In at least one embodiment, the embedded application may include a microcontroller, a digital signal processor ("DSP"), a system on a chip, a network computer ("NetPC"), a set-top box, a network hub, a wide area network ("WAN") switch, Or any other system that can execute one or more instructions according to at least one embodiment.In at least one embodiment, the computer system 900 may include, but is not limited to, a processor 902, which may include, but is not limited to, one or more execution units 908 to perform machine learning model training and/ Or reasoning. In at least one embodiment, the computer system 900 is a single-processor desktop or server system, but in another embodiment, the computer system 900 may be a multi-processor system. In at least one embodiment, the processor 902 may include, but is not limited to, a complex instruction set computer ("CISC") microprocessor, a reduced instruction set computing ("RISC") microprocessor, and a very long instruction word ("VLIW") A microprocessor, a processor that implements a combination of instruction sets, or any other processor device, such as a digital signal processor. In at least one embodiment, the processor 902 may be coupled to a processor bus 910, and the processor bus 910 may transmit data signals between the processor 902 and other components in the computer system 900.In at least one embodiment, the processor 902 may include, but is not limited to, a level 1 ("L1") internal cache memory ("cache") 904. In at least one embodiment, the processor 902 may have a single internal cache or multiple levels of internal cache. In at least one embodiment, the cache memory may reside outside the processor 902. Depending on specific implementations and requirements, other embodiments may also include a combination of internal and external caches. In at least one embodiment, the register file 906 can store different types of data in various registers, including but not limited to integer registers, floating point registers, status registers, and instruction pointer registers.In at least one embodiment, an execution unit 908 that includes but is not limited to logic that performs integer and floating-point operations is also located in the processor 902. 
In at least one embodiment, the processor 902 may also include a microcode ("ucode") read-only memory ("ROM") for storing the microcode of certain macro instructions. In at least one embodiment, the execution unit 908 may include logic for processing the packaged instruction set 909. In at least one embodiment, by including the packaged instruction set 909 in the instruction set of the general-purpose processor 902 and related circuits to execute the instructions, the packaged data in the general-purpose processor 902 can be used to perform many operations used by multimedia applications. . In one or more embodiments, it is possible to speed up and more efficiently execute many multimedia applications by using the full width of the processor’s data bus to perform operations on the packaged data, which may not require the processor’s data Smaller data units are transferred on the bus to perform one or more operations of one data element at a time.In at least one embodiment, the execution unit 908 can also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. In at least one embodiment, the computer system 900 may include, but is not limited to, a memory 920. In at least one embodiment, the memory 920 may be implemented as a dynamic random access memory ("DRAM") device, a static random access memory ("SRAM") device, a flash memory device, or other storage device. In at least one embodiment, the memory 920 may store instructions 919 and/or data 921 represented by data signals that can be executed by the processor 902.In at least one embodiment, the system logic chip may be coupled to the processor bus 910 and the memory 920. In at least one embodiment, the system logic chip may include, but is not limited to, a memory controller hub ("MCH") 916, and the processor 902 may communicate with the MCH 916 via the processor bus 910. In at least one embodiment, the MCH 916 may provide a high-bandwidth memory path 918 to the memory 920 for instruction and data storage and for storage of graphics commands, data, and textures. In at least one embodiment, the MCH 916 can initiate data signals between the processor 902, the memory 920, and other components in the computer system 900, and bridge data between the processor bus 910, the memory 920, and the system I/O 922 Signal. In at least one embodiment, the system logic chip may provide a graphics port for coupling to a graphics controller. In at least one embodiment, the MCH 916 may be coupled to the memory 920 through a high bandwidth memory path 918, and the graphics/video card 912 may be coupled to the MCH 916 through an accelerated graphics port (Accelerated Graphics Port) ("AGP") interconnect 914.In at least one embodiment, the computer system 900 can use the system I/O 922 as a proprietary hub interface bus to couple the MCH 916 to an I/O controller hub ("ICH") 930. In at least one embodiment, the ICH 930 can provide a direct connection to certain I/O devices through a local I/O bus. In at least one embodiment, the local I/O bus may include, but is not limited to, a high-speed I/O bus for connecting peripheral devices to the memory 920, the chipset, and the processor 902. Examples may include, but are not limited to, audio controller 929, firmware hub ("Flash BIOS") 928, wireless transceiver 926, data storage 924, traditional I/O controller 923 containing user input and keyboard interface 925, serial expansion port 927 (e.g. Universal Serial Bus (USB)) and network controller 934. 
The data storage 924 may include a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or another mass storage device.

In at least one embodiment, FIG. 9 shows a system including interconnected hardware devices or "chips", while in other embodiments FIG. 9 may show an exemplary system on a chip ("SoC"). In at least one embodiment, the devices may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe), or some combination thereof. In at least one embodiment, one or more components of the computer system 900 are interconnected using Compute Express Link ("CXL") interconnects.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding the inference and/or training logic 715 are provided below in conjunction with FIG. 7A and/or FIG. 7B. In at least one embodiment, the inference and/or training logic 715 may be used in the system of FIG. 9 for inference or prediction operations based, at least in part, on weight parameters calculated using the neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

Such components can be used to generate various scene graphs from one or more rule sets, which can be used to generate training data or image content representing one or more scenes of a virtual environment.

FIG. 10 is a block diagram showing an electronic device 1000 that utilizes a processor 1010, according to at least one embodiment. In at least one embodiment, the electronic device 1000 may be, for example and without limitation, a notebook computer, a tower server, a rack server, a blade server, a laptop computer, a desktop computer, a tablet computer, a mobile device, a phone, an embedded computer, or any other suitable electronic device.

In at least one embodiment, the system 1000 may include, but is not limited to, a processor 1010 communicatively coupled to any suitable number or type of components, peripherals, modules, or devices. In at least one embodiment, the processor 1010 is coupled using a bus or interface, such as an I2C bus, a system management bus ("SMBus"), a low pin count ("LPC") bus, a serial peripheral interface ("SPI"), a high-definition audio ("HDA") bus, a Serial Advanced Technology Attachment ("SATA") bus, a Universal Serial Bus ("USB") (versions 1, 2, 3), or a universal asynchronous receiver/transmitter ("UART") bus. In at least one embodiment, FIG. 10 shows a system that includes interconnected hardware devices or "chips", while in other embodiments FIG. 10 may show an exemplary system on a chip ("SoC"). In at least one embodiment, the devices shown in FIG. 10 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe), or some combination thereof. In at least one embodiment, one or more components of FIG. 10 are interconnected using Compute Express Link ("CXL") interconnects.
In at least one embodiment, FIG. 10 may include a display 1024, a touch screen 1025, a touch pad 1030, a near field communication ("NFC") unit 1045, a sensor hub 1040, a thermal sensor 1046, an embedded controller ("EC") 1035, a Trusted Platform Module ("TPM") 1038, BIOS/firmware/flash memory ("BIOS, FW Flash") 1022, a DSP 1060, a drive 1020 ("SSD" or "HDD") (for example, a solid state disk ("SSD") or a hard disk drive ("HDD")), a wireless local area network unit ("WLAN") 1050, a Bluetooth unit 1052, a wireless wide area network unit ("WWAN") 1056, a Global Positioning System ("GPS") unit 1055, a camera ("USB 3.0 camera") 1054 (for example, a USB 3.0 camera), and/or a low power double data rate ("LPDDR") memory unit ("LPDDR3") 1015 implemented in, for example, the LPDDR3 standard. These components may each be implemented in any suitable manner.

In at least one embodiment, other components may be communicatively coupled to the processor 1010 through the components discussed above. In at least one embodiment, an accelerometer 1041, an ambient light sensor ("ALS") 1042, a compass 1043, and a gyroscope 1044 may be communicatively coupled to the sensor hub 1040. In at least one embodiment, a thermal sensor 1039, a fan 1037, a keyboard 1046, and the touch pad 1030 may be communicatively coupled to the EC 1035. In at least one embodiment, a speaker 1063, headphones 1064, and a microphone ("mic") 1065 may be communicatively coupled to an audio unit ("audio codec and class D amplifier") 1064, which in turn may be communicatively coupled to the DSP 1060. In at least one embodiment, the audio unit 1064 may include, for example and without limitation, an audio encoder/decoder ("codec") and a class D amplifier. In at least one embodiment, a SIM card ("SIM") 1057 may be communicatively coupled to the WWAN unit 1056. In at least one embodiment, components such as the WLAN unit 1050 and the Bluetooth unit 1052, as well as the WWAN unit 1056, may be implemented in a next-generation form factor ("NGFF").

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding the inference and/or training logic 715 are provided below in conjunction with FIG. 7A and/or FIG. 7B. In at least one embodiment, the inference and/or training logic 715 may be used in the system of FIG. 10 for inference or prediction operations based, at least in part, on weight parameters calculated using the neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

Such components can be used to generate various scene graphs from one or more rule sets, which can be used to generate training data or image content representing one or more scenes of a virtual environment.

FIG. 11 is a block diagram of a processing system, according to at least one embodiment. In at least one embodiment, the system 1100 includes one or more processors 1102 and one or more graphics processors 1108, and may be a single-processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 1102 or processor cores 1107. In at least one embodiment, the system 1100 is a processing platform incorporated within a system-on-chip ("SoC") integrated circuit for use in mobile, handheld, or embedded devices.

In at least one embodiment, the system 1100 may include, or be incorporated within, a server-based gaming platform, a game console including a game and media console, a mobile gaming console, a handheld game console, or an online game console.
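The recurring paragraphs above on the inference and/or training logic 715 describe reusing weight parameters produced by training for later inference or prediction. As a non-limiting illustration only, the following sketch shows that idea with a tiny two-layer network in NumPy; the stored arrays merely stand in for on-chip or off-chip memory and registers, and none of the names correspond to the actual logic 715.

    # Illustrative sketch: stored weight parameters reused for a prediction.
    import numpy as np

    rng = np.random.default_rng(0)

    # "Weight parameters calculated using neural network training operations"
    # (here simply initialized, as a placeholder for a real training result).
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

    def infer(x: np.ndarray) -> np.ndarray:
        """Forward pass of a two-layer network using the stored weights."""
        h = np.maximum(x @ W1 + b1, 0.0)      # ReLU hidden layer
        logits = h @ W2 + b2
        return np.argmax(logits, axis=-1)     # predicted class per input row

    predictions = infer(rng.normal(size=(3, 4)))
    print(predictions)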
In at least one embodiment, the system 1100 is a mobile phone, a smart phone, a tablet computing device, or a mobile Internet device. In at least one embodiment, the processing system 1100 may also include, be coupled with, or be integrated within a wearable device, such as a smart watch wearable device, a smart eyewear device, an augmented reality device, or a virtual reality device. In at least one embodiment, the processing system 1100 is a television or set-top box device having one or more processors 1102 and a graphical interface generated by the one or more graphics processors 1108.

In at least one embodiment, the one or more processors 1102 each include one or more processor cores 1107 to process instructions that, when executed, perform operations for system and user software. In at least one embodiment, each of the one or more processor cores 1107 is configured to process a specific instruction set 1109. In at least one embodiment, the instruction set 1109 may facilitate complex instruction set computing ("CISC"), reduced instruction set computing ("RISC"), or computing via a very long instruction word ("VLIW"). In at least one embodiment, the processor cores 1107 may each process a different instruction set 1109, which may include instructions to facilitate emulation of other instruction sets. In at least one embodiment, the processor core 1107 may also include other processing devices, such as a digital signal processor ("DSP").

In at least one embodiment, the processor 1102 includes a cache memory 1104. In at least one embodiment, the processor 1102 may have a single internal cache or multiple levels of internal cache. In at least one embodiment, the cache memory is shared among various components of the processor 1102. In at least one embodiment, the processor 1102 also uses an external cache (e.g., a level 3 ("L3") cache or last level cache ("LLC")) (not shown), which may be shared among the processor cores 1107 using known cache coherency techniques. In at least one embodiment, the processor 1102 additionally includes a register file 1106, which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and instruction pointer registers). In at least one embodiment, the register file 1106 may include general-purpose registers or other registers.

In at least one embodiment, the one or more processors 1102 are coupled with one or more interface buses 1110 to transmit communication signals, such as address, data, or control signals, between the processor 1102 and other components in the system 1100. In at least one embodiment, the interface bus 1110 may be a processor bus, such as a version of the direct media interface ("DMI") bus. In at least one embodiment, the interface bus 1110 is not limited to a DMI bus, and may include one or more peripheral component interconnect buses (e.g., PCI, PCI Express), memory buses, or other types of interface buses. In at least one embodiment, the processor 1102 includes an integrated memory controller 1116 and a platform controller hub 1130.
In at least one embodiment, the memory controller 1116 facilitates communication between a memory device and other components of the system 1100, while the platform controller hub ("PCH") 1130 provides connections to I/O devices via a local I/O bus.

In at least one embodiment, the memory device 1120 may be a dynamic random access memory ("DRAM") device, a static random access memory ("SRAM") device, a flash memory device, a phase-change memory device, or some other memory device having suitable performance to serve as process memory. In at least one embodiment, the memory device 1120 may operate as system memory for the system 1100, to store data 1122 and instructions 1121 for use when the one or more processors 1102 execute an application or process. In at least one embodiment, the memory controller 1116 is also coupled with an optional external graphics processor 1112, which may communicate with the one or more graphics processors 1108 of the processors 1102 to perform graphics and media operations. In at least one embodiment, a display device 1111 may be connected to the one or more processors 1102. In at least one embodiment, the display device 1111 may include one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.). In at least one embodiment, the display device 1111 may include a head-mounted display ("HMD"), such as a stereoscopic display device for use in virtual reality ("VR") applications or augmented reality ("AR") applications.

In at least one embodiment, the platform controller hub 1130 enables peripheral devices to connect to the memory device 1120 and the processor 1102 via a high-speed I/O bus. In at least one embodiment, the I/O peripheral devices include, but are not limited to, an audio controller 1146, a network controller 1134, a firmware interface 1128, a wireless transceiver 1126, touch sensors 1125, and a data storage device 1124 (e.g., a hard disk drive, flash memory, etc.). In at least one embodiment, the data storage device 1124 may connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a peripheral component interconnect bus (e.g., PCI, PCI Express). In at least one embodiment, the touch sensors 1125 may include touch screen sensors, pressure sensors, or fingerprint sensors. In at least one embodiment, the wireless transceiver 1126 may be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver, such as a 3G, 4G, or Long Term Evolution ("LTE") transceiver. In at least one embodiment, the firmware interface 1128 enables communication with system firmware, and may be, for example, a unified extensible firmware interface ("UEFI"). In at least one embodiment, the network controller 1134 may enable a network connection to a wired network. In at least one embodiment, a high-performance network controller (not shown) is coupled with the interface bus 1110. In at least one embodiment, the audio controller 1146 is a multi-channel high-definition audio controller. In at least one embodiment, the system 1100 includes an optional legacy I/O controller 1140 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system.
In at least one embodiment, the platform controller hub 1130 may also connect to one or more universal serial bus ("USB") controllers 1142, which connect input devices such as a keyboard and mouse 1143 combination, a camera 114, or other USB input devices.

In at least one embodiment, instances of the memory controller 1116 and the platform controller hub 1130 may be integrated into a discrete external graphics processor, such as the external graphics processor 1112. In at least one embodiment, the platform controller hub 1130 and/or the memory controller 1116 may be external to the one or more processors 1102. For example, in at least one embodiment, the system 1100 may include an external memory controller 1116 and platform controller hub 1130, which may be configured as a memory controller hub and a peripheral controller hub within a system chipset that is in communication with the one or more processors 1102.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding the inference and/or training logic 715 are provided below in conjunction with FIG. 7A and/or FIG. 7B. In at least one embodiment, part or all of the inference and/or training logic 715 may be incorporated into the graphics processor 1500. For example, in at least one embodiment, the training and/or inference techniques described herein may use one or more ALUs embodied in a graphics processor. Moreover, in at least one embodiment, the inference and/or training operations described herein may be performed using logic other than the logic shown in FIG. 7A or FIG. 7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of the graphics processor to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.

Such components can be used to generate various scene graphs from one or more rule sets, which can be used to generate training data or image content representing one or more scenes of a virtual environment.

FIG. 12 is a block diagram of a processor 1200 having one or more processor cores 1202A-1202N, an integrated memory controller 1214, and an integrated graphics processor 1208, according to at least one embodiment. In at least one embodiment, the processor 1200 may include additional cores up to and including an additional core 1202N, represented by the dashed boxes. In at least one embodiment, each of the processor cores 1202A-1202N includes one or more internal cache units 1204A-1204N. In at least one embodiment, each processor core also has access to one or more shared cache units 1206.

In at least one embodiment, the internal cache units 1204A-1204N and the shared cache units 1206 represent a cache memory hierarchy within the processor 1200. In at least one embodiment, the cache memory units 1204A-1204N may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as level 2 ("L2"), level 3 ("L3"), level 4 ("L4"), or other levels of cache, where the highest level of cache before external memory is classified as the LLC.
In at least one embodiment, cache coherency logic maintains coherency between the various cache units 1206 and 1204A-1204N.

In at least one embodiment, the processor 1200 may also include a set of one or more bus controller units 1216 and a system agent core 1210. In at least one embodiment, the one or more bus controller units 1216 manage a set of peripheral buses, such as one or more PCI or PCI Express buses. In at least one embodiment, the system agent core 1210 provides management functionality for the various processor components. In at least one embodiment, the system agent core 1210 includes one or more integrated memory controllers 1214 to manage access to various external memory devices (not shown).

In at least one embodiment, one or more of the processor cores 1202A-1202N include support for simultaneous multithreading. In at least one embodiment, the system agent core 1210 includes components for coordinating and operating the cores 1202A-1202N during multithreaded processing. In at least one embodiment, the system agent core 1210 may additionally include a power control unit ("PCU"), which includes logic and components to regulate one or more power states of the processor cores 1202A-1202N and the graphics processor 1208.

In at least one embodiment, the processor 1200 additionally includes a graphics processor 1208 to execute graphics processing operations. In at least one embodiment, the graphics processor 1208 is coupled with the shared cache units 1206 and the system agent core 1210, including the one or more integrated memory controllers 1214. In at least one embodiment, the system agent core 1210 also includes a display controller 1211 to drive graphics processor output to one or more coupled displays. In at least one embodiment, the display controller 1211 may also be a separate module coupled with the graphics processor 1208 via at least one interconnect, or may be integrated within the graphics processor 1208.

In at least one embodiment, a ring-based interconnect unit 1212 is used to couple the internal components of the processor 1200. In at least one embodiment, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques. In at least one embodiment, the graphics processor 1208 is coupled to the ring interconnect 1212 via an I/O link 1213.

In at least one embodiment, the I/O link 1213 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect that facilitates communication between various processor components and a high-performance embedded memory module 1218, such as an eDRAM module. In at least one embodiment, each of the processor cores 1202A-1202N and the graphics processor 1208 uses the embedded memory module 1218 as a shared last level cache.

In at least one embodiment, the processor cores 1202A-1202N are homogeneous cores executing a common instruction set architecture. In at least one embodiment, the processor cores 1202A-1202N are heterogeneous in terms of instruction set architecture ("ISA"), where one or more of the processor cores 1202A-1202N execute a common instruction set, while one or more other cores of the processor cores 1202A-1202N execute a subset of the common instruction set or a different instruction set. In at least one embodiment, the processor cores 1202A-1202N are heterogeneous in terms of microarchitecture, where one or more cores having relatively higher power consumption are coupled with one or more power cores having relatively lower power consumption. In at least one embodiment, the processor 1200 may be implemented on one or more chips or as an SoC integrated circuit.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding the inference and/or training logic 715 are provided below in conjunction with FIG. 7A and/or FIG. 7B. In at least one embodiment, part or all of the inference and/or training logic 715 may be incorporated into the processor 1200. For example, in at least one embodiment, the training and/or inference techniques described herein may use one or more ALUs embodied in the graphics processor 1512, the graphics cores 1202A-1202N, or other components of FIG. 12. Moreover, in at least one embodiment, the inference and/or training operations described herein may be performed using logic other than the logic shown in FIG. 7A or FIG. 7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of the processor 1200 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.

Such components can be used to generate various scene graphs from one or more rule sets, which can be used to generate training data or image content representing one or more scenes of a virtual environment.

Virtualized Computing Platform

FIG. 13 is an example data flow diagram of a process 1300 for generating and deploying an image processing and inferencing pipeline, in accordance with at least one embodiment. In at least one embodiment, the process 1300 may be deployed for use with imaging devices, processing devices, and/or other device types at one or more facilities 1302. The process 1300 may be executed within a training system 1304 and/or a deployment system 1306. In at least one embodiment, the training system 1304 may be used to perform training, deployment, and implementation of machine learning models (e.g., neural networks, object detection algorithms, computer vision algorithms, etc.) for use in the deployment system 1306. In at least one embodiment, the deployment system 1306 may be configured to offload processing and compute resources among a distributed computing environment to reduce infrastructure requirements at the facility 1302. In at least one embodiment, one or more applications in a pipeline may use or call upon services (e.g., inference, visualization, compute, AI, etc.) of the deployment system 1306 during execution of the applications.

In at least one embodiment, some of the applications used in advanced processing and inferencing pipelines may use machine learning models or other AI to perform one or more processing steps. In at least one embodiment, machine learning models may be trained at the facility 1302 using data 1308 (such as imaging data) generated at the facility 1302 (and stored on one or more picture archiving and communication system ("PACS") servers at the facility 1302), may be trained using imaging or sequencing data 1308 from another facility, or a combination thereof.
In at least one embodiment, the training system 1304 may be used to provide applications, services, and/or other resources to generate deployable machine learning models for the deployment system 1306.

In at least one embodiment, the model registry 1324 may be backed by object storage, which may support versioning and object metadata. In at least one embodiment, the object storage may be accessible from within a cloud platform through, for example, a cloud storage compatible application programming interface ("API") (e.g., cloud 1426 of FIG. 14). In at least one embodiment, machine learning models within the model registry 1324 may be uploaded, listed, modified, or deleted by personnel or partners of the system interacting with the API. In at least one embodiment, the API may provide access to methods that allow users with appropriate credentials to associate a model with an application, such that the model may be executed as part of the execution of a containerized instantiation of the application.

In at least one embodiment, the training pipeline 1404 (FIG. 14) may include a scenario in which the facility 1302 is training their own machine learning model, or has an existing machine learning model that needs to be optimized or updated. In at least one embodiment, imaging data 1308 generated by one or more imaging devices, sequencing devices, and/or other device types may be received. In at least one embodiment, once the imaging data 1308 is received, AI-assisted annotation 1310 may be used to aid in generating annotations corresponding to the imaging data 1308 to be used as ground truth data for a machine learning model. In at least one embodiment, the AI-assisted annotation 1310 may include one or more machine learning models (e.g., convolutional neural networks ("CNNs")) that may be trained to generate annotations corresponding to certain types of imaging data 1308 (e.g., from certain devices). In at least one embodiment, the AI-assisted annotations 1310 may then be used directly, or may be adjusted or fine-tuned using an annotation tool to generate ground truth data. In at least one embodiment, the AI-assisted annotations 1310, labeled clinical data 1312, or a combination thereof may be used as ground truth data for training a machine learning model. In at least one embodiment, a trained machine learning model may be referred to as an output model 1316, and may be used by the deployment system 1306, as described herein.

In at least one embodiment, the training pipeline 1404 (FIG. 14) may include a scenario in which the facility 1302 needs a machine learning model for use in performing one or more processing tasks for one or more applications in the deployment system 1306, but the facility 1302 may not currently have such a machine learning model (or may not have a model that is optimized, efficient, or effective for such purposes). In at least one embodiment, an existing machine learning model may be selected from the model registry 1324. In at least one embodiment, the model registry 1324 may include machine learning models trained to perform a variety of different inference tasks on imaging data. In at least one embodiment, the machine learning models in the model registry 1324 may have been trained on imaging data from facilities different from the facility 1302 (e.g., facilities located remotely). In at least one embodiment, the machine learning models may have been trained on imaging data from one location, two locations, or any number of locations.
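As a non-limiting illustration of the model registry interactions described above (uploading, listing, versioning, and deleting models, and referencing them by name and version), the following sketch shows a minimal in-memory registry. The actual API of the model registry 1324 and its object-storage backing are not specified in this document; the class, method names, and example entries below are hypothetical.

    # Hypothetical, minimal in-memory sketch of a model registry.
    from dataclasses import dataclass, field

    @dataclass
    class ModelRecord:
        name: str
        version: int
        artifact_uri: str                 # e.g., an object-storage location
        metadata: dict = field(default_factory=dict)

    class ModelRegistry:
        def __init__(self):
            self._models = {}

        def upload(self, record: ModelRecord) -> None:
            self._models[(record.name, record.version)] = record

        def list_models(self) -> list:
            return sorted(self._models.values(), key=lambda r: (r.name, r.version))

        def latest(self, name: str) -> ModelRecord:
            versions = [r for (n, _), r in self._models.items() if n == name]
            return max(versions, key=lambda r: r.version)

        def delete(self, name: str, version: int) -> None:
            self._models.pop((name, version), None)

    registry = ModelRegistry()
    registry.upload(ModelRecord("organ-segmentation", 1, "s3://models/seg-v1",
                                {"modality": "CT"}))
    registry.upload(ModelRecord("organ-segmentation", 2, "s3://models/seg-v2",
                                {"modality": "CT", "finetuned": True}))
    print(registry.latest("organ-segmentation").artifact_uri)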
In at least one embodiment, when being trained with imaging data from a specific location, the training may take place at that location, or at least in a manner that protects the confidentiality of the imaging data or restricts the imaging data from being transferred off-premises. In at least one embodiment, once a model is trained, or partially trained, at one location, the machine learning model may be added to the model registry 1324. In at least one embodiment, the machine learning model may then be retrained or updated at any number of other facilities, and the retrained or updated model may be made available in the model registry 1324. In at least one embodiment, a machine learning model may then be selected from the model registry 1324, referred to as the output model 1316, and used in the deployment system 1306 to perform one or more processing tasks for one or more applications of the deployment system.

In at least one embodiment, the training pipeline 1404 (FIG. 14) may include a scenario in which the facility 1302 requires a machine learning model for use in performing one or more processing tasks for one or more applications in the deployment system 1306, but the facility 1302 may not currently have such a machine learning model (or may not have a model that is optimized, efficient, or effective for such purposes). In at least one embodiment, a machine learning model selected from the model registry 1324 may not be fine-tuned or optimized for the imaging data 1308 generated at the facility 1302 because of differences in populations, the robustness of the training data used to train the machine learning model, the diversity in anomalies of the training data, and/or other issues with the training data. In at least one embodiment, AI-assisted annotation 1310 may be used to aid in generating annotations corresponding to the imaging data 1308 to be used as ground truth data for retraining or updating the machine learning model. In at least one embodiment, the labeled data 1312 may be used as ground truth data for training the machine learning model. In at least one embodiment, retraining or updating a machine learning model may be referred to as model training 1314. In at least one embodiment, model training 1314 (e.g., using AI-assisted annotations 1310, labeled clinical data 1312, or a combination thereof as ground truth data) may be used to retrain or update the machine learning model. In at least one embodiment, the trained machine learning model may be referred to as the output model 1316, and may be used by the deployment system 1306, as described herein.
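As a non-limiting illustration of the retrain/update scenario just described (selecting a pre-trained model and adapting it to facility-local labeled data), the following sketch fine-tunes a tiny logistic-regression classifier with gradient descent in NumPy. This is not the actual model training 1314; the "pre-trained" weights, local data, and hyperparameters are hypothetical stand-ins.

    # Illustrative sketch: fine-tuning a pre-trained model on local labels.
    import numpy as np

    rng = np.random.default_rng(1)

    # "Pre-trained" weights, e.g., pulled from the model registry.
    w = rng.normal(size=5)
    b = 0.0

    # Facility-local labeled data (e.g., reviewed AI-assisted annotations).
    X_local = rng.normal(size=(64, 5))
    y_local = (X_local @ rng.normal(size=5) > 0).astype(float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # A few hundred steps of gradient descent adapt the model to local data.
    lr = 0.1
    for _ in range(200):
        p = sigmoid(X_local @ w + b)
        grad_w = X_local.T @ (p - y_local) / len(y_local)
        grad_b = float(np.mean(p - y_local))
        w -= lr * grad_w
        b -= lr * grad_b

    accuracy = np.mean((sigmoid(X_local @ w + b) > 0.5) == y_local)
    print(f"post-fine-tuning accuracy on local data: {accuracy:.2f}")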
In at least one embodiment, the deployment system 1306 may include software 1318, services 1320, hardware 1322, and/or other components, features, and functionality. In at least one embodiment, the deployment system 1306 may include a software "stack", such that the software 1318 may be built on top of the services 1320 and may use the services 1320 to perform some or all of the processing tasks, and the services 1320 and the software 1318 may be built on top of the hardware 1322 and may use the hardware 1322 to perform the processing, storage, and/or other compute tasks of the deployment system 1306. In at least one embodiment, the software 1318 may include any number of different containers, where each container may execute an instantiation of an application. In at least one embodiment, each application may perform one or more processing tasks (e.g., inference, object detection, feature detection, segmentation, image enhancement, calibration, etc.) in the advanced processing and inferencing pipeline.

In at least one embodiment, an advanced processing and inferencing pipeline may be defined based on selections of the different containers that are desired or required for processing the imaging data 1308, in addition to containers that receive and configure imaging data for use by each container and/or for use by the facility 1302 after processing through a pipeline (e.g., to convert outputs back to a usable data type). In at least one embodiment, a combination of containers within the software 1318 (e.g., the containers that make up a pipeline) may be referred to as a virtual instrument (as described in more detail herein), and the virtual instrument may leverage the services 1320 and the hardware 1322 to execute some or all of the processing tasks of the applications instantiated in the containers.

In at least one embodiment, a data processing pipeline may receive input data (e.g., imaging data 1308) in a specific format in response to an inference request (e.g., a request from a user of the deployment system 1306). In at least one embodiment, the input data may represent one or more images, videos, and/or other data representations generated by one or more imaging devices. In at least one embodiment, data may undergo pre-processing as part of the data processing pipeline to prepare the data for processing by one or more applications. In at least one embodiment, post-processing may be performed on the output of one or more inferencing tasks or other processing tasks of the pipeline to prepare output data for the next application and/or to prepare output data for transmission and/or use by a user (e.g., as a response to an inference request). In at least one embodiment, the inferencing tasks may be performed by one or more machine learning models, such as trained or deployed neural networks, which may include the output models 1316 of the training system 1304.

In at least one embodiment, the tasks of the data processing pipeline may be encapsulated in one or more containers, each representing a discrete, fully functional instantiation of an application and of a virtualized computing environment that is able to reference machine learning models. In at least one embodiment, containers or applications may be published into a private (e.g., limited access) area of a container registry (described in more detail herein), and trained or deployed models may be stored in the model registry 1324 and associated with one or more applications. In at least one embodiment, images of the applications (e.g., container images) may be available in the container registry, and once selected by a user from the container registry for deployment in a pipeline, an image may be used to generate a container for an instantiation of the application for use by the user's system.

In at least one embodiment, developers (e.g., software developers, clinicians, doctors, etc.) may develop, publish, and store applications (e.g., as containers) for performing image processing and/or inference on supplied data. In at least one embodiment, development, publishing, and/or storing may be performed using a software development kit ("SDK") associated with the system (e.g., to ensure that the developed application and/or container is compliant with or compatible with the system). In at least one embodiment, the application being developed may be tested and developed locally (e.g., at the first facility, on data from the first facility) with an SDK that may support at least some of the services 1320 as a system (e.g., the system 1400 of FIG. 14). In at least one embodiment, because DICOM objects may contain anywhere from one to hundreds of images or other data types, and due to variation in data, a developer may be responsible for managing (e.g., setting constructs for, building pre-processing into an application for, etc.) the extraction and preparation of incoming data. In at least one embodiment, once validated by the system 1400 (e.g., for accuracy), an application may be available in the container registry for selection and/or implementation by a user to perform one or more processing tasks with respect to data at the user's facility (e.g., a second facility).

In at least one embodiment, developers may then share applications or containers through a network for access and use by users of the system (e.g., the system 1400 of FIG. 14). In at least one embodiment, completed and validated applications or containers may be stored in the container registry, and the associated machine learning models may be stored in the model registry 1324. In at least one embodiment, a requesting entity (which provides an inference or image processing request) may browse the container registry and/or the model registry 1324 for an application, container, dataset, machine learning model, etc., select the desired combination of elements for inclusion in a data processing pipeline, and submit an imaging processing request. In at least one embodiment, the request may include the input data (and, in some examples, associated patient data) that is necessary to perform the request, and/or may include a selection of the one or more applications and/or machine learning models to be executed in processing the request. In at least one embodiment, the request may then be passed to one or more components of the deployment system 1306 (e.g., a cloud) to perform processing of the data processing pipeline. In at least one embodiment, the processing by the deployment system 1306 may include referencing the selected elements (e.g., applications, containers, models, etc.) from the container registry and/or the model registry 1324. In at least one embodiment, once the results are generated by the pipeline, the results may be returned to the user for reference (e.g., for viewing in a viewing application suite executing on a local, on-premises workstation or terminal).

In at least one embodiment, the services 1320 may be leveraged to aid in the processing or execution of applications or containers in the pipelines. In at least one embodiment, the services 1320 may include compute services, artificial intelligence ("AI") services, visualization services, and/or other service types. In at least one embodiment, the services 1320 may provide functionality that is common to one or more applications in the software 1318, so the functionality may be abstracted to a service that may be called upon or leveraged by the applications. In at least one embodiment, the functionality provided by the services 1320 may run dynamically and more efficiently, while also scaling well by allowing applications to process data in parallel (e.g., using the parallel computing platform 1430 (FIG. 14)). In at least one embodiment, rather than each application that shares the same functionality offered by a service 1320 being required to have a respective instance of the service 1320, the service 1320 may be shared between and among the various applications.
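As a non-limiting illustration of the request flow described above (a requesting entity selects applications and models, submits an imaging processing request with its input data, and results are returned by the pipeline), the following sketch assembles such a request on the client side. The deployment system's actual API, endpoint, and payload schema are not specified in this document, so the URL, field names, and helper functions below are hypothetical and shown only to make the flow concrete.

    # Hypothetical sketch of building and submitting an imaging processing request.
    from __future__ import annotations
    import json
    from urllib import request as urlrequest

    def build_request(input_uri: str, applications: list[str],
                      models: list[str], patient_ref: str | None = None) -> dict:
        """Bundle the input data location with the selected pipeline elements."""
        payload = {
            "input_data": input_uri,            # e.g., a DICOM study location
            "applications": applications,       # containers chosen from the registry
            "models": models,                   # entries from the model registry
        }
        if patient_ref is not None:
            payload["patient_ref"] = patient_ref
        return payload

    def submit(payload: dict, endpoint: str = "http://localhost:8000/requests"):
        """POST the request to a (hypothetical) deployment-system endpoint."""
        req = urlrequest.Request(
            endpoint,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urlrequest.urlopen(req) as resp:   # would return pipeline results
            return json.loads(resp.read())

    payload = build_request(
        input_uri="pacs://facility-1302/study/123",
        applications=["ct-reconstruction", "organ-segmentation"],
        models=["organ-segmentation:v2"],
    )
    print(json.dumps(payload, indent=2))
    # submit(payload)  # requires a running endpoint; shown for illustration only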
In at least one embodiment, as a non-limiting example, a service may include an inference server or engine that may be used for executing detection or segmentation tasks. In at least one embodiment, a model training service may be included that may provide machine learning model training and/or retraining capabilities. In at least one embodiment, a data augmentation service may further be included that may provide GPU-accelerated extraction, resizing, scaling, and/or other augmentation of data (e.g., DICOM, RIS, CIS, REST compliant, RPC, raw, etc.). In at least one embodiment, a visualization service may be used that may add image rendering effects (e.g., ray tracing, rasterization, denoising, sharpening, etc.) to add realism to two-dimensional (2D) and/or three-dimensional (3D) model images. In at least one embodiment, virtual instrument services may be included that provide beamforming, segmentation, inferencing, imaging, and/or support for other applications within pipelines of virtual instruments.

In at least one embodiment, where a service 1320 includes an AI service (e.g., an inference service), one or more machine learning models may be executed by calling upon (e.g., as an API call) an inference service (e.g., an inference server) to execute the one or more machine learning models, or processing thereof, as part of application execution. In at least one embodiment, where another application includes one or more machine learning models for segmentation tasks, the application may call upon the inference service to execute the one or more machine learning models for performing one or more of the processing operations associated with the segmentation tasks. In at least one embodiment, the software 1318 implementing an advanced processing and inferencing pipeline that includes a segmentation application and an anomaly detection application may be streamlined because each application may call upon the same inference service to perform one or more inferencing tasks.

In at least one embodiment, the hardware 1322 may include GPUs, CPUs, graphics cards, an AI/deep learning system (e.g., an AI supercomputer, such as NVIDIA's DGX), a cloud platform, or a combination thereof. In at least one embodiment, different types of hardware 1322 may be used to provide efficient, purpose-built support for the software 1318 and the services 1320 in the deployment system 1306. In at least one embodiment, the use of GPU processing may be implemented for processing locally (e.g., at the facility 1302), within an AI/deep learning system, in a cloud system, and/or in other processing components of the deployment system 1306, to improve the efficiency, accuracy, and efficacy of image processing and generation. In at least one embodiment, as non-limiting examples, the software 1318 and/or the services 1320 may be optimized for GPU processing with respect to deep learning, machine learning, and/or high-performance computing. In at least one embodiment, at least some of the computing environment of the deployment system 1306 and/or the training system 1304 may be executed in a datacenter, on one or more supercomputers, or on high-performance computing systems with GPU-optimized software (e.g., the hardware and software combination of NVIDIA's DGX systems). In at least one embodiment, the hardware 1322 may include any number of GPUs that may be called upon to perform processing of data in parallel, as described herein.
In at least one embodiment, the cloud platform may further include GPU processing for GPU-optimized execution of deep learning tasks, machine learning tasks, or other computing tasks. In at least one embodiment, the cloud platform (e.g., NVIDIA's NGC) may be executed using one or more AI/deep learning supercomputers and/or GPU-optimized software (e.g., as provided on NVIDIA's DGX systems) as a hardware abstraction and scaling platform. In at least one embodiment, the cloud platform may integrate an application container clustering system or orchestration system (e.g., KUBERNETES) on multiple GPUs to enable seamless scaling and load balancing.

FIG. 14 is a system diagram of an example system 1400 for generating and deploying an imaging deployment pipeline, according to at least one embodiment. In at least one embodiment, the system 1400 may be used to implement the process 1300 of FIG. 13 and/or other processes, including advanced processing and inferencing pipelines. In at least one embodiment, the system 1400 may include the training system 1304 and the deployment system 1306. In at least one embodiment, the training system 1304 and the deployment system 1306 may be implemented using the software 1318, the services 1320, and/or the hardware 1322, as described herein.

In at least one embodiment, the system 1400 (e.g., the training system 1304 and/or the deployment system 1306) may be implemented in a cloud computing environment (e.g., using the cloud 1426). In at least one embodiment, the system 1400 may be implemented locally with respect to a healthcare services facility, or as a combination of both cloud and local computing resources. In at least one embodiment, access to APIs in the cloud 1426 may be restricted to authorized users through enacted security measures or protocols. In at least one embodiment, a security protocol may include web tokens that may be signed by an authentication (e.g., AuthN, AuthZ, Gluecon, etc.) service and that may carry appropriate authorization. In at least one embodiment, APIs of virtual instruments (described herein), or other instantiations of the system 1400, may be restricted to a set of public IPs that have been vetted or authorized for interaction.

In at least one embodiment, the various components of the system 1400 may communicate between and among one another using any of a variety of different network types, including but not limited to local area networks ("LANs") and/or wide area networks ("WANs"), via wired and/or wireless communication protocols. In at least one embodiment, communication between facilities and components of the system 1400 (e.g., for transmitting inference requests, for receiving results of inference requests, etc.) may take place over one or more of these network types.

In at least one embodiment, the training system 1304 may execute training pipelines 1404, similar to those described herein with respect to FIG. 13. In at least one embodiment, where one or more machine learning models are to be used in deployment pipelines 1410 by the deployment system 1306, the training pipelines 1404 may be used to train or retrain one or more (e.g., pre-trained) models, and/or to implement one or more pre-trained models 1406 (e.g., without a need for retraining or updating). In at least one embodiment, one or more output models 1316 may be generated as a result of the training pipelines 1404. In at least one embodiment, the training pipelines 1404 may include any number of processing steps, such as, but not limited to, the conversion or adaptation of imaging data (or other input data).
In at least one embodiment, different training pipelines 1404 may be used for the different machine learning models used by the deployment system 1306. In at least one embodiment, a training pipeline 1404 similar to the first example described with respect to FIG. 13 may be used for a first machine learning model, a training pipeline 1404 similar to the second example described with respect to FIG. 13 may be used for a second machine learning model, and a training pipeline 1404 similar to the third example described with respect to FIG. 13 may be used for a third machine learning model. In at least one embodiment, any combination of tasks within the training system 1304 may be used, depending on what is required for each respective machine learning model. In at least one embodiment, one or more of the machine learning models may already be trained and ready for deployment, so the machine learning models may not undergo any processing by the training system 1304, and may be implemented by the deployment system 1306.

In at least one embodiment, depending on the implementation or embodiment, the one or more output models 1316 and/or the one or more pre-trained models 1406 may include any type of machine learning model. In at least one embodiment, and without limitation, the machine learning models used by the system 1400 may include one or more machine learning models using linear regression, logistic regression, decision trees, support vector machines ("SVMs"), naive Bayes, k-nearest neighbor ("Knn"), K-means clustering, random forest, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoder, convolutional, recurrent, perceptron, long/short-term memory ("LSTM"), Hopfield, Boltzmann, deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), and/or other types of machine learning models.

In at least one embodiment, the training pipelines 1404 may include AI-assisted annotation, as described in more detail herein with respect to at least FIG. 15B. In at least one embodiment, the labeled data 1312 (e.g., traditional annotation) may be generated by any number of techniques. In at least one embodiment, labels or other annotations may be generated within a drawing program (e.g., an annotation program), a computer-aided design ("CAD") program, a labeling program, another type of program suitable for generating annotations or labels for ground truth, and/or may be hand drawn, in some examples. In at least one embodiment, the ground truth data may be synthetically produced (e.g., generated from computer models or renderings), real produced (e.g., designed and produced from real-world data), machine automated (e.g., using feature analysis and learning to extract features from the data and then generate labels), human annotated (e.g., a labeler, or annotation expert, defines the locations of the labels), and/or a combination thereof. In at least one embodiment, for each instance of the imaging data 1308 (or other data type used by the machine learning models), there may be corresponding ground truth data generated by the training system 1304. In at least one embodiment, AI-assisted annotation may be performed as part of the deployment pipelines 1410, either in addition to, or in lieu of, the AI-assisted annotation included in the training pipelines 1404.
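As a non-limiting illustration of combining AI-assisted annotations with human-reviewed labels into ground truth data, as described above, the following sketch applies a simple preference rule (human labels always win; machine annotations are accepted only above a confidence threshold). The actual data model of the annotations 1310 and labeled clinical data 1312 is not specified in this document, so the dataclass fields and the rule itself are hypothetical.

    # Hypothetical sketch: merging AI-assisted and human annotations.
    from dataclasses import dataclass

    @dataclass
    class Annotation:
        image_id: str
        label: str
        source: str        # "ai_assisted" or "human"
        confidence: float  # 1.0 for human-reviewed labels

    def build_ground_truth(annotations, min_ai_confidence: float = 0.9) -> dict:
        """Prefer human labels; fall back to high-confidence AI annotations."""
        ground_truth = {}
        for ann in annotations:
            if ann.source == "human":
                ground_truth[ann.image_id] = ann.label          # always trusted
            elif ann.image_id not in ground_truth and ann.confidence >= min_ai_confidence:
                ground_truth[ann.image_id] = ann.label          # accepted as-is
        return ground_truth

    annotations = [
        Annotation("img-001", "lesion", "ai_assisted", 0.95),
        Annotation("img-002", "normal", "ai_assisted", 0.62),   # too uncertain
        Annotation("img-002", "lesion", "human", 1.0),          # reviewer override
    ]
    print(build_ground_truth(annotations))
    # {'img-001': 'lesion', 'img-002': 'lesion'}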
In at least one embodiment, the system 1400 may include a multi-layer platform that may include a software layer (e.g., the software 1318) of diagnostic applications (or other application types) that may perform one or more medical imaging and diagnostic functions. In at least one embodiment, the system 1400 may be communicatively coupled (e.g., via encrypted links) to PACS server networks of one or more facilities.

In at least one embodiment, the system 1400 may be configured to access and reference data from PACS servers to perform operations, such as training machine learning models, deploying machine learning models, image processing, inferencing, and/or other operations.

In at least one embodiment, the software layer may be implemented as a secure, encrypted, and/or authenticated API through which applications or containers may be invoked (e.g., called) from one or more external environments (e.g., the facility 1302). In at least one embodiment, applications may then call or execute one or more services 1320 to perform compute, AI, or visualization tasks associated with the respective applications, and the software 1318 and/or the services 1320 may leverage the hardware 1322 to perform the processing tasks in an effective and efficient manner.

In at least one embodiment, the deployment system 1306 may execute deployment pipelines 1410. In at least one embodiment, the deployment pipelines 1410 may include any number of applications that may be sequentially, non-sequentially, or otherwise applied to imaging data (and/or other data types) generated by imaging devices, sequencing devices, genomics devices, etc., including the AI-assisted annotation described above. In at least one embodiment, as described herein, a deployment pipeline 1410 for an individual device may be referred to as a virtual instrument for the device (e.g., a virtual ultrasound instrument, a virtual CT scan instrument, a virtual sequencing instrument, etc.). In at least one embodiment, for a single device, there may be more than one deployment pipeline 1410, depending on the information desired from the data generated by the device. In at least one embodiment, where detection of anomalies is desired from an MRI machine, there may be a first deployment pipeline 1410, and where image enhancement is desired from the output of the MRI machine, there may be a second deployment pipeline 1410.

In at least one embodiment, an image generation application may include a processing task that includes the use of a machine learning model. In at least one embodiment, a user may desire to use their own machine learning model, or to select a machine learning model from the model registry 1324. In at least one embodiment, a user may implement their own machine learning model, or select a machine learning model, for inclusion in an application that performs a processing task. In at least one embodiment, applications may be selectable and customizable, and by defining the constructs of the applications, the deployment and implementation of the applications for a particular user are presented as a more seamless user experience.
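As a non-limiting illustration of the deployment pipeline idea described above (an ordered set of applications applied to imaging data, together forming a virtual instrument), the following sketch chains a few processing steps over an image array. Real deployment pipelines 1410 are containerized and orchestrated; here each "application" is simply a Python callable, and the step names are hypothetical stand-ins.

    # Illustrative sketch: a deployment pipeline as an ordered chain of steps.
    import numpy as np

    def normalize(image):
        """Pre-processing: scale intensities to [0, 1]."""
        lo, hi = image.min(), image.max()
        return (image - lo) / (hi - lo + 1e-8)

    def denoise(image):
        """Image enhancement stand-in: simple 3x3 box blur."""
        padded = np.pad(image, 1, mode="edge")
        return sum(padded[i:i + image.shape[0], j:j + image.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    def segment(image):
        """Inference stand-in: threshold instead of a real segmentation model."""
        return (image > 0.5).astype(np.uint8)

    def run_pipeline(image, steps):
        for step in steps:          # applications applied sequentially
            image = step(image)
        return image

    virtual_ct_instrument = [normalize, denoise, segment]
    mask = run_pipeline(np.random.default_rng(2).normal(size=(64, 64)),
                        virtual_ct_instrument)
    print(mask.sum(), "segmented pixels")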
In at least one embodiment, by leveraging other features of the system 1400, such as the services 1320 and the hardware 1322, the deployment pipelines 1410 may be even more user friendly, provide for easier integration, and produce more accurate, efficient, and timely results.

In at least one embodiment, the deployment system 1306 may include a user interface 1414 (e.g., a graphical user interface, a web interface, etc.) that may be used to select applications for inclusion in the one or more deployment pipelines 1410, to modify or change the applications or their parameters or constructs, to use and interact with the deployment pipelines 1410 during set-up and/or deployment, and/or to otherwise interact with the deployment system 1306. In at least one embodiment, although not illustrated with respect to the training system 1304, the user interface 1414 (or a different user interface) may be used to select models for use in the deployment system 1306, to select models for training or retraining in the training system 1304, and/or to otherwise interact with the training system 1304.

In at least one embodiment, the pipeline manager 1412 may be used, in addition to an application orchestration system 1428, to manage the interactions between the applications or containers of the one or more deployment pipelines 1410 and the services 1320 and/or the hardware 1322. In at least one embodiment, the pipeline manager 1412 may be configured to facilitate interactions from application to application, from application to service 1320, and/or from application or service to hardware 1322. In at least one embodiment, although illustrated as included in the software 1318, this is not intended to be limiting, and in some examples the pipeline manager 1412 may be included in the services 1320. In at least one embodiment, the application orchestration system 1428 (e.g., Kubernetes, DOCKER, etc.) may include a container orchestration system that may group applications into containers as logical units for coordination, management, scaling, and deployment. In at least one embodiment, by associating applications from the one or more deployment pipelines 1410 (e.g., reconstruction applications, segmentation applications, etc.) with individual containers, each application may execute in a self-contained environment (e.g., at a kernel level) to increase speed and efficiency.

In at least one embodiment, each application and/or container (or an image thereof) may be individually developed, modified, and deployed (e.g., a first user or developer may develop, modify, and deploy a first application, and a second user or developer may develop, modify, and deploy a second application separately from the first user or developer), which may allow focus on, and attention to, the task of a single application and/or one or more containers without being hindered by the tasks of other applications or containers. In at least one embodiment, the pipeline manager 1412 and the application orchestration system 1428 may aid in communication and cooperation between different containers or applications.
In at least one embodiment, so long as the expected input and/or output of each container or application is known by the system (e.g., based on the constructs of the applications or containers), the application orchestration system 1428 and/or the pipeline manager 1412 may facilitate communication among and between each of the applications or containers, as well as the sharing of resources among and between them. In at least one embodiment, because one or more of the applications or containers in the deployment pipelines 1410 may share the same services and resources, the application orchestration system 1428 may orchestrate, load balance, and determine the sharing of services or resources between and among the various applications or containers. In at least one embodiment, a scheduler may be used to track the resource requirements of the applications or containers, the current or planned usage of those resources, and the resource availability. In at least one embodiment, the scheduler may thus allocate resources to different applications, and distribute resources between and among applications, in view of the requirements and availability of the system. In some examples, the scheduler (and/or another component of the application orchestration system 1428) may determine resource availability and distribution based on constraints imposed on the system (e.g., user constraints), such as quality of service ("QoS"), the urgency of the need for data outputs (e.g., to determine whether to execute real-time processing or delayed processing), and so on.

In at least one embodiment, the services 1320 leveraged by, and shared by, the applications or containers in the deployment system 1306 may include compute services 1416, AI services 1418, visualization services 1420, and/or other service types. In at least one embodiment, applications may call (e.g., execute) one or more of the services 1320 to perform processing operations for the application. In at least one embodiment, the compute services 1416 may be leveraged by applications to perform supercomputing or other high-performance computing ("HPC") tasks. In at least one embodiment, one or more compute services 1416 may be leveraged to perform parallel processing (e.g., using the parallel computing platform 1430) for processing data through one or more applications and/or one or more tasks of a single application substantially simultaneously. In at least one embodiment, the parallel computing platform 1430 (e.g., NVIDIA's CUDA) may enable general-purpose computing on GPUs ("GPGPU") (e.g., GPUs 1422). In at least one embodiment, a software layer of the parallel computing platform 1430 may provide access to the virtual instruction sets and parallel computational elements of the GPUs for the execution of compute kernels. In at least one embodiment, the parallel computing platform 1430 may include memory and, in some embodiments, that memory may be shared between and among multiple containers, and/or between and among different processing tasks within a single container. In at least one embodiment, inter-process communication ("IPC") calls may be generated for multiple containers and/or for multiple processes within a container to use the same data from a shared segment of the memory of the parallel computing platform 1430 (e.g., where multiple different stages of one application, or multiple applications, are processing the same information).
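As a non-limiting illustration of the shared-memory and IPC idea described above (multiple processing stages using the same data in the same location rather than copying it), the following sketch uses Python's standard multiprocessing.shared_memory module, with a NumPy view on the shared buffer. This is only an analogy to the parallel computing platform 1430's shared memory segments; the sizes and names are hypothetical.

    # Illustrative sketch: two views of one shared buffer, no data copy.
    from multiprocessing import shared_memory
    import numpy as np

    shm = shared_memory.SharedMemory(create=True, size=8 * 4)   # 8 x int32
    arr = np.ndarray((8,), dtype=np.int32, buffer=shm.buf)
    arr[:] = np.arange(8)                      # first stage writes its output

    # A second stage (in a real system, another process or container) attaches
    # to the same segment by name and reads/writes it in place.
    view = shared_memory.SharedMemory(name=shm.name)
    arr2 = np.ndarray((8,), dtype=np.int32, buffer=view.buf)
    arr2[0] = 100                              # visible to the first view
    assert arr[0] == 100                       # same data, same location

    view.close()
    shm.close()
    shm.unlink()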
In at least one embodiment, instead of copying and moving data to different locations in the memory (e.g., read/write operations), the same data in the same location in the memory can be used for any number of processing tasks (e.g., At the same time, at different times, etc.). In at least one embodiment, since the data is used as a result of processing to generate new data, this information of the new location of the data can be stored and shared among various applications. In at least one embodiment, the location of the data and the location of the updated or modified data may be part of the definition of how the payload is understood in the container.In at least one embodiment, the AI service 1418 can be utilized to execute inference services to execute one or more machine learning models associated with the application (for example, a task that has one or more processing tasks that execute the application) ). In at least one embodiment, the AI service 1418 may utilize the AI system 1424 to execute one or more machine learning models (for example, neural networks such as CNN) for segmentation, reconstruction, object detection, feature detection , Classification, and/or other reasoning tasks. In at least one embodiment, one or more applications of the deployment pipeline 1410 may use one or more of the output models 1316 from the training system 1304 and/or other application models to perform inference on the imaging data. In at least one embodiment, two or more examples of inference using an application orchestration system 1428 (e.g., a scheduler) may be available. In at least one embodiment, the first category may include a high priority/low delay path that can achieve a higher service level agreement, for example, for performing reasoning on urgent requests in an emergency, or for providing radiation during diagnosis. Used by doctors. In at least one embodiment, the second category may include a standard priority path, which may be used for requests that may not be urgent or requests for which analysis may be performed at a later time. In at least one embodiment, the application orchestration system 1428 can allocate resources (e.g., services 1320 and/or hardware 1322) based on priority paths for different inference tasks for AI services 1418.In at least one embodiment, shared storage can be installed into the AI service 1418 within the system 1400. In at least one embodiment, shared storage can be used as a cache (or other storage device type), and can be used to process inference requests from applications. In at least one embodiment, when an inference request is submitted, the API instance set of the deployment system 1306 can receive the request, and one or more instances can be selected (for example, for best cooperation, for load balancing, etc.) to Process the request. In at least one embodiment, in order to process the request, the request can be entered into the database, the machine learning model can be found from the model registry 1324 (if not already in the cache), and the verification step can ensure that the appropriate machine learning model is loaded into A copy of the model in the cache (e.g., shared storage) and/or can be saved to the cache. In at least one embodiment, if the application is not yet running or if there are not enough application instances, a scheduler (e.g., the scheduler of the pipeline manager 1412) can be used to start the application referenced in the request. 
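The cache-then-registry lookup described above for inference requests can be sketched as follows; the registry contents and the loader callables are hypothetical stand-ins for a model registry such as model registry 1324 and for deserializing trained weights from shared storage.

```python
# Illustrative model cache: return a cached copy when present, otherwise load
# the model from the registry and keep a copy in the cache for later requests.
from typing import Any, Callable, Dict

class ModelCache:
    def __init__(self, registry: Dict[str, Callable[[], Any]]):
        self._registry = registry              # model name -> loader function
        self._cache: Dict[str, Any] = {}       # stand-in for a shared-storage cache

    def get(self, name: str) -> Any:
        if name not in self._cache:            # not already in the cache
            if name not in self._registry:
                raise KeyError(f"model '{name}' not found in registry")
            self._cache[name] = self._registry[name]()   # load and cache a copy
        return self._cache[name]

# Usage: real loaders would deserialize trained weights; here they are stubs.
registry = {"segmentation": lambda: "loaded-segmentation-model"}
cache = ModelCache(registry)
model = cache.get("segmentation")   # first call loads; later calls hit the cache
```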
In at least one embodiment, if the inference server has not been started to execute the model, the inference server can be started. Each model can start any number of inference servers. In at least one embodiment, in a pull model where inference servers are clustered, the model can be cached as long as load balancing is advantageous. In at least one embodiment, the inference server may be statically loaded in the corresponding distributed server.In at least one embodiment, an inference server running in a container can be used to perform inference. In at least one embodiment, an instance of an inference server can be associated with a model (and optionally multiple versions of the model). In at least one embodiment, when a request to perform an inference on a model is received, if an instance of the inference server does not exist, a new instance may be loaded. In at least one embodiment, when the inference server is started, the model can be passed to the inference server, so that the same container can be used to serve different models, as long as the inference server runs as different instances.In at least one embodiment, during application execution, an inference request for a given application can be received, and a container (for example, hosting an instance of an inference server) can be loaded (if not already loaded), and the launcher can be called. In at least one embodiment, the preprocessing logic in the container can load, decode, and/or perform any additional preprocessing (for example, using a CPU and/or GPU) on the incoming data. In at least one embodiment, once the data is ready for inference, the container can perform inference on the data as needed. In at least one embodiment, this may include a single inference call for one image (e.g., hand X-ray), or may require inference on hundreds of images (e.g., chest CT). In at least one embodiment, the application may summarize the results before completion, which may include, but is not limited to, a single confidence score, pixel-level segmentation, voxel-level segmentation, generating visualizations, or generating text to summarize findings. In at least one embodiment, different models or applications can be assigned different priorities. For example, some models may have a real-time (TAT <1 minute) priority, while other models may have a lower priority (for example, TAT <10 minutes). In at least one embodiment, the model execution time can be measured from the requesting agency or entity, and the model execution time can include the partner network traversal time and the execution on the reasoning service.In at least one embodiment, the request transmission between the service 1320 and the inference application can be hidden behind a software development kit (SDK), and a robust transmission can be provided through a queue. In at least one embodiment, the request will be placed in a queue via the API for a single application/tenant ID combination, and the SDK will pull the request from the queue and provide the request to the application. In at least one embodiment, the name of the queue can be provided in the environment from which the SDK picks up the queue. In at least one embodiment, asynchronous communication through a queue may be useful because it may allow any instance of the application to pick up work when it is available. The results can be passed back through the queue to ensure that no data is lost. 
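A minimal sketch of the container-side flow just described follows: a request is pulled from a queue, each image is preprocessed and run through a model, the results are summarized (here as a single confidence score), and the result is passed back through a queue. The queues, payload fields, and placeholder model are illustrative and are not the actual SDK interface.

```python
# Sketch of the per-request flow: pull a request, preprocess and infer on each
# image, summarize the results, and pass the result back through a queue.
import queue
import numpy as np

requests_q: queue.Queue = queue.Queue()
results_q: queue.Queue = queue.Queue()

def preprocess(image: np.ndarray) -> np.ndarray:
    return (image - image.mean()) / (image.std() + 1e-8)   # e.g., normalize intensities

def toy_model(image: np.ndarray) -> float:
    return float(1.0 / (1.0 + np.exp(-image.mean())))       # stand-in confidence score

def serve_once() -> None:
    request = requests_q.get()                               # SDK-style pull from the queue
    scores = [toy_model(preprocess(img)) for img in request["images"]]
    results_q.put({                                          # result passed back via a queue
        "request_id": request["request_id"],
        "confidence": float(np.mean(scores)),                # summarized as a single score
    })

requests_q.put({"request_id": "r-001",
                "images": [np.random.rand(64, 64) for _ in range(3)]})
serve_once()
print(results_q.get())
```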
In at least one embodiment, the queue can also provide the ability to segment work, because the highest priority work can enter the queue connected to most instances of the application, and the lowest priority work can enter the connection to a single instance The queue that processes tasks in the order they are received. In at least one embodiment, the application program can run on a GPU accelerated instance generated in the cloud 1426, and the inference service can perform inference on the GPU.In at least one embodiment, a visualization service 1420 may be utilized to generate visualizations for viewing the output of applications and/or one or more deployment pipelines 1410. In at least one embodiment, the visualization service 1420 can utilize the GPU 1422 to generate visualizations. In at least one embodiment, the visualization service 1420 can implement rendering effects such as ray tracing to generate higher-quality visualizations. In at least one embodiment, visualization may include, but is not limited to, 2D image rendering, 3D volume rendering, 3D volume reconstruction, 2D tomographic slices, virtual reality display, augmented reality display, and the like. In at least one embodiment, a virtualized environment can be used to generate a virtual interactive display or environment (eg, virtual environment) for users of the system (eg, doctors, nurses, radiologists, etc.) to interact. In at least one embodiment, the visualization service 1420 may include an internal visualizer, movie technology, and/or other rendering or image processing capabilities or functions (eg, ray tracing, rasterization, internal optics, etc.).In at least one embodiment, the hardware 1322 may include a GPU 1422, an AI system 1424, a cloud 1426, and/or any other hardware used to execute the training system 1304 and/or the deployment system 1306. In at least one embodiment, the GPU 1422 (eg, NVIDIA’s TESLA and/or QUADRO GPU) may include any number of GPUs that can be used to perform computing services 1416, AI services 1418, visualization services 1420, other services, and/or Any feature or function processing task of the software 1318 processing task. For example, regarding AI services 1418, GPU 1422 can be used to perform preprocessing on imaging data (or other data types used by machine learning models), postprocessing on the output of machine learning models, and/or to perform inference (e.g., to perform machine learning). Learning model). In at least one embodiment, the cloud 1426, the AI system 1424, and/or other components of the system 1400 may use the GPU 1422. In at least one embodiment, the cloud 1426 may include a GPU optimized platform for deep learning tasks. In at least one embodiment, the AI system 1424 may use a GPU, and one or more AI systems 1424 may be used to execute the cloud 1426 or be responsible for at least part of deep learning or inference. As such, although the hardware 1322 is shown as discrete components, this is not intended to be limiting, and any component of the hardware 1322 can be combined with or utilized by any other component of the hardware 1322.In at least one embodiment, the AI system 1424 may include a dedicated computing system (e.g., a supercomputer or HPC) configured for inference, deep learning, machine learning, and/or other artificial intelligence tasks. 
In at least one embodiment, in addition to the CPU, RAM, memory, and/or other components, features or functions, the AI system 1424 (e.g., NVIDIA’s DGX) may include GPU-optimized software that can be executed using multiple GPUs 1422 ( For example, software stack). In at least one embodiment, one or more AI systems 1424 may be implemented in the cloud 1426 (for example, in a data center) to perform some or all of the AI-based processing tasks of the system 1400.In at least one embodiment, the cloud 1426 may include a GPU-accelerated infrastructure (eg, NVIDIA's NGC), which may provide a GPU-optimized platform for performing the processing tasks of the system 1400. In at least one embodiment, the cloud 1426 may include one or more AI systems 1424 for performing one or more AI-based tasks of the system 1400 (eg, as a hardware abstraction and scaling platform). In at least one embodiment, the cloud 1426 can be integrated with an application orchestration system 1428 that utilizes multiple GPUs to achieve seamless scaling and load balancing between and among applications and services 1320. In at least one embodiment, the cloud 1426 may be responsible for executing at least some of the services 1320 of the system 1400, including computing services 1416, AI services 1418, and/or visualization services 1420, as described herein. In at least one embodiment, the cloud 1426 can perform small batch and large batch inference (for example, execute NVIDIA's TENSOR RT), provide an accelerated parallel computing API and platform 1430 (for example, NVIDIA's CUDA), and execute an application orchestration system 1428 (E.g., KUBERNETES), provides graphics rendering APIs and platforms (e.g., for ray tracing, 2D graphics, 3D graphics and/or other rendering technologies to produce higher quality movie technology), and/or can provide other systems for system 1400 Function.Figure 15A shows a data flow diagram of a process 1500 for training, retraining, or updating a machine learning model according to at least one embodiment. In at least one embodiment, the process 1500 may be performed using the system 1400 of FIG. 14 as a non-limiting example. In at least one embodiment, the process 1500 can utilize the services 1320 and/or hardware 1322 of the system 1400, as described herein. In at least one embodiment, the improved model 1512 generated by the process 1500 can be executed by the deployment system 1306 against one or more containerized applications in the deployment pipeline 1410.In at least one embodiment, model training 1314 may include using new training data (e.g., new input data, such as customer data set 1506 and/or new ground truth data associated with the input data) to the initial model 1504 ( For example, the pre-trained model) is retrained or updated. In at least one embodiment, in order to retrain or update the initial model 1504, the output or one or more loss layers of the initial model 1504 can be reset or deleted, and/or updated or new output or one or more Replace with a loss layer. In at least one embodiment, the initial model 1504 may have previously fine-tuned parameters (eg, weights and/or biases) retained from prior training, so training or retraining 1314 may not require the expense and training of the model from scratch. The same length of time may not require as much processing as training the model from scratch. 
In at least one embodiment, during model training 1314, by resetting or replacing the output of the initial model 1504 or one or more loss layers, on the new customer data set 1506 (for example, the image data 1308 of FIG. 13) When generating predictions, the parameters can be updated and readjusted for the new data set based on loss calculations associated with the accuracy of the output or one or more loss layers.In at least one embodiment, the pre-trained model 1406 may be stored in a data storage or registry (for example, the model registry 1324 of FIG. 13). In at least one embodiment, the pre-training model 1406 may have been at least partially trained at one or more facilities other than the facility execution process 1500. In at least one embodiment, in order to protect the privacy and rights of patients, subjects, or customers of different facilities, the pre-training model 1406 may have been trained on the spot using customer or patient data generated on the spot. In at least one embodiment, the cloud 1426 and/or other hardware 1322 can be used to train the pre-trained model 1406, but the confidential and privacy-protected patient data may not be transmitted to any component of the cloud 1426, and not by any component of the cloud 1426. The component uses or does not access any component (or other non-local hardware) of the cloud 1426. In at least one embodiment, in the case of using patient data from more than one facility to train the pre-training model 1406, before training on patient or customer data from another facility, the pre-training may be separately performed for each facility. Model 1406 was trained. In at least one embodiment, for example, where customer or patient data has been posted for privacy issues (e.g., through abandonment, for experimental use, etc.), or customer or patient data is included in a public data set, data from any number of facilities Customer or patient data can be used to train the pre-trained model 1406 internally and/or externally, for example in a data center or other cloud computing infrastructure.In at least one embodiment, when selecting an application to use in the deployment pipeline 1410, the user can also select a machine learning model to be used for a specific application. In at least one embodiment, the user may not have a model to use, so the user can select a pre-trained model 1406 to be used with the application. In at least one embodiment, the pre-trained model 1406 may not be optimized to generate accurate results on the customer data set 1506 of the user facility (eg, based on patient diversity, demographics, type of medical imaging equipment used, etc.). In at least one embodiment, before deploying the pre-training model 1406 into the deployment pipeline 1410 for use with one or more applications, the pre-training model 1406 may be updated, retrained, and/or fine-tuned for use. To the corresponding facilities.In at least one embodiment, the user can select a pre-trained model 1406 to be updated, retrained, and/or fine-tuned, and the pre-trained model 1406 can be referred to as the initial model 1504 for the system 1304 in the training process 1500. In at least one embodiment, a customer data set 1506 (eg, imaging data, genome data, sequencing data, or other data types generated by equipment in a facility) can be used to perform model training 1314 on the initial model 1504 (which may include but not Limited to transfer learning) to generate a refined model 1512. 
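The retraining pattern described above, in which previously fine-tuned parameters are retained while the output layer is replaced and updated for the new customer data set, is sketched below using PyTorch; the architecture, layer sizes, and random stand-in data are illustrative and do not represent the actual initial model 1504 or customer data set 1506.

```python
import torch
import torch.nn as nn

# Illustrative "initial model": feature extractor followed by an output layer.
initial_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 10),            # original output layer
)

for p in initial_model[:3].parameters():   # retain earlier fine-tuned weights as-is
    p.requires_grad = False

initial_model[3] = nn.Linear(128, 4)       # replace the output layer for the new label set

optimizer = torch.optim.Adam(
    (p for p in initial_model.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(8, 16, 16), torch.randint(0, 4, (8,))   # stand-in customer data
loss = loss_fn(initial_model(x), y)
loss.backward()
optimizer.step()
```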
In at least one embodiment, the training system 1304 may generate ground truth data corresponding to the customer data set 1506. In at least one embodiment, ground truth data may be generated at least in part by clinicians, scientists, doctors, and practitioners in a facility (eg, labeled clinical data 1312 in FIG. 13).In at least one embodiment, AI assisted annotation 1310 may be used in some examples to generate ground truth data. In at least one embodiment, the AI-assisted annotation 1310 (for example, implemented using the AI-assisted annotation SDK) may utilize a machine learning model (for example, a neural network) to generate suggested or predicted ground truth data for the customer data set. In at least one embodiment, the user 1510 can use the annotation tool within a user interface (graphical user interface (GUI)) on the computing device 1508.In at least one embodiment, the user 1510 can interact with the GUI via the computing device 1508 to edit or fine-tune (automatically) annotations. In at least one embodiment, the polygon editing feature can be used to move the vertices of the polygon to a more precise or fine-tuned position.In at least one embodiment, once the customer data set 1506 has associated ground truth data, ground truth data (e.g., from AI assisted annotations, manual labeling, etc.) can be used during model training 1314 to generate refined models 1512. In at least one embodiment, the customer data set 1506 can be applied to the initial model 1504 any number of times, and ground truth data can be used to update the parameters of the initial model 1504 until an acceptable level of accuracy is reached for the refined model 1512. In at least one embodiment, once the refined model 1512 is generated, the refined model 1512 can be deployed in one or more deployment pipelines 1410 at a facility for performing one or more processing tasks on medical imaging data.In at least one embodiment, the refined model 1512 can be uploaded to the pre-trained model 1406 in the model registry 1324 for selection by another facility. In at least one embodiment, his process can be completed at any number of facilities, so that the refined model 1512 can be further refined on a new data set any number of times to generate a more general model.Figure 15B is an example illustration of a client-server architecture 1532 for enhancing annotation tools using pre-trained annotation models according to at least one embodiment. In at least one embodiment, the AI-assisted annotation tool 1536 can be instantiated based on the client-server architecture 1532. In at least one embodiment, the annotation tool 1536 in the imaging application can help the radiologist, for example, to identify organs and abnormalities. In at least one embodiment, the imaging application may include a software tool. As a non-limiting example, the software tool helps the user 1510 to identify some poles on a particular organ of interest in the original image 1534 (for example, in a 3DMRI or CT scan). Value points, and receive automatic annotation results of all 2D slices of a specific organ. In at least one embodiment, the results can be stored in the data storage as training data 1538, and can be used as (for example, but not limited to) ground truth data for training. In at least one embodiment, when the computing device 1508 sends extreme points for the AI-assisted annotation 1310, the deep learning model may, for example, receive the data as input and return an inference result of segmented organs or abnormalities. 
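As an illustration of the client side of this annotation flow, the sketch below posts user-selected extreme points to an annotation server and returns its response; the endpoint path, payload fields, and response shape are hypothetical, since the actual API of the annotation assistant server is not specified here.

```python
# Hypothetical client call: send extreme points for an organ of interest and
# receive suggested annotations (e.g., per-slice segmentation masks) back.
import requests

def request_annotation(server_url: str, study_id: str, extreme_points):
    payload = {
        "study_id": study_id,
        "organ": "liver",                      # organ of interest (example value)
        "extreme_points": extreme_points,      # user-selected points on the image
    }
    resp = requests.post(f"{server_url}/annotate", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()                         # e.g., segmentation of all 2D slices

# Example usage (hypothetical server and coordinates):
# mask = request_annotation("http://annotation-server:8080", "case-001",
#                           [[10, 42, 7], [88, 42, 7], [50, 5, 7], [50, 80, 7]])
```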
In at least one embodiment, the pre-instantiated annotation tools (e.g., AI-assisted annotation tool 1536B in FIG. 15B) may be enhanced by making API calls (e.g., API call 1544) to a server, such as an annotation assistant server 1540 that may include a set of pre-trained models 1542 stored in an annotation model registry. In at least one embodiment, the annotation model registry may store pre-trained models 1542 (e.g., machine learning models, such as deep learning models) that are pre-trained to perform AI-assisted annotation on specific organs or abnormalities. These models may be further updated using the training pipeline 1404. In at least one embodiment, the pre-installed annotation tools may improve over time as new labeled clinical data 1312 is added. Such components can be used to generate various scene graphs from one or more rule sets, which can be used to generate training data or image content representing one or more scenes of a virtual environment. Other variations are within the spirit of this disclosure. Thus, although the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the disclosure to the specific form or forms disclosed; on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the disclosure as defined in the appended claims. Unless otherwise stated herein or clearly contradicted by context, and except where a term is expressly defined, the terms "a," "an," and "the" and similar references shall be interpreted to cover both the singular and the plural. The terms "comprising," "having," "including," and "containing" shall be interpreted as open-ended terms (meaning "including, but not limited to"). The term "connected," when unmodified, refers to a physical connection and should be understood as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Unless otherwise indicated herein, recitation of a numerical range herein is intended merely as a shorthand method of referring individually to each separate value falling within the range, and each separate value is incorporated into the specification as if it were individually recited herein. Unless otherwise indicated or contradicted by context, use of the term "set" (e.g., "a set of items") or "subset" should be interpreted as a non-empty collection including one or more members. Further, unless otherwise indicated or contradicted by context, the term "subset" of a corresponding set does not necessarily denote a proper subset of the corresponding set; the subset and the corresponding set may be equal. Unless expressly stated otherwise or clearly contradicted by context, conjunctive language such as "at least one of A, B, and C" or "at least one of A, B and C" is understood, as commonly used in context, to denote that an item, term, etc. may be A or B or C, or any non-empty subset of the set of A and B and C. For example, with a set having three members, the conjunctive phrases "at least one of A, B, and C" and "at least one of A, B and C" refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. 
Therefore, this joint language is generally not intended to imply that certain embodiments require the presence of at least one of A, at least one of B, and at least one of C. In addition, unless otherwise stated or contradictory to the environment, the term "plurality" means a plural state (for example, "plurality of items" means a plurality of items). The plural is at least two items, but may be more than one when explicitly or dictated by the environment. In addition, unless otherwise stated or clear from the environment, the phrase "based on" means "based at least in part on" rather than "based only on."Unless otherwise indicated herein or clearly contradictory to the environment, the operations of the processes described herein can be performed in any suitable sequence. In at least one embodiment, processes such as those described herein (or variants and/or combinations thereof) are executed under the control of one or more computer systems configured with executable instructions, and are implemented as code (For example, executable instructions, one or more computer programs, or one or more application programs) are commonly executed on one or more processors by hardware or a combination thereof. In at least one embodiment, the code is stored on a computer-readable storage medium, for example, in the form of a computer program, which includes a plurality of instructions executable by one or more processors. In at least one embodiment, the computer-readable storage medium is a non-transitory computer-readable storage medium, which does not include transient signals (for example, propagated transient electrical or electromagnetic transmission), but includes transient signals in a transceiver. Non-transitory data storage circuits (for example, buffers, caches, and queues). In at least one embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media having executable instructions stored thereon (or used to store On the other memory for executing instructions, when executed by one or more processors of the computer system (that is, due to being executed), the computer system executes the operations described herein. In at least one embodiment, a set of non-transitory computer-readable storage media includes a plurality of non-transitory computer-readable storage media and one or more single non-transitory computer-readable storage media lacking all codes. Non-transitory storage media, and multiple non-transitory computer-readable storage media collectively store all codes. In at least one embodiment, executable instructions are executed so that different instructions are executed by different processors, for example, a non-transitory computer-readable storage medium stores the instructions, and the main central processing unit ("CPU") executes some instructions, The graphics processing unit ("GPU") executes other instructions. In at least one embodiment, different components of the computer system have separate processors, and different processors execute different subsets of instructions.Therefore, in at least one embodiment, the computer system is configured to implement one or more services that individually or collectively perform the operations of the processes described herein, and such computer system is configured with suitable hardware capable of implementing the operations And/or software. 
In addition, the computer system implementing at least one embodiment of the present disclosure is a single device, and in another embodiment, is a distributed computer system that includes multiple devices operating in different ways, so that the distributed computer system executes The operations described in this article do not allow a single device to perform all operations.The use of any and all examples or exemplary language (eg, "such as") provided herein is only intended to better clarify the embodiments of the present disclosure, and does not limit the scope of the disclosure unless otherwise stated. No language in the specification should be construed as indicating any unclaimed elements that are indispensable for implementing the disclosure.All references cited herein, including publications, patent applications, and patents, are incorporated herein by reference, as if each reference was individually and specifically indicated to be incorporated herein by reference.In the description and claims, the terms "coupled" and "connected" and their derivatives may be used. It should be understood that these terms may not be intended as synonyms for each other. On the contrary, in certain examples, "connected" or "coupled" may be used to indicate that two or more elements are in direct or indirect physical or electrical contact with each other. "Coupled" may also mean that two or more elements are not in direct contact with each other, but still cooperate or interact with each other.Unless specifically stated otherwise, it can be understood that throughout the specification, such as "processing", "calculation", "operation", "determining", etc., refer to actions and/or processes of a computer or computing system. Or similar electronic computing equipment, which processes and/or converts data represented as physical quantities (such as electronics) in the registers and/or memory of the computing system into memory, registers, or other such information storage, transmission, or other similar expressions in the computing system. Display other data of the physical quantity in the device.In a similar manner, the term "processor" can refer to any device or part of a device that processes electronic data from registers and/or memory and converts the electronic data into other electronic data that can be stored in registers and/or memory. As a non-limiting example, the "processor" may be a CPU or a GPU. A "computing platform" may include one or more processors. As used herein, "software" processes may include, for example, software and/or hardware entities that perform work over time, such as tasks, threads, and intelligent agents. Also, each process can refer to multiple processes to execute instructions sequentially or in parallel continuously or intermittently. Because a system can embody one or more methods and a method can be considered a system, the terms "system" and "method" are used interchangeably herein.In this document, reference can be made to obtain, obtain, receive, or input analog or digital data into a subsystem, computer system, or computer-implemented machine. Obtaining, acquiring, receiving, or inputting analog and digital data can be accomplished in a variety of ways, such as by receiving data as a parameter of a function call or a call to an application program interface. In some embodiments, the process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transmitting data via a serial or parallel interface. 
In another embodiment, the process of obtaining, obtaining, receiving, or inputting analog or digital data can be completed by transmitting data from the providing entity to the obtaining entity via a computer network. See also providing, outputting, transmitting, sending, or presenting analog or digital data. In various examples, the process of providing, outputting, transmitting, sending, or presenting analog or digital data can be implemented by transmitting the data as input or output parameters of function calls, application programming interfaces, or inter-process communication mechanisms.Although the above discussion sets forth example implementations of the described technology, other architectures may be used to implement the described functions and are intended to be within the scope of this disclosure. In addition, although specific responsibilities are defined above for discussion purposes, various functions and responsibilities can be assigned and divided in different ways depending on the situation.In addition, although the subject matter has been described in terms of structural features and/or method actions, it should be understood that the subject matter claimed in the appended claims is not necessarily limited to the specific features or actions described. Rather, specific features and actions are disclosed as exemplary forms of implementing the claims. |
A first gate structure and a second gate structure are formed overlying a semiconductor substrate. A first protective layer is formed overlying the first gate structure and an associated source/drain region. A first epitaxial layer is formed overlying a second source/drain region prior to removal of the first protective layer. |
WHAT IS CLAIMED IS: 1. A method comprising: forming a first gate structure (15) and a second gate structure (18) overlying a semiconductor substrate (10); forming a first protective layer (17) overlying the first gate structure (15) and a first source/drain region associated with the first gate structure; and forming a first epitaxial layer (20) overlying a second source/drain region associated with the second gate structure prior to removal of the first protective layer (17), wherein the first protective layer (17) prevents formation of the first epitaxial layer (20) at a first location. 2. The method of claim 1, wherein forming the first epitaxial layer (20) overlying the second source/drain region further comprises incorporating a first dopant into the epitaxial layer during growth of the first epitaxial layer. 3. The method of claim 1, wherein the first protective layer (17) comprises a material selectively etchable with respect to a spacer material of the first gate structure (15). 4. The method of claim 1, wherein the first gate structure (15) is for an N-type transistor, and the second gate structure (18) is for a P-type transistor. 5. The method of claim 1, wherein the first gate structure (15) is for a P-type transistor, and the second gate structure (18) is for an N-type transistor. 6. The method of claim 1, further comprising: removing the first protective layer (17) overlying the first gate structure (15) and the first source/drain region; forming a second protective layer (19) overlying the second gate structure (18) and a second source/drain region associated with the second gate structure (18); and forming a second epitaxial layer (21) overlying a first source/drain region associated with the first gate structure (15), wherein the second protective layer (19) prevents formation of the second epitaxial layer (21) overlying the second gate structure (18). 7. The method of claim 6, wherein forming the second epitaxial layer (21) overlying the first source/drain region further comprises incorporating a second dopant into the second epitaxial layer (21) during growth of the second epitaxial layer. 8. The method of claim 6, wherein the second protective layer (19) comprises a material selectively etchable with respect to a spacer material (13) of the second gate structure (18). 9. A device comprising: a first gate structure (15) comprising a first source/drain of a first conductivity type and a first height, the first source/drain comprising a first raised epitaxial layer (21); and a second gate structure (18) comprising a second source/drain of a second conductivity type and a second height, the second source/drain comprising a second raised epitaxial layer (20), wherein the second height is substantially different than the first height. 10. The device of claim 9, wherein the first raised epitaxial layer (21) is of a first conductivity type and the second raised epitaxial layer (20) is of a second conductivity type. |
METHODOLOGY FOR DEPOSITION OF DOPED SEG FOR RAISED SOURCE/DRAIN REGIONSTechnical FieldThe present disclosure relates generally to a semiconductor manufacturing process, and more particularly to a method of epitaxial formation.Background ArtThin-film fully-depleted (FD) Silicon-on-Insulator (SOI) has shown to be an attractive candidate for deep sub-micron CMOS low-power, high-speed applications. For FD SOI CMOS, scaling also includes reducing the thickness of the thin silicon film of the SOI substrate. During device fabrication, however, the silicidation of ultra thin-films (< 50 nm) may consume the entire silicon film in the S/D areas. This results in a high source/drain (S/D) contact resistance, or possibly the formation of a void between the extension and S/D area, which will result in device failure.In order to avoid these detrimental effects, extra silicon should be provided in the S/D areas by using Selective Epitaxial Growth (SEG) of silicon. However, since the epitaxial growth includes a high-temperature pre-bake at 900 degrees C, this process is not very attractive for sub- 100 nm devices unless the device fabrication scheme is altered.Therefore a method which overcomes these problems would be useful.BRIEF DESCRIPTION OF THE DRAWINGSThe present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. It will be appreciated that the various features in the accompanying drawings are not necessarily drawn to scale relative to other features.FIGS. 1 through 8 illustrate, in cross-section, semiconductor device manufacturing process steps according to at least one embodiment of the present disclosure;FIG. 9 illustrates, in cross-section, a portion of a semiconductor device manufactured according to an embodiment of the present disclosure; andFIG. 10 is a flow diagram illustrating a method for determining a desired thickness of a source-drain region for a semiconductor device according to an embodiment of the present disclosure.The use of the same reference symbols in different drawings indicates similar or identical items. DETAILED DESCRIPTION OF THE DRAWINGSMethods of forming NMOS and PMOS transistors are presented, as are devices fabricated according to the methods of the present disclosure. One embodiment of the disclosure results in the growth of differential source/drains for NMOS or PMOS transistors. The disclosure further provides for embodiments whereby improved device performance can be achieved, through reduction of implantation defects and transient defect diffusion problems, by incorporating dopants into raised source/drain regions during epitaxial formation to produce sharper dopant profiles, as well as crystallographic lattice positions for the dopant that are better than positions resulting from conventional implantation schemes.Figures 1 through 8 illustrate, in cross-section, a portion 100 of a semiconductor device during a manufacturing process according to an embodiment of the present disclosure. At the manufacturing stage presented in FIG. 1, gate structures 18 and 15 have been formed overlying a semiconductor substrate 10. The gate structure 18 is for anN-type transistor, and the gate structure 15 is for a P-type transistor, although the gate structures 18 and 15 can be described interchangeably such that the gate structure 18 is for a P-type transistor, and the gate structure 15 is for an N-type transistor.In the example of FIG. 
1, both gate structures 18 and 15 include conductive gate portions 14, liner oxide11, spacers 13, and protective caps 16 overlying the conductive gate portions 14 of gate structures 15 and 18. Protective caps 16 can include antireflective coatings (ARC) or other materials such as silicon nitride or oxide. Other features of portion 100 include an isolation feature 12, and a protective layer 17.Protective layer 17 has been formed overlying gate structure 15 and source/drain regions associated with gate structure 15. In an embodiment, protective layer 17 comprises a material selectively etchable with respect to a spacer material of the non-protected gate structures 18 and 19. The protective layer 17 will typically comprise an oxide or a nitride. The thickness of the protective layer 17 ranges from 100 to 1000 Angstroms. Thickness ultimately chosen should include consideration of the material's optical properties such that the selected thickness does not corrupt gate- patterning capabilities.Semiconductor substrate 10 can be a silicon-on-insulator substrate. Alternatively, substrate 10 can also be a gallium arsenide substrate, a mono-crystalline silicon substrate, a silicon-on-sapphire substrate, or the like. Conductive gate structure portions 14 can be poly-crystalline or amorphous silicon having a length ranging from 200 to 1000 Angstroms or more, and a height ranging from 500 to 2000 Angstroms or more. The portion 100 is ready to undergo epitaxial formation, which can include selective epitaxial growth (SEG) and dopant incorporation, as illustrated in FIG. 2.Figure 2 illustrates portion 100 of FIG. 1 during formation of an epitaxial layer 20 over a source/drain region associated with the gate structure 18. During the epitaxial formation process, protective layer 17 prevents formation of an epitaxial layer over the S/D regions associated with gate structure 15. In an embodiment, the epitaxial layer 20 formed at the unprotected gate structure 18 S/D regions includes a dopant incorporation process 7 that can include an ion implantation process, a diffusion process, or an in situ process capable of incorporating the dopants in situ with growing the epitaxial layer to incorporate a first species of dopant. The dopant incorporation process 7 can utilize an n-type species of dopant, or a p-type species of dopant, depending upon the conductivity type requirements of a design. It should be noted that although it is not specifically illustrated, an epitaxial cap may be formed over the conductive gate structure 14 if protective layer 16 is removed prior to the epitaxial process, or if a protective layer 16 is not employed. The embodiment illustrated in FIGS. 2-7 illustrates no epitaxial cap formed over the unprotected gate structure 14.In situ doping during epitaxial growth results in reducing the number of defects caused by ion implantation that occurs subsequent to epitaxial formation and the transient defect diffusion problems that can be caused by anneals that are used when regions are doped subsequent to epitaxial formation. Dopant incorporation7 during epitaxial growth produces sharper dopant profiles, as well as producing crystallographic lattice positions for the dopant that are generally better than positions resulting from conventional implantation schemes.Doping during epitaxial growth means results in dopants starting out on lattice sites during epitaxial formation, as opposed to doping schemes that require annealing to drive the dopants into the lattice. 
Epitaxial layers doped by performing dopant incorporation 7 during epitaxial growth have lower defect rates than layers doped after epitaxial growth. In situ doping of the epitaxial layer 20 is accomplished by the addition of appropriate precursors into the process gases in the process tool, e.g., an LPCVD tool. Examples of suitable precursors are diborane (B2H6), arsine (AsH3), phosphine (PH3), and others known in the art. The dopant profile created during the dopant incorporation 7 process can be a uniform dopant profile, or a gradient dopant profile, that is, a dopant profile with a gradient from one concentration of dopant to another concentration of dopant. A dopant gradient can be useful to optimize the connection to the transistor channel in some transistor architectures. In alternate embodiments, diffusion techniques can be used to introduce doping into the epitaxial layer 20. FIG. 3 shows portion 100 of FIG. 2 following the removal of protective layer 17 and the formation of a protective layer 19 overlying the gate structure 18 and the source/drain region associated with the gate structure 18. Portion 100 is now ready to undergo raised S/D formation by performing another selective epitaxial growth and dopant incorporation process, as shown in FIG. 4. Removal of the protective layer 17 is accomplished by methods suitable for the materials involved. It is desirable to have the protective layers comprised of a material which is the "opposite" of the material used in the spacer formation, e.g., a nitride protective layer with an oxide spacer, or an oxide protective layer with a nitride spacer. If protective layer 17 is an oxide, removal can be achieved using either a wet chemistry, e.g., hydrofluoric acid (HF), or a reactive ion etch (RIE) using, e.g., CH4 or CH3F. In the case of oxide spacers, a nitride hard mask is applied. The hard mask can be removed wet using phosphoric acid (H3PO4) or dry (RIE), using CF4/HBr or SF6 or the like. FIG. 4 illustrates portion 100 of FIG. 3 during epitaxial layer 21 growth. The formation includes dopant incorporation process 9 to form a doped epitaxial layer 21 overlying the now unprotected S/D region associated with gate structure 15. The protective layer 19 prevents formation of the epitaxial layer 21 overlying the gate structure 18. The dopant species utilized in creating the doped epitaxial layer 21 comprises a species of dopant different than the species utilized in the dopant incorporation (item 7, FIG. 2) during formation of the first epitaxial layer (item 20, FIG. 2). For example, if the earlier dopant incorporation process illustrated in FIG. 2 employed an n-type dopant, the dopant incorporation process 9 illustrated in FIG. 4 would utilize a p-type dopant. The thickness of doped epitaxial layer 21 is illustrated to be different than the thickness of the doped epitaxial layer 20 of FIG. 2. Thus, the method of the present disclosure permits different source/drain thicknesses for respective NMOS or PMOS transistors. The differential thickness between source/drain regions of different conductivity types is achievable whether or not in situ doping of the epitaxial layers is used. In an embodiment, protective layer 19 comprises a material selectively etchable with respect to a spacer material of the gate structures 15 and 18. Typically, the protective layer 19 comprises a nitride or an oxide. FIG. 5 illustrates portion 100 of FIG. 4 following complete removal of protective layer 19, spacers 13, gate oxide 11, and protective layer 16. 
Portion 100 is ready to undergo S/D extension manufacture, as illustrated in FIG. 6. FIG. 6 illustrates portion 100 after conventional masking of the raised source/drains 21 and conductive gate portion 14 of one channel. The resist mask 22 protects that channel during implantation 27, which serves to create lightly doped drains (LDD) 30 and Halo implantations (not shown). The resist mask 22 is then stripped, and the process repeated for the other channel, as shown in FIG. 7. FIG. 7 illustrates portion 100 after conventional masking of the raised source/drains 20 and conductive gate portion 14 of one channel. The resist mask 26 protects that channel during implantation 28, which serves to create lightly doped drains (LDD) 31 and Halo implantations (not shown) in the other channel. After the extension formation processes, the resist mask will be stripped, and a rapid thermal anneal (RTA) performed to activate the dopants. Note that the conductivity types of the lightly doped drain regions and Halo implants are opposite for NMOS and PMOS transistors. FIG. 8 illustrates portion 100 following the formation of a liner oxide 26, spacers 23, and a silicidation process to form a silicide 25 at conductive gate portions 14 and the epitaxial layers 20 and 21. By adjusting the heights of the epitaxial regions 20 and 21 independently, the distance of the silicide 25 from the channels of their respective NMOS and PMOS transistors can be controlled independently, as needed, based on design requirements. For example, a thicker epitaxial layer, such as epitaxial layer 21 relative to SEG 20, will result in the silicide 25 associated with epitaxial layer 21 being further from the channel region under its gate 14 than the silicide 25 associated with epitaxial layer 20 is from its channel. In addition, such as in the case of fully depleted epitaxial formation, consumption of the entire SEG layer can be undertaken to provide good contact to underlying plugs, such as tungsten plugs. Thus, the method disclosed herein provides the benefit of permitting differential heights based on the requirements of the transistor design criteria. Spacers 23 will range in width, and can be differential, depending upon the amount of offset desired between the edge of the silicide layer 25 and the edges of conductive gate structures 14. Typical spacer widths are 50 to 1000 Angstroms. Source/drain regions 32 and 33 are also illustrated in FIG. 8 subsequent to formation of deep portions of the source/drain regions. FIG. 9 illustrates, in cross-section, a portion 700 of a semiconductor device manufactured according to an embodiment of the present disclosure. FIG. 9 is a simplified diagram which does not necessarily show all of the features of portion 700, in order to keep the illustration from being cluttered. The device of FIG. 9 comprises a gate structure 714 with an adjacent epitaxial layer 721 forming a raised source/drain of a first conductivity type and a first height. In addition, the device further comprises a gate structure 714 with an adjacent epitaxial layer 720 of a second conductivity type and a second height, where the second height is different than the first height. Depending upon transistor architecture or different diffusion behavior, the ability to produce differential heights in the combined SEG-silicide layers can be of benefit to the process engineer. 
In an embodiment, the epitaxial layer 721 adjacent the first gate structure 714 is of a first conductivity type, e.g., PMOS, and the second gate structure 714 is of a second conductivity type, e.g., NMOS. Other features illustrated in FIG. 9 include interconnects 777 connected to vias/contacts (not numbered) within an interconnect dielectric region 779. The conductive gate structures 714 may include gate stacks comprising a dielectric layer (not shown) in addition to the doped epitaxial layers 720 and 721. In FIG. 9, deep source/drain regions 732, 733 in the substrate 710, along with silicided epitaxial layers 725, 726, are shown integrated with their respective transistors. FIG. 10 is a flow diagram illustrating a method for determining a desired thickness of a source/drain region for a semiconductor device according to an embodiment of the present disclosure. At step 1010, a determination is made as to a desired thickness of a first source/drain (S/D) region for a first type of transistor. At step 1020, a determination is made as to a desired thickness of a second S/D region for a second type of transistor. These determinations are part of an integration scheme to consider a plurality of thicknesses and doped epitaxial growth processes at intervals integrated into a process line to produce a desired outcome. At step 1030, the desired thickness values are provided to a semiconductor device fabrication facility to implement the desired thickness. At step 1040, these values are utilized to fabricate devices based upon the desired thickness values. An example of such a device was illustrated in FIG. 9. The method and apparatus herein provide for a flexible implementation. Although described using certain specific examples, it will be apparent to those skilled in the art that the examples are illustrative, and that many variations exist. For example, the disclosure is discussed herein primarily with regard to formation of a CMOS device; however, the invention can be employed with other device technologies. Additionally, various types of deposition and etch devices are currently available which could be suitable for use in employing the method as taught herein. Note also that, although an embodiment of the present invention has been shown and described in detail herein, along with certain variants thereof, many other varied embodiments that incorporate the teachings of the invention may be easily constructed by those skilled in the art. For example, the technique is discussed primarily with regard to SOI substrates, though other substrates can be used. Also, the silicide described herein can be formed using a reactive process or a deposition process. In addition, it will be appreciated that any number of substrate preclean steps can occur before the formation of any epitaxial layer. For example, United States Patent Application having serial number 10/791,346, which is hereby incorporated in its entirety by reference, discloses several substrate preclean techniques appropriate for cleaning a substrate prior to forming an epitaxial layer. In one example, contaminates on the surface of a substrate are subjected to a cleaning process comprising applying a plasma to a surface of the active regions to produce a reduction reaction with the contaminates in an upper portion of the surface of the active regions. In an embodiment, the plasma comprises H2. 
While the plasma is being applied to the upper portion of the exposed active regions, the resultant products or vapor byproducts of the reduction reaction are removed by the normal vacuum process within the chamber. Therefore, contaminates contained in the vapor byproducts are vented away, leaving the upper portion of the surface of the active regions suitably clean for the ensuing epitaxial process. In one embodiment, the plasma process parameters comprise a gas flow of 450 sccm H2 and 300 sccm argon, at a chamber temperature of 400 degrees Celsius, with a high-frequency (HF) power setting of 700 W and a low-frequency (LF) power setting of between approximately 50 to 100 W. Chamber pressure is 1 Torr, and the spacing between the surface of the active region and the faceplate of the tool (not shown) should be 300 mils. In other embodiments, plasma process parameters comprise a gas flow ranging from between 100-800 sccm H2 and from between 100 and 600 sccm argon. Chamber temperatures can range between 300 to 450 degrees Celsius, with HF power settings from between 400-900 W and LF power settings varying from between 0-150 W. Chamber pressures can range from between 1 mTorr to 5 Torr, with spacing between the surface of the active region and the faceplate of the tool varying from between 200 to 400 mils. Exposure times for the various embodiments utilizing plasma range from between approximately 10 seconds up to approximately 120 seconds. Various tool types are suitable for this cleaning, for example, CVD (Chemical Vapor Deposition) equipment, HDP (High Density Plasma) tools, etch chambers, or the like. Differences in chamber design, power settings, and species, e.g., H2 with or without helium or nitrogen, will result in different thicknesses of the layer after anneal. Typically the layer after anneal will be between 20 and 50 Angstroms thick. This plasma cleaning process also results in passivation of Si-H bonds in the layer after anneal. No wet cleaning dip with hydrofluoric (HF) acid prior to SEG is necessary. In addition to no longer requiring an HF dip prior to SEG, the reduced temperature of this H2 plasma cleaning treatment results in a reduction of the SEG process thermal budget of more than 100 degrees Celsius. Typically, pre-SEG cleaning processes are conducted at approximately 900 degrees Celsius or greater. In an embodiment of the present disclosure, the cleaning process occurs at less than approximately 800 degrees Celsius. In another embodiment, the cleaning process occurs at approximately 500 degrees Celsius or less. In addition, the cleaning processes of the present disclosure could be conducted at approximately 700 degrees Celsius or less, or even at approximately 600 degrees Celsius or less. In another embodiment, a location including a gate structure and active regions is subjected to a cleaning process utilizing a low-power dry etch to selectively remove an upper atomic layer of material from the active regions. The thickness of the upper atomic layer of material to be removed ranges from between 20 to about 50 Angstroms. In one embodiment, the dry etch process is an anisotropic dry etch utilizing a carbon-free gas as an etchant gas. In another embodiment, the anisotropic dry etch utilizes an oxygen- and carbon-free gas as an etchant gas. 
The etchant gas can comprise HBr, NF3, SF6, gaseous fluorine interhalogenics such as ClF3, or any fluorine-containing gas that is suitable to dissociate F-radicals and does not contain oxygen or carbon. Prior to undergoing the anisotropic dry etch process, location 200 is subjected to a standard wet etch chemistry process utilizing a dilute HF solution (100:1) at room temperature, e.g., 20 to 26 degrees Celsius, for a time period ranging from 50 to 200 seconds. Following the HF clean, a low-power dry etch utilizing a temperature of approximately 400 degrees Celsius, RF power of approximately 375 W, pressure of approximately 150 mTorr, and a gas flow rate ranging from 50 to 100 sccm is conducted. In other embodiments, the low-power dry etch utilizes a temperature ranging from between 300-500 degrees Celsius, with RF power ranging from between 200-700 W, a pressure ranging between 0-1 Torr, and a gas flow rate ranging from between 10-300 sccm, for a time ranging between 10 to 60 seconds. This low-power dry etch removes carbon and oxygen contamination, and provides a very clean surface for SEG. The low-temperature HF clean followed by the low-power dry etch does not require a high-temperature bake. This results in a reduction of the thermal budget for SEG of more than 100 degrees Celsius. In another embodiment, a cleaning process is used that forms an oxidation layer of between 20 to 50 Angstroms on an upper surface of the active regions using a plasma to produce the oxidation layer on doped active regions. In an embodiment, the plasma is an O2 plasma. In another embodiment, the plasma is an O3 plasma. In an embodiment, O2 plasma production utilizes O2 gas at a flow rate of 400 sccm, a pressure of 5 Torr, an HF of 300 W, an LF of 100 W, and a temperature of 400 degrees Celsius, with the time ranging from between about 10 to about 120 seconds. The spacing between the surface of the active regions and the faceplate of the vapor deposition apparatus (not shown) should be 400 mils. In other embodiments, the plasma production utilizes O2 gas at a flow rate of between 100 and 1000 sccm, a pressure ranging from between 2-10 Torr, an HF ranging between 200-500 W, an LF ranging between 50-200 W, and a temperature ranging between 300-450 degrees Celsius, for a time ranging from between approximately 10 to approximately 120 seconds. In an embodiment, the spacing between the surface of the active regions and the faceplate of the vapor deposition apparatus ranges from between 200 and 600 mils. The tool type used to generate the plasma could be CVD equipment, HDP tools, or etch chambers. In an embodiment where the plasma is O3, plasma production utilizes O3 gas at a flow rate of 300 sccm, a pressure of 5 Torr, an HF of 300 W, an LF of 100 W, and a temperature of 400 degrees Celsius for a time period ranging from between 10 to 120 seconds. The spacing between the surface of the active regions and the faceplate of the vapor deposition apparatus (not shown) should be 400 mils. In other embodiments, plasma production utilizes O3 gas at a flow rate of between 50 and 600 sccm, a pressure ranging from between 2-10 Torr, an HF ranging between 200-500 W, an LF ranging between 50-200 W, and a temperature ranging from between 300-450 degrees Celsius for a time period ranging from between about 10 to about 120 seconds. In an embodiment, the spacing between the surface of the active regions and the faceplate of the vapor deposition apparatus ranges from between 200 and 600 mils. 
As was the case with the O2 plasma, the tool type used to generate the plasma could be HDP tools, CVD equipment, or etch chambers. The oxidation layer facilitates trapping or fixing contamination in the oxide layer overlying the upper layer of the doped active regions for subsequent removal using a wet chemistry process. The wet etch chemistry process utilizes a dilute HF acid solution of 100:1 at room temperature, e.g., 20 to 26 degrees Celsius, for a time ranging from 50 to 200 seconds. Differences in chamber design, power settings, and species employed, e.g., O2 or O3, result in differing thicknesses of the oxidation layer, hence the wide range in times for the HF dip. The use of an O2 or O3 plasma to create a contamination-trapping oxidation layer for removal by a room temperature HF dip results in a reduction of the thermal input for location 300. Another possible pre-clean for use prior to SEG formation is disclosed in United States Patent Application serial number 10/969,769 (Attorney Docket Number 1458-H1962), which is hereby incorporated in its entirety by reference. That application discloses a substrate pre-clean technique that facilitates a reduced-temperature H2 bake and is performed following formation of any desired spacers, which can comprise one or more nitride or oxide layers, and prior to SEG formation. This pre-clean comprises a first pre-rinse with deionized water, followed by an oxide etch utilizing an aqueous solution of deionized water and hydrofluoric acid (HF, or hydrogen fluoride in water) at a ratio of approximately 30:1 (volumetric ratio) at 21 degrees Celsius, for a time period ranging from 50 to 60 seconds. The weight percentage of HF recommended for the HF aqueous solution is 49% in a balance of deionized water (H2O). Bulk HF aqueous solution can be purchased from various chemical suppliers in the HF weight percent range of 10% to 49%. In semiconductor fabrication facilities, this HF aqueous solution is typically diluted in the range of 10:1 to 200:1. A 10:1 HF is 1 part aqueous HF (at 49 weight percent) and 10 parts H2O. It will be appreciated that the etch rate of the HF aqueous solution is substantially linear with respect to both the concentration of the HF aqueous solution and the etch time. Therefore, various combinations of HF concentrations and etch times can be used to accomplish the oxide etch. Additionally, the temperature may vary. After the HF etch, an overflow rinse utilizing deionized water is performed for a period ranging from approximately 120 to 600 seconds, with a typical rinse being about 400 seconds. The cleaning of portion 100 results in etching away of the surface contamination/debris located on substrate 10 resulting from offset spacer formation and/or dopant implantation. The upper semiconductor surface, i.e., the silicon surface of substrate 10, is also slightly etched, for example by one to several monolayers of silicon, during the HF etch. It should be noted that the amount of material removed during the HF etch is dependent upon the type of material being removed. For example, when native oxide is present, the HF etch will remove approximately 20 to 30 Angstroms of oxide. If a deposited oxide layer is present in addition to a native oxide, an over-etch of approximately 30% is generally desirable. For example, if removal of 100 Angstroms of a chemical vapor deposition (CVD) oxide is desired, the HF etch could be employed to remove approximately 120 to 130 Angstroms of oxide.
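The dilution arithmetic above lends itself to a worked example. The sketch below is not from the disclosure: it estimates the effective HF weight percent of an X:1 dilution and an etch time for the 30% over-etch case, using the text's observation that the etch rate is roughly linear in concentration and time. The reference etch rate constant is a placeholder assumption, and the density of the dilution is treated as that of water for simplicity.

# Assumed reference: etch rate of a given oxide in 10:1 dilute HF, in Angstroms per second.
# Illustrative only -- not taken from the text; real rates vary with oxide type and temperature.
REF_RATE_10_TO_1_A_PER_S = 3.0

def effective_hf_weight_percent(dilution_parts_water: float, stock_wt_pct: float = 49.0) -> float:
    """Approximate HF weight percent of an X:1 dilution (X parts water : 1 part 49% stock),
    treating the stock and water densities as equal (a back-of-the-envelope simplification)."""
    return stock_wt_pct / (dilution_parts_water + 1)

def etch_time_seconds(target_angstroms: float, dilution_parts_water: float) -> float:
    """Estimate etch time, scaling the assumed 10:1 reference rate linearly with concentration."""
    rate = REF_RATE_10_TO_1_A_PER_S * (effective_hf_weight_percent(dilution_parts_water)
                                       / effective_hf_weight_percent(10))
    return target_angstroms / rate

# 30% over-etch example from the text: clearing ~100 A of CVD oxide targets ~130 A total.
cvd_oxide_a = 100.0
target_a = cvd_oxide_a * 1.3
print(round(effective_hf_weight_percent(30), 2), "wt% HF for a 30:1 dilution")
print(round(etch_time_seconds(target_a, 30), 1), "s (illustrative) to remove", target_a, "A at 30:1")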
This latter example would be applicable in applications where a liner oxide of approximately 100 Angstroms thickness is employed between a conductive gate 25 and a nitride spacer. The next steps in the cleaning process comprise a second pre-rinse with deionized water of approximately 30 seconds duration, which precedes the performance of a Standard Clean-1 (SC-1), a quick dry rinse (QDR), and a Standard Clean-2 (SC-2). The SC-1 and SC-2 components are followed by a second QDR, an HF:H2O etch, a third rinse, and an isopropyl alcohol (IPA) dry. The SC-1 and SC-2 components are implemented such that they etch from approximately one monolayer of silicon to approximately 10 to 100 Angstroms of silicon. In an embodiment, the SC-1 utilizes an aqueous solution of ammonium hydroxide: hydrogen peroxide: deionized water at a ratio of approximately 1:1-4:6-40, at a temperature of approximately 60 degrees Celsius for approximately 72 minutes, to etch approximately 100 Angstroms of silicon. Synonyms for ammonium hydroxide (NH4OH) include ammonia solution (typically containing between 12% and 44% ammonia before dilution), dilute ammonia, or concentrated ammonia. A first quick dry rinse is conducted for approximately 3 minutes. In an embodiment, the SC-2 utilizes a solution of hydrochloric acid: hydrogen peroxide: deionized water at an initial ratio of approximately 1:1:50 at a temperature of approximately 60 degrees Celsius for about 5 minutes. A second quick dry rinse is then conducted. Synonyms for hydrochloric acid (HCl) are hydrogen chloride, anhydrous hydrogen chloride, aqueous hydrogen chloride, chlorohydric acid, spirit of salts, and muriatic acid. In a particular embodiment, the SC-1 utilizes a solution of ammonium hydroxide: hydrogen peroxide: deionized water at a ratio of approximately 1:4:20 at a temperature of approximately 60 degrees Celsius for approximately 72 minutes. The SC-1 is the step in the clean sequence that etches the silicon. This occurs because the H2O2 (the oxidizer) becomes depleted in the solution with increasing time and increasing temperature. The methods of the present disclosure allow the initial concentration of hydrogen peroxide to be depleted to facilitate etching of the upper-most semiconductor portion. Depletion of the H2O2 is greatly enhanced when the solution temperature rises above 80 degrees Celsius, which can lead to an etch that is difficult to control if not carefully monitored. The temperature range of the SC-1 is expected to be approximately 55 to 85 degrees Celsius, with the etch occurring in a shorter period of time at higher temperatures than at lower temperatures. It is expected that the SC-1 etching will be better controlled at temperatures in the range of 55-80 degrees Celsius, and better still at temperatures in the range of 55-75 degrees Celsius. Generally, it is expected that the substrate will be exposed to the SC-1 etch process for longer than 60 minutes. When the oxidizer stops protecting the silicon surface, the ammonium hydroxide (NH4OH) starts to etch the silicon. Thus, a small amount of silicon can be etched in a controlled manner. The SC-1 can be performed in a re-usable bath where the solution is re-circulated and heated to maintain the desired temperature. The mechanism of silicon and SiO2 etching by an NH4OH/H2O2 solution operates when the solution is allowed to be depleted of H2O2.
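Because the clean is a fixed sequence of wet steps, it can be written out as data so the specified process time is easy to total. The sketch below is an illustrative representation only, not part of the disclosure; it uses the durations given in this paragraph, and marks steps whose durations are given only later (or not at all) as unspecified.

# One embodiment of the clean sequence described above, written out as data.
CLEAN_SEQUENCE = [
    {"step": "pre-rinse (DI water)",                     "minutes": 0.5},
    {"step": "SC-1 (NH4OH:H2O2:H2O ~1:4:20, ~60 C)",     "minutes": 72},
    {"step": "quick dry rinse",                          "minutes": 3},
    {"step": "SC-2 (HCl:H2O2:H2O ~1:1:50, ~60 C)",       "minutes": 5},
    {"step": "quick dry rinse",                          "minutes": None},  # duration not stated here
    {"step": "HF:H2O etch",                              "minutes": None},  # duration given later
    {"step": "third rinse (DI water)",                   "minutes": None},  # duration given later
    {"step": "IPA dry",                                  "minutes": None},  # duration not specified
]

specified = sum(s["minutes"] for s in CLEAN_SEQUENCE if s["minutes"] is not None)
print(f"specified time so far: ~{specified:.0f} minutes "
      f"(excluding steps whose durations are unspecified above)")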
An alkaline solution, such as NH4OH in this example, will attack silicon via water molecules according to the reaction: Si + 2H2O + 2OH- → Si(OH)2(O-)2 + 2H2. A passivation layer formed by the H2O2 prevents this attack by the NH4OH. The H2O2 decomposes over time to form O2 and H2O: 2H2O2 → 2H2O + O2. When the concentration of H2O2 is below 3×10^-3 M, silicon will begin to etch because of the absence of the inhibition layer. As indicated in the above equations, heat is given off as the H2O2 is depleted. If a bath is used that is not recharged with fresh solution, all H2O2 will be depleted, thereby no longer releasing heat. Therefore, the temperature can be monitored on the low end to indicate when the solution should be refreshed, while the temperature on the high end is monitored to prevent unusually rapid decomposition of the H2O2, which can lead to a process that is difficult to control. The first quick dry rinse is conducted for approximately 3 minutes. The subsequent SC-2 utilizes a solution of hydrochloric acid: hydrogen peroxide: deionized water at a ratio of approximately 1:1:50 at a temperature of approximately 60 degrees Celsius for about 5 minutes. A quick dry rinse with deionized water, followed by an IPA dry process, is performed following the SC-2. The IPA dry process uses a heated IPA vapor at approximately 82 degrees Celsius. The IPA vapor is generated in a separate chamber with 100% N2 bubbled through 100% IPA (heated to 82 degrees Celsius). The IPA condenses on the wafer, and the solution drips off the bottom of the wafer. The IPA vapor concentration is slowly diluted to 100% N2 before the wafers are removed from the rinsing/drying tank. Subsequent to the SC-1 and SC-2 processes, the substrate will be further recessed (etched) as a result of the cleaning process. Next, an HF:H2O etch can be conducted at an aqueous solution ratio of 200:1 for about 65 seconds, which typically results in approximately 30 Angstroms of oxide removal. The HF:H2O etch is followed by a rinse with deionized water of approximately 10 minutes duration. The deionized water rinse is followed by an IPA dry as described in the preceding paragraph. At this time, the source/drain regions of the substrate are ready for ion implantation or selective epitaxial growth. In a particular embodiment, the SC-1 process comprises a pre-rinse with deionized water of approximately 30 seconds duration. The pre-rinse is followed by an SC-1 solution of ammonium hydroxide: hydrogen peroxide: deionized water at a ratio of approximately 1:1-4:6-40, which includes the subranges of 0.25:1:5, 0.5:1:5, 1:1:5, 1:1:6, 1:4:20, and 1:1:40, at a temperature of approximately 60 degrees Celsius for approximately 5 minutes. A quick dry rinse (QDR) is then performed for approximately 3 minutes. Following the SC-1 cleaning process, an SC-2 cleaning process is performed. In an embodiment, the SC-2 cleaning process includes utilizing an aqueous solution of hydrochloric acid: hydrogen peroxide: deionized water at a ratio of approximately 1:1:50 at a temperature of approximately 60 degrees Celsius for approximately 5 minutes. A QDR is then performed, and portion 200 is ready for the third cleaning. The weight percent composition of the hydrochloric acid: hydrogen peroxide: deionized water is 29% (weight percent) hydrochloric acid and 30% (weight percent) hydrogen peroxide in a balance of deionized water. After the SC-1 and SC-2, a third cleaning process comprising an approximately 30 second pre-rinse, an oxide etch, an overflow rinse, and an IPA dry is performed.
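The bath-monitoring logic implied above (silicon etches once the peroxide passivation is gone; temperature is watched at both ends) can be stated compactly. The following sketch is illustrative only and not part of the disclosure; the low-temperature "refresh" setpoint is an assumed placeholder, since the text names no number for it.

# Peroxide threshold below which silicon etching begins, and the better-controlled SC-1 range.
H2O2_ETCH_THRESHOLD_M = 3e-3
CONTROLLED_TEMP_RANGE_C = (55.0, 80.0)
ASSUMED_REFRESH_TEMP_C = 55.0   # placeholder: treat the low end of the range as the refresh trigger

def bath_status(h2o2_molarity: float, temp_c: float) -> list[str]:
    """Classify the SC-1 bath state from peroxide concentration and temperature."""
    notes = []
    if h2o2_molarity < H2O2_ETCH_THRESHOLD_M:
        notes.append("passivation lost: silicon etch in progress")
    if temp_c < ASSUMED_REFRESH_TEMP_C:
        notes.append("low temperature: H2O2 likely depleted, refresh the bath")
    if temp_c > CONTROLLED_TEMP_RANGE_C[1]:
        notes.append("high temperature: H2O2 decomposition may become hard to control")
    return notes or ["within monitored limits"]

print(bath_status(2e-3, 60.0))   # etching expected; temperature still in the controlled band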
The oxide etch is accomplished utilizing a solution of deionized water and hydrofluoric acid at a ratio of approximately 200:1 for a time period ranging from 450 to 650 seconds. Following the HF etch, an overflow rinse is performed for approximately 10 minutes. A final isopropyl alcohol (IPA) dry is then performed. Approximately 120-140 Angstroms of the surface of the substrate is removed in this process. Portion 200 is then ready to undergo selective epitaxial growth. The above-described cleaning process has been found to facilitate formation of an epitaxial layer on a semiconductor surface, specifically silicon. Because various etch processes can etch N- and P-type regions at different rates, it can be useful to amorphize an upper-most surface of the source/drain regions prior to the above-described clean to reduce any preferential etch differences between substrate regions of differing dopant types. For example, the above-described clean process can etch the N-type silicon preferentially as compared to the P-type silicon, resulting in a quality difference of the SEG between the N and P regions after SEG processing. Etch rate differences between N- and P-type regions can allow contaminants to remain in the lesser-etched region. For example, an etch process that does not etch P-type regions at the same rate as N-type regions can result in P-regions retaining embedded carbon that is incorporated from previous process steps. Without appropriate etching of silicon in the P-type regions during the clean, the carbon will remain, and the SEG will grow inconsistently. A high bake temperature of 900 degrees Celsius can be used to overcome this growth issue on P areas; however, as stated previously, high bake temperatures can be detrimental to the device in that they cause diffusion and deactivation of the dopants. Amorphizing the source/drain regions can reduce etch differences associated with the above-described cleaning process as well as other processes that are used to etch doped substrate regions, thereby improving the quality of both the N and P regions. It has been observed that the selective etching may be P-type over N-type, or N-type over P-type, depending on the solution temperature, flow rate of the aqueous ammonia, concentration of the aqueous ammonia, agitation, or illumination of light. By amorphizing the silicon in this manner to a pre-defined depth, it has been observed that unbiased etching to the depth of the amorphized silicon can be achieved. In one embodiment, N- and P-type extensions formed in the source/drain regions are amorphized by being implanted with Xe, at a dose of 2E14 and an energy of 10 keV, to create an amorphous depth of 100 Angstroms. In accordance with another embodiment, a spacer structure having an undercut can be used to reduce or inhibit facet formation during a selective epitaxial growth process. Such a process can allow for greater lateral uniformity of junction or silicide features during implantation or silicidation processes, and can be accomplished by using a spacer formed with a bi-layer of materials, e.g., a thin liner, such as portion 29 of FIG. 1, of one material underlying another layer of material from which the 'main' spacer is formed. The thin liner and the other material layer are selected such that the two materials are selectively etchable with respect to each other, for example, a thin oxide liner and a nitride layer.
By etching the underlying portion of the spacer, an undercut can be formed that reduces facets during epitaxial formation.Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or element of any or all the claims. Accordingly, the present invention is not intended to be limited to the specific form set forth herein, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents, as can be reasonably included within the spirit and scope of the invention. |
An integrated circuitry construction comprises a substrate comprising conductive nodes of integrated circuitry. A conductive line structure is above the conductive nodes. Elevationally-extending conductive vias are spaced longitudinally along the conductive line structure. The conductive vias individually directly electrically couple the conductive line structure to individual ones of the conductive nodes. The conductive line structure comprises conductive material directly electrically coupled to the conductive vias and extending between immediately-longitudinally-adjacent ones of the conductive vias. An upper insulative material is directly below the conductive material between the immediately-longitudinally-adjacent conductive vias. Doped or undoped semiconductor material is directly below the upper insulative material between the immediately-longitudinally-adjacent conductive vias. A lower insulative material is directly below the semiconductor material between the immediately-longitudinally-adjacent conductive vias. Other aspects, including methods, are disclosed.
1. An integrated circuit system structure, comprising: a substrate comprising conductive nodes of the integrated circuit system; a wire structure above the conductive nodes; and vertically extending conductive paths longitudinally spaced along the wire structure, the conductive paths individually directly electrically coupling the wire structure to individual ones of the conductive nodes, the wire structure comprising: a conductive material directly electrically coupled to the conductive paths and extending between the conductive paths that are immediately adjacent in the longitudinal direction; an upper insulating material directly below the conductive material between the longitudinally immediately adjacent conductive paths; a doped or undoped semiconductor material directly below the upper insulating material between the longitudinally immediately adjacent conductive paths; and a lower insulating material directly below the semiconductor material between the longitudinally immediately adjacent conductive paths.2. The structure of claim 1, wherein the semiconductor material does not directly abut the conductive material anywhere and does not directly abut any of the conductive paths anywhere.3. The structure of claim 1, wherein the semiconductor material does not directly abut any conductive material anywhere.4. The structure of claim 1, wherein the semiconductor material is undoped.5. The structure of claim 4, wherein the semiconductor material is free of conductivity-modifying impurities.6. The structure of claim 1, wherein the semiconductor material is doped.7. The structure of claim 6, wherein the semiconductor material is semiconductively doped.8. The structure of claim 6, wherein the semiconductor material is conductively doped.9. The structure of claim 1, wherein the semiconductor material includes both doped portions and undoped portions.10. The structure of claim 1, wherein the conductive material mainly includes a metal material, and the semiconductor material mainly includes a combination of polysilicon and a conductivity-modifying dopant.11. The structure of claim 10, wherein the conductive paths mainly comprise conductively doped polysilicon.12. The structure of claim 1, wherein the conductive material directly abuts the top surface of the upper insulating material, the upper insulating material directly abuts the top surface of the semiconductor material, and the semiconductor material directly abuts the top surface of the lower insulating material.13. The structure of claim 1, wherein the conductive paths and the upper insulating material have corresponding planar top surfaces that are coplanar.14. The structure of claim 1, wherein the upper insulating material and the lower insulating material have the same composition relative to each other.15. The structure of claim 14, comprising an insulator material having a composition different from that of the upper and lower insulating materials, the insulator material being located longitudinally along the wire structure between (a) and (b), wherein: (a) is the upper insulating material, the semiconductor material, and the lower insulating material; and (b) is the conductive paths.16. The structure of claim 1, wherein the wire structure comprises a digit line of memory circuitry.17. The structure of claim 16, wherein the memory circuitry comprises DRAM.18. A DRAM structure, comprising: a pair of recessed access devices, the recessed access
devices individually comprising: a conductive gate in a trench in semiconductor material; a gate insulator along the sidewalls and base of the trench between the conductive gate and the semiconductor material; a pair of source/drain regions in an upper portion of the semiconductor material on opposite sides of the trench; a channel region located below the pair of source/drain regions in the semiconductor material, along the trench sidewalls and around the trench base; and one of the source/drain regions of the pair of source/drain regions in the individual recessed access devices of the pair being laterally between the conductive gates of the individual pair of recessed access devices and shared by the individual pair of recessed access devices, and the other of the source/drain regions of the pair of source/drain regions not being shared within the individual pair of recessed access devices; a digit line structure directly electrically coupled to the one shared source/drain region of a plurality of the individual recessed access device pairs; a pair of capacitors individually directly electrically coupled to one of the other source/drain regions in the individual pair of recessed access devices; and vertically extending conductive paths longitudinally spaced along the digit line structure, the conductive paths individually directly electrically coupling the digit line structure to individual ones of the shared source/drain regions of the individual recessed access device pairs, the digit line structure comprising: a conductive material directly electrically coupled to the conductive paths and extending between the conductive paths that are immediately adjacent in the longitudinal direction; an upper insulating material directly below the conductive material between the longitudinally immediately adjacent conductive paths; a doped or undoped semiconductor material directly below the upper insulating material between the longitudinally immediately adjacent conductive paths; and a lower insulating material directly below the semiconductor material between the longitudinally immediately adjacent conductive paths.19. A method of forming an integrated circuit system structure, comprising: providing a substrate comprising conductive nodes, a lower insulating material directly above the conductive nodes, a doped or undoped semiconductor material directly above the lower insulating material, and an upper insulating material directly above the semiconductor material; forming contact openings through the upper insulating material, the semiconductor material, and the lower insulating material, the contact openings individually extending to individual ones of the conductive nodes; forming a conductor material in the contact openings directly against the individual conductive nodes; forming a conductive material directly above the upper insulating material and the conductor material, the conductive material directly abutting the conductor material; and patterning the conductive material, the upper insulating material, the semiconductor material, and the lower insulating material to form a wire structure above the conductive nodes, vertically extending conductive paths being longitudinally spaced along the wire structure, the conductive paths comprising the conductor material and individually directly electrically coupling the wire structure to the individual conductive nodes, the wire structure being formed to comprise: the conductive material directly
electrically coupled to the conductive paths and extending between the longitudinally immediately adjacent conductive paths; the upper insulating material directly below the conductive material between the longitudinally immediately adjacent conductive paths; the semiconductor material directly below the upper insulating material between the longitudinally immediately adjacent conductive paths; and the lower insulating material directly below the semiconductor material between the longitudinally immediately adjacent conductive paths.20. The method of claim 19, comprising lining the sidewalls of the contact openings with an insulator material before forming the conductor material in the contact openings.21. The method of claim 19, wherein the semiconductor material does not directly abut the conductive material anywhere and does not directly abut any of the conductive paths anywhere.22. The method of claim 19, comprising, after forming the conductor material in the contact openings, patterning the conductor material to reduce its width in individual ones of the contact openings.23. The method of claim 22, wherein the patterning of the conductor material and the patterning of the conductive material, the upper insulating material, the semiconductor material, and the lower insulating material are performed together in a single masking step.24. The method of claim 19, wherein the integrated circuit system comprises a DRAM, the conductive nodes are source/drain regions of recessed access devices of the DRAM, and the wire structure is included in a digit line structure above the recessed access devices.25. The method of claim 24, wherein: the recessed access devices are formed to comprise pairs of recessed access devices, the recessed access devices individually comprising: a conductive gate in a trench in semiconductor material; a gate insulator along the sidewalls and base of the trench between the conductive gate and the semiconductor material; a pair of source/drain regions in an upper portion of the semiconductor material on opposite sides of the trench; a channel region located below the pair of source/drain regions in the semiconductor material, along the trench sidewalls and around the trench base; and one of the source/drain regions of the pair of source/drain regions in the individual recessed access devices of the pair being laterally between the conductive gates of the individual pair of recessed access devices and shared by the individual pair of recessed access devices, and the other of the source/drain regions of the pair of source/drain regions not being shared within the individual pair of recessed access devices; and the conductive gates are formed in the trenches in the semiconductor material before forming the digit line structure.
Integrated circuit system structure, DRAM structure, and method used to form an integrated circuit system structure
Technical fieldThe embodiments disclosed herein relate to integrated circuit system constructions, dynamic random access memory (DRAM) constructions, and methods for forming integrated circuit system constructions.Background techniqueMemory is a type of integrated circuit and is used in computer systems to store data. Memory can be fabricated as one or more arrays of individual memory cells. Digit lines (also called bit lines, data lines, or sense lines) and access lines (also called word lines) can be used to write or read the memory cells. Digit lines can conductively interconnect memory cells along the columns of the array, and access lines can conductively interconnect memory cells along the rows of the array. Each memory cell can be uniquely addressed by a combination of a digit line and an access line.Memory cells may be volatile, semi-volatile, or non-volatile. Non-volatile memory cells can store data for a long time without power. Non-volatile memory is conventionally specified as memory having a retention time of at least about 10 years. Volatile memory dissipates and is therefore refreshed/rewritten to maintain data storage. Volatile memory can have a retention time of a few milliseconds or less. Regardless, memory cells are configured to retain or store data in at least two different selectable states. In a binary system, the states are regarded as "0" or "1". In other systems, at least some individual memory cells can be configured to store more than two levels or states of information.Capacitors are one type of electronic component that can be used in memory cells. A capacitor has two electrical conductors separated by an electrically insulating material. Energy can be stored electrostatically in this material as an electric field. Depending on the composition of the insulator material, the stored field will be volatile or non-volatile. For example, a capacitor insulator material containing only SiO2 will be volatile. One type of non-volatile capacitor is a ferroelectric capacitor, which has a ferroelectric material as at least part of the insulating material. Ferroelectric materials are characterized by having two stable polarization states, and therefore can comprise programmable material for capacitors and/or memory cells. The polarization state of the ferroelectric material can be changed by applying a suitable programming voltage and is maintained (at least for a period of time) after the programming voltage is removed. Each polarization state has a different charge storage capacitance from the other, and ideally this can be used to write (i.e., store) and read a memory state without reversing the polarization state until it is desired to reverse it. Less desirably, in some memories having ferroelectric capacitors, the act of reading the memory state can reverse the polarization. Accordingly, upon determining the polarization state, the memory cell is rewritten immediately after its determination to place the memory cell in the pre-read state. Regardless, because of the bistable characteristics of the ferroelectric material forming part of the capacitor, a memory cell incorporating a ferroelectric capacitor is ideally non-volatile. Other programmable materials can be used as capacitor insulators to render the capacitor non-volatile.Field effect transistors are another type of electronic component that can be used in memory cells.
These transistors include a pair of conductive source/drain regions with a semiconductor channel region in between. The conductive gate is adjacent to the channel region and separated from it by a thin gate insulator. Applying a suitable voltage to the gate allows current to flow from one of the source/drain regions to the other through the channel region. When the voltage is removed from the gate, current is prevented to a large extent from flowing through the channel region. Field effect transistors may also include additional structures, such as charge storage regions that can be reversibly programmed, as part of the gate structure between the gate insulator and the conductive gate. In any case, the gate insulator can be programmable, such as ferroelectric.An ongoing goal in the manufacture of memories and other circuit systems is to make components that are getting smaller and closer together. Unfortunately, undesired parasitic capacitance appears and increases to place closer conductors next to each other, and may adversely affect the design and operation of the circuit system.Description of the drawingsFigure 1 is a schematic hybrid schematic and cross-sectional view of a part of a DRAM configuration according to some embodiments of the present invention and is taken along line 1-1 in Figures 1 to 8.Fig. 2 is a view taken along line 2-2 in Figs. 1, 7 and 8.Fig. 3 is a view taken along line 3-3 in Figs. 1, 7 and 8.Fig. 4 is a view taken along line 4-4 in Figs. 1, 7 and 8.Fig. 5 is a view taken along line 5-5 in Figs. 1, 7 and 8.Fig. 6 is a view taken along line 6-6 in Figs. 1, 7 and 8.Figure 7 is a view taken along line 7-7 in Figures 1 to 6.Fig. 8 is a view taken along line 8-8 in Figs. 2 to 6.9 is a schematic cross-sectional view of a part of the lead substrate structure of the substrate structure of FIG. 1 in the process according to an embodiment of the present invention and is taken along the line 9-9 in FIG. 10.Fig. 10 is a view taken along line 10-10 in Fig. 9.FIG. 11 is a view of the substrate of FIG. 9 at a processing step after the processing step shown by FIG. 9 and is taken along the line 11-11 in FIGS. 12 and 13.Fig. 12 is a view taken along line 12-12 in Figs. 11 and 13.Fig. 13 is a view taken along line 13-13 in Figs. 11 and 12.14 is a view of the substrate of FIG. 11 at a processing step after the processing step shown by FIG. 11 and is taken along the line 14-14 in FIGS. 15 and 16.Fig. 15 is a view taken along line 15-15 in Figs. 14 and 16.Figure 16 is a view taken along line 16-16 in Figures 14 and 15.FIG. 17 is a view of the substrate of FIG. 14 at a processing step after the processing step shown by FIG. 14 and is taken along the line 17-17 in FIGS. 18 and 19. FIG.Figure 18 is a view taken along line 18-18 in Figures 17 and 19.Figure 19 is a view taken along line 19-19 in Figures 17 and 18.20 is a view of the substrate of FIG. 17 at a processing step after the processing step shown by FIG. 17 and is taken along line 20-20 in FIG. 21.Fig. 21 is a view taken along line 21-21 in Fig. 20.22 is a view of the substrate of FIG. 20 at a processing step after the processing step shown by FIG. 20 and is taken along line 22-22 in FIG. 23.Fig. 23 is a view taken along line 23-23 in Fig. 22.24 is a view of the substrate of FIG. 22 at a processing step after the processing step shown by FIG. 22 and is taken along line 24-24 in FIG. 25. FIG.FIG. 25 is a view taken along line 25-25 in FIG. 24. FIG.FIG. 26 is a view of the substrate of FIG. 
24 shown at a processing step after the processing step shown by FIG. 22 and with respect to the cross-section of FIG. 6.Detailed DescriptionEmbodiments of the present invention encompass integrated circuit system configurations such as DRAM configurations, and methods for forming integrated circuit system configurations such as DRAM configurations. A first example embodiment comprising a DRAM structure is described with reference to FIGS. 1 to 8, which show an example fragment of a substrate structure 8 comprising an array or array area 10 that has been fabricated with respect to a base substrate 11. The substrate 11 may include any one or more of conductive/conductor/conducting (that is, electrically, in this context), semiconductive/semiconductor/semi-conducting, and insulating/insulator/insulative (that is, electrically, in this context) materials. Various materials are above the base substrate 11. Materials may be beside, vertically inside of, or vertically outside of the materials depicted in FIGS. 1 to 8. For example, other partly or wholly fabricated components of the integrated circuit system may be provided somewhere on, around, or within the base substrate 11. Control and/or other peripheral circuitry for operating the components in the memory array may also be fabricated and may or may not be wholly or partly within the memory array or sub-array. In addition, multiple sub-arrays may also be fabricated and operated independently, cooperatively, or otherwise relative to one another. As used in this document, a "sub-array" may also be considered an array.The base substrate 11 includes a semiconductive material 12 (for example, appropriately and variously doped monocrystalline and/or polycrystalline silicon, Ge, SiGe, GaAs, and/or other existing or future-developed semiconductive materials), trench isolation regions 14 (for example, silicon nitride and/or silicon dioxide), and active area regions 16 comprising suitably and variously doped semiconductive material 12. In one embodiment, construction 8 includes memory cells 75 (FIG. 8; for clarity in those drawings, only four outlines 75 are shown in FIGS. 4 and 5), for example DRAM memory cells individually comprising a field effect transistor device 25 (FIG. 2) and a charge storage device 85 (e.g., a capacitor; FIGS. 1 and 8). However, embodiments of the present invention encompass other memory cells and other integrated circuit system configurations, whether or not memory cells are included.The field effect transistor 25 is in the form of a recessed access device (a type of field effect transistor construction), and example construction 8 shows such recessed access devices grouped in individual pairs of such devices. The individual recessed access device 25 includes a buried access line structure 18, for example within a trench 19 in the semiconductive material 12. The construction 18 includes a conductive gate material 22 (e.g., conductively doped semiconductor material and/or metal material) that serves as a conductive gate of the individual device 25. A gate insulator 20 (for example, silicon dioxide and/or silicon nitride) is along the sidewalls 21 and base 23 of the individual trench 19 between the conductive gate material 22 and the semiconductive material 12. An insulator material 37 (e.g., silicon dioxide and/or silicon nitride) is located above the materials 20 and 22 in the trench 19.
The individual device 25 includes a pair of source/drain regions 24, 26 in the upper portion of the semiconductive material 12 on opposite sides of the individual trench 19 (for example, the regions 24, 26 are laterally outside of and higher than the access line structure 18). At least a portion of each of the source/drain regions 24, 26 has a conductivity-increasing dopant therein having the greatest concentration of such conductivity-increasing dopant within the respective source/drain region 24, 26, for example to render such portion conductive (for example, a maximum dopant concentration of at least 1×10^19 atoms/cm^3). Accordingly, all or only a portion of each source/drain region 24, 26 may have this maximum concentration of conductivity-increasing dopant. The source/drain regions 24 and/or 26 may include other doped regions (not shown), such as halo regions, LDD regions, and so on.One of the source/drain regions (for example, region 26) of the pair of source/drain regions in each of the pair of recessed access devices 25 is laterally between the conductive gate materials 22 of the pair and is shared by the pair of devices 25. The other of the source/drain regions (for example, region 24) of the pair of source/drain regions is not shared by the pair of devices 25. Therefore, in the example embodiment, each active area region 16 includes two devices 25 (e.g., a pair of devices 25), each of which shares a central source/drain region 26. The digit line structure 30 is directly electrically coupled to a shared source/drain region 26 of a plurality of the individual pairs of devices 25. A pair of capacitors 85 (FIGS. 1 and 8) are individually directly electrically coupled to one of the other source/drain regions 24 in the respective pair of devices 25. The vertically extending conductive paths 34 (for example, metal materials and/or conductively doped semiconductor materials) are longitudinally spaced along the digit line structure 30. The conductive paths 34 individually directly electrically couple the digit line structure 30 to individual ones of the shared source/drain regions 26 of the individual pairs of devices 25. A vertically extending conductive path 36 (of the same or different composition as path 34) is shown interconnecting the non-shared source/drain region 24 and the individual capacitor 85. Example insulator materials 38, 39, and/or 40 (e.g., silicon nitride and/or silicon dioxide) surround the paths 34, 36.A channel region 27 is located below the pair of source/drain regions 24 and 26 in the semiconductive material 12, along the trench sidewalls 21 and around the trench base 23. The channel region 27 may be suitably doped with a conductivity-increasing dopant of conductivity type opposite to that of the dopant in the source/drain regions 24, 26, for example at a maximum concentration in the channel of no more than 1×10^17 atoms/cm^3. When a suitable voltage is applied to the gate material 22 of the access line structure 18, a conductive channel forms in the channel region 27 close to the gate insulator 20 (for example, along channel current flow lines/paths 29 [FIG. 8]), such that current can flow between the pair of source/drain regions 24 and 26 beneath the access line structure 18 in the individual active area region 16.
The stippling is schematically shown to indicate the predominant conductivity-modifying dopant concentration (regardless of type), with denser stippling indicating higher dopant concentration and lighter stippling indicating lower dopant concentration. Conductivity-modifying dopant can be, and likely would be, in other portions of the material 12, as shown. For convenience, only two different stipple densities are shown in material 12; additional dopant concentrations may be used, and a constant dopant concentration is not required within any region.The digit line structure 30 includes a conductive material 42 (of the same or different composition as that of the conductive paths 34 and/or 36) that is directly electrically coupled to the conductive paths 34 and extends between the conductive paths 34 that are immediately adjacent in the longitudinal direction. The digit line structure 30 includes an upper insulator material 50 (for example, silicon nitride and/or silicon dioxide) and an insulator material 38 above the conductive material 42. The digit line structure 30 also includes an upper insulating material 44 (e.g., one or more of silicon dioxide, silicon nitride, aluminum oxide, hafnium oxide, etc., with an example thickness of 10 to 100 Angstroms) directly below the conductive material 42 between the longitudinally immediately adjacent conductive paths 34. The digit line structure 30 also includes a doped or undoped semiconductor material 46 (example thickness 25 to 250 Angstroms) directly below the upper insulating material 44 between the longitudinally immediately adjacent conductive paths 34. In this document, "doped" and "undoped" refer to the conductivity-modifying impurity present in the example semiconductor material 46, where "undoped semiconductor material" is defined as having from 0 atomic percent to less than 4.0 atomic percent conductivity-modifying impurity therein, and "doped semiconductor material" means having at least 4.0 atomic percent and up to and including 57.7 atomic percent conductivity-modifying impurity therein. The digit line structure 30 also includes a lower insulating material 48 (e.g., one or more of silicon dioxide, silicon nitride, aluminum oxide, hafnium oxide, etc., with an example thickness of 10 to 200 Angstroms) directly below the semiconductor material 46 between the longitudinally immediately adjacent conductive paths 34.In an ideal embodiment, the semiconductor material 46 does not directly abut the conductive material 42 anywhere and does not directly abut any of the conductive paths 34 anywhere. In one embodiment, the semiconductor material 46 does not directly abut any conductive material anywhere, whereby, for example, its voltage or any electric field therein is allowed to float during operation. In one embodiment, the semiconductor material 46 is undoped, and in one such embodiment contains no conductivity-modifying impurity (i.e., such impurity is not detectable in the material 46). In one embodiment, the semiconductor material 46 is doped. In one such embodiment, the semiconductor material 46 is semiconductively doped (i.e., from 1×10^15 atoms/cm^3 to less than 1×10^19 atoms/cm^3), and in another such embodiment is conductively doped (i.e., at least 1×10^19 atoms/cm^3 and, for example, less than 1×10^22 atoms/cm^3).
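The concentration and atomic-percent bands just defined lend themselves to a compact restatement. The following sketch is not part of the disclosure; it simply encodes those bands as helper functions, and the function names are illustrative.

def classify_by_concentration(atoms_per_cm3: float) -> str:
    """Classify a doping level per the concentration bands stated in the text."""
    if atoms_per_cm3 < 1e15:
        return "below the stated semiconductive-doping band"
    if atoms_per_cm3 < 1e19:
        return "semiconductively doped (1e15 to <1e19 atoms/cm^3)"
    if atoms_per_cm3 < 1e22:
        return "conductively doped (>=1e19 and, e.g., <1e22 atoms/cm^3)"
    return "above the example conductive-doping band"

def classify_by_atomic_percent(impurity_at_pct: float) -> str:
    """Apply the document's doped/undoped definition for semiconductor material 46."""
    if 0.0 <= impurity_at_pct < 4.0:
        return "undoped semiconductor material"
    if 4.0 <= impurity_at_pct <= 57.7:
        return "doped semiconductor material"
    return "outside the stated definition"

print(classify_by_concentration(5e16))    # semiconductively doped
print(classify_by_concentration(2e19))    # conductively doped
print(classify_by_atomic_percent(0.5))    # undoped semiconductor material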
In one embodiment, the semiconductor material 46 includes both doped and undoped portions.In one embodiment, the conductive material 42 mainly includes (that is, means greater than 50% by volume up to and including 100% by volume) metal material, and the semiconductor material 46 mainly includes a combination of polysilicon and conductivity modifying dopants . In one embodiment, the conductive path 34 mainly includes conductive doped polysilicon.In one embodiment, the conductive material 42 directly abuts the top surface of the upper insulating material 44. In one embodiment, the upper insulating material 44 directly abuts the top surface of the semiconductor material 46. In one embodiment, the semiconductor material 46 directly abuts the top surface of the lower insulating material 48. In one embodiment, the conductive via 34 and the upper insulating material 44 have corresponding flat top surfaces, in one embodiment, the flat top surfaces are coplanar. In one embodiment, the upper insulating material 44 and the lower insulating material 48 each have the same composition relative to each other. In one such embodiment, the insulator material 38 has a different composition from the upper insulating material 44 and the lower insulating material 48, wherein the insulator material 38 is located longitudinally along the digit line structure 30 between (a) and (b), where (a ) Is the upper insulating material 44, the semiconductor material 46, and the lower insulating material 48, and (b) is the conductive path 34 (FIGS. 4 and 7).Any other attributes or aspects as shown and/or described herein with respect to other embodiments may be used.Embodiments of the present invention cover integrated circuit system configurations (for example, 8) regardless of whether DRAM or other memory circuitry is included. This configuration includes a substrate (e.g., 11) containing the conductive nodes (e.g., 24, 26) of the integrated circuit system. The wire structure (for example, 30, and regardless of whether a digit line is included) is above the conductive node. Vertically extending conductive paths (for example, 34) are longitudinally spaced along the wire structure. The conductive via directly electrically couples the wire structure individually to the individual ones of the conductive nodes. The wire structure includes a conductive material (e.g., 42) that is directly electrically coupled to the conductive path and extends between the conductive paths immediately adjacent in the longitudinal direction. The wire structure includes an insulating material (e.g., 44) directly below the conductive material between longitudinally adjacent conductive paths. The wire structure includes a doped or undoped semiconductor material (for example, 46) directly below the insulating material between the longitudinally immediately adjacent conductive paths. The wire structure includes a lower insulating material (eg, 48) directly below the semiconductor material between longitudinally adjacent conductive paths. In one embodiment, the wire structure includes the digital lines of the memory circuitry, and in one such embodiment, the memory circuitry includes DRAM (eg, regardless of whether it includes any of the specific example configurations described above with respect to FIGS. 1 to 8). 
Any other attributes or aspects as shown and/or described herein with respect to other embodiments may be used.Embodiments of the present invention encompass methods for forming an integrated circuit system structure (e.g., comprising DRAM, other memory, and/or non-memory circuitry). Regardless, the method aspects of the invention may use or have any of the structural aspects described above.Referring to FIGS. 9 and 10, such a method includes providing a substrate (e.g., 8) comprising a conductive node (e.g., 26), a lower insulating material (e.g., 48) directly above the conductive node, a doped or undoped semiconductor material (e.g., 46) directly above the lower insulating material, and an upper insulating material (e.g., 44) directly above the semiconductor material. In one embodiment, the conductive node is a source/drain region of a recessed access device (e.g., 25 in FIG. 2), which in one embodiment is part of a DRAM.Referring to FIGS. 11 to 13, contact openings (e.g., 56) have been formed through the upper insulating material, the semiconductor material, and the lower insulating material. The contact openings individually extend to individual ones of the conductive nodes.Referring to FIGS. 14 to 16, and in one embodiment, the sidewalls of the contact openings 56 have been lined with an insulator material (e.g., 39). By way of example, this can be formed by depositing the insulator material to the depicted example thickness, followed by maskless anisotropic etch-back to substantially remove it from the upper horizontal surfaces. This can reduce the thickness of the upper insulating material (not shown).Referring to FIGS. 17 to 19, a conductor material (e.g., 35) has been formed in the contact openings 56 directly against the individual conductive nodes. This can occur by depositing the conductor material in the contact openings 56 and on top of the upper insulating material, and then removing the conductor material at least to the top surface of the upper insulating material. This can reduce the thickness of the upper insulating material (not shown).Referring to FIGS. 20 and 21, a conductive material (for example, 42) has been formed directly above the upper insulating material and the conductor material 35, with the conductive material 42 directly abutting the conductor material 35. The upper insulator material 50 is also shown as having been deposited over the conductive material.Referring to FIGS. 22 and 23, the conductive material, the upper insulating material (for example, 44), and the semiconductor material have been patterned to form a wire structure (for example, 30) above the conductive nodes. This can reduce the thickness of the insulator material 39 (not shown). In one embodiment and as shown, the conductor material (e.g., 35) has been patterned to reduce its width within individual ones of the contact openings, and in one embodiment the upper insulator material 50 has also been patterned. In one embodiment, such patterning is performed together in a single masking step (e.g., using lithography and etching, with or without hard-masking materials and/or with or without pitch multiplication).Vertically extending conductive paths 34 have thereby been formed, spaced longitudinally along the wire structure. The conductive paths comprise the conductor material and individually directly electrically couple the wire structure to the individual conductive nodes.
The wire structure is formed to include a conductive material directly electrically coupled to the conductive path and extend between longitudinally immediately adjacent conductive paths. The upper insulating material (for example, 44) is directly below the upper conductive material immediately between the conductive paths in the longitudinal direction. The semiconductor material is directly below the upper insulating material (for example, 44) between the conductive paths in the longitudinal direction. The lower insulating material is directly below the semiconductor material between the conductive paths in the longitudinal direction. In one embodiment, the semiconductor material does not directly abut the conductive material anywhere and does not directly abut any of the conductive paths anywhere.Referring to Figures 24 and 25, and in one embodiment, an insulator material 38 has been formed. By way of example, this can be formed by depositing it with an example showing the thickness, followed by maskless anisotropic etching to substantially remove this from the upper horizontal surface.Referring to FIG. 26, an insulator material (e.g., 40) has been deposited and patterned as shown to form contact openings 45. This can then be filled with a conductive material for forming conductive via 36 (Figure 6).Any other attributes or aspects as shown and/or described herein with respect to other embodiments may be used.In this document, unless otherwise indicated, "vertical", "higher", "upper", "lower", "top", "at the top", "bottom", "above", " "Below", "below", "below", "up" and "down" usually refer to the vertical direction. "Horizontal" refers to a general direction along the surface of the main substrate (ie, within 10 degrees), and may be a direction relative to the processing substrate during manufacturing, and vertical is a direction generally orthogonal to the horizontal. The so-called "completely horizontal" refers to a direction along the surface of the main substrate (that is, no angle to the surface of the main substrate), and may be a direction relative to the processing substrate during manufacturing. In addition, "vertical" and "horizontal" as used herein generally refer to directions perpendicular to each other, and are independent of the orientation of the substrate in a three-dimensional space. In addition, "extend vertically" and "extend vertically" refer to a direction that is at least 45° angularly apart from the complete horizontal. In addition, the "vertical extension", "vertical extension", horizontal extension and horizontal extension of the field effect transistor refer to the channel length of the transistor along which the reference current flows between the source/drain regions in operation Orientation. For bipolar junction transistors, "vertical extension", "vertical extension", horizontal extension, and horizontal extension are the orientation of the base length along which the reference current flows between the emitter and the collector during operation.In addition, "directly above" and "directly below" require that the two stated areas/materials/components have at least some lateral overlap (ie, horizontal) relative to each other. In addition, the use of "above" without "positive" in front only requires that a certain part of the stated area/material/component above another part is vertically located outside of the other part (ie, independent of the two stated areas/materials). / Whether there is any horizontal overlap of components). 
Similarly, the use of "below" without "directly" in front only requires that a certain part of the stated area/material/component below another part is vertically located inside the other part (ie, independent of the two stated areas/ Whether there is any horizontal overlap of materials/components).Any of the materials, regions, and structures described herein can be homogeneous or heterogeneous, and in any case can be continuous over any material overlaid by any of the materials, regions, and structures Or discontinuous. Where one or more example compositions are provided for any material, the material may include, consist essentially of, or consist of one or more compositions. In addition, unless otherwise specified, any suitable or yet to be developed technique can be used to form each material, such as atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion implantation. .In addition, "thickness" itself (without a pre-direction adjective) is defined as the average straight line distance perpendicular to a given material or zone from the closest surface of the adjacent material or zone of different composition. Additionally, the various materials or regions described herein may have a substantially constant thickness or a variable thickness. If it has a variable thickness, unless otherwise indicated, the thickness refers to the average thickness, and due to the variable thickness, the material or zone will have a certain minimum thickness and a certain maximum thickness. As used herein, "different composition" only requires that those parts of the two stated materials or regions that may directly abut each other are chemically and/or physically different, for example if such materials or regions are not homogeneous. If the two stated materials or zones are not directly against each other, then the "different composition" only requires that those parts of the two stated materials or zones that are closest to each other are chemically and/or physically different, if the material or The zone is not homogeneous. In this document, when the materials, regions or structures are in at least some physical touch contact with each other, the stated materials, regions or structures "directly abut" each other. In contrast, "above," "upper," "adjacent," "edge," and "but against" without "directly" in front cover "directly against" and the intermediate materials, areas or structures in which the stated material , Regions, or structures that do not physically touch each other.Here, if the current can flow continuously from one zone-material-component to another zone-material-component in normal operation and when sufficient subatomic positive and/or negative charges are generated, it mainly passes through the subatomic positive and/or negative The movement of charge to achieve the flow, then the zone-material-components are "electrically coupled" with respect to each other. Another electronic component may be between the zone-material-component and electrically coupled to the zone-material-component. In contrast, when the zone-material-component is referred to as "direct electrical coupling", there are no intermediate electronic components between the directly electrically coupled zone-material-component (for example, no diodes, transistors, resistors, transducers, etc.). 
Devices, switches, fuses, etc.).In addition, "metal material" is any one or a combination of elemental metals, mixtures or alloys of two or more elemental metals, and any conductive metal compounds.in conclusionIn some embodiments, an integrated circuit system construction includes a substrate including conductive nodes of the integrated circuit system. The wire structure is above the conductive node. The vertically extending conductive paths are longitudinally spaced along the wire structure. The conductive via directly electrically couples the wire structure individually to individual ones of the conductive nodes. The wire structure includes a conductive material that is directly electrically coupled to the conductive path and extends between the conductive paths immediately adjacent in the longitudinal direction. The upper insulating material is directly below the conductive material between the longitudinally adjacent conductive paths. The doped or undoped semiconductor material is directly below the upper insulating material between the longitudinally adjacent conductive paths. The lower insulating material is directly below the semiconductor material between the longitudinally adjacent conductive paths.In some embodiments, a DRAM configuration includes a pair of recessed access devices. The recessed access device pairs individually include conductive gates in trenches in semi-conductive material. The gate insulator is along the sidewall and the base of the trench between the conductive gate and the semiconductor material. A pair of source/drain regions are in the upper portion of the semiconducting material on opposite sides of the trench. A channel region is located below the pair of source/drain regions in the semiconductor material along the trench sidewalls and surrounding the trench base. One of the source/drain regions of the pair of source/drain regions in the individual of the recessed access device pair is laterally interposed in the individual recessed access device pair Between the conductive gates and shared by the individual recessed access device pairs. The other of the source/drain regions of the pair of source/drain regions are not shared in the individual recessed access device pair. The digit line structure is directly electrically coupled to the one shared source/drain region of a plurality of the individual recessed access device pairs. A pair of capacitors are individually directly electrically coupled to one of the other source/drain regions in the pair of individual recessed access devices. The vertically extending conductive paths are longitudinally spaced along the digit line structure. The conductive paths directly electrically couple the digit line structure individually to individual ones of the shared source/drain regions of the individual recessed access device pairs. The wire structure includes a conductive material that is directly electrically coupled to the conductive path and extends between the conductive paths immediately adjacent in the longitudinal direction. The upper insulating material is directly below the conductive material between the longitudinally adjacent conductive paths. The longitudinal direction of the doped or undoped semiconductor material is immediately below the upper insulating material between the conductive paths. 
A lower insulating material is directly below the semiconductive material between the longitudinally adjacent conductive vias. In some embodiments, a method for forming an integrated circuitry construction includes providing a substrate that includes conductive nodes, a lower insulating material directly above the conductive nodes, a doped or undoped semiconductive material directly above the lower insulating material, and an upper insulating material directly above the semiconductive material. Contact openings are formed through the upper insulating material, the semiconductive material, and the lower insulating material. The contact openings individually extend to individual ones of the conductive nodes. A conductor material is formed in the contact openings, directly abutting the individual conductive nodes. A conductive material is formed directly above the upper insulating material and the conductor material, directly abutting the conductor material. The conductive material, the upper insulating material, the semiconductive material, and the lower insulating material are patterned to form a wire structure above the conductive nodes. Vertically extending conductive vias are longitudinally spaced along the wire structure, include the conductor material, and individually directly electrically couple the wire structure to the individual conductive nodes. The wire structure is formed to include the conductive material directly electrically coupled to the conductive vias and extending between the conductive vias that are immediately adjacent in the longitudinal direction. The upper insulating material is directly below the conductive material between the longitudinally adjacent conductive vias. The semiconductive material is directly below the upper insulating material between the longitudinally adjacent conductive vias. The lower insulating material is directly below the semiconductive material between the longitudinally adjacent conductive vias. |
A double blanket ion implant method for forming diffusion regions in memory array devices, such as a MOSFET access device, is disclosed. The method provides a semiconductor substrate with a gate structure formed on its surface. Next, a first pair of diffusion regions are formed in a region adjacent to the channel region by a first blanket ion implantation process. The first blanket ion implantation process has a first energy level and dose. The device is subjected to oxidizing conditions, which form oxidized sidewalls on the gate structure. A second blanket ion implantation process is conducted at the same location as the first ion implantation process, adding additional dopant to the diffusion regions. The second blanket ion implantation process has a second energy level and dose. The resultant diffusion regions provide the device with improved static refresh performance over prior art devices. In addition, the first and second energy levels and doses are substantially lower than an energy level and dose used in a prior art single implantation process. |
What is claimed as new and desired to be protected by Letters Patent of the United States is: 1. A method of forming a device to be used in a memory array, the device comprising a gate structure provided on a surface of a semiconductor substrate, said method comprising the steps of: forming a first doping implant within the substrate to form first and second diffusion regions underneath the surface of the substrate on opposite sides of the gate structure; diffusing a portion of said first doping implant underneath a portion of said gate structure, thereby forming first and second overlap regions corresponding to said first and second diffusion regions; and forming a second doping implant within the substrate at locations of the first and second diffusion regions to add additional dopant to the first and second diffusion regions, wherein each diffusion region comprises a first portion having a first dopant concentration and a second portion having a second dopant concentration. 2. The method of claim 1 wherein the dopant is selected from the group consisting of phosphorous, arsenic and antimony. 3. The method of claim 2 wherein the dopant is phosphorous. 4. The method of claim 1 wherein said first doping implant step is performed at a first energy level and first dose and said second doping implant is performed at a second energy level and second dose. 5. The method of claim 4 wherein the first energy level is different from the second energy level. 6. The method of claim 4 wherein the first dose is different from the second dose. 7. The method of claim 4 wherein the first energy level is less than 30 keV and the first dose is less than 7×10^12 ions/cm^2. 8. The method of claim 4 wherein the first energy level is within a range of 5 keV to 45 keV and the first dose is within a range of 1×10^12 ions/cm^2 to less than 7×10^12 ions/cm^2. 9. The method of claim 4 wherein the first energy level is approximately 15 keV and the first dose is approximately 2×10^12 ions/cm^2. 10. The method of claim 4 wherein the second energy level is less than 30 keV and the second dose is less than 1×10^13 ions/cm^2. 11. The method of claim 4 wherein the second energy level is within a range of 5 keV to 60 keV and the second dose is within a range of 1×10^12 ions/cm^2 to 1×10^13 ions/cm^2. 12. The method of claim 4 wherein the second energy level is approximately 20 keV and the second dose is approximately 4×10^12 ions/cm^2. 13. The method of claim 1 wherein the first dopant concentration is different from the second dopant concentration. 14. The method of claim 1, wherein said first and second doping implants are performed by blanket ion implanting process. 15. The method of claim 14, wherein sidewalls of the gate structure are oxidized prior to said second doping implant. 16. The method of claim 15, wherein the sidewalls of the gate structure are oxidized by a thermal re-ox process. 17. 
A method of forming a metal oxide semiconductor field effect transistor comprising the steps of: providing a gate structure on a surface of a semiconductor substrate; implanting a dopant within the substrate to form first and second diffusion regions underneath the surface of said substrate; diffusing a portion of said dopant to form first and second overlap regions underneath a portion of said gate structure corresponding to said first and second diffusion regions; oxidizing sidewalls of the gate structure; and implanting the dopant within the substrate at locations of the first and second diffusion regions to add additional dopant to the first and second diffusion regions, wherein each diffusion region comprises a first portion having a first dopant concentration and a second portion having a second dopant concentration. 18. A method of forming a device on a substrate, the device comprising a gate structure provided on a surface of the substrate, said method comprising the steps of: implanting a dopant at a first energy level and first dose into the substrate to form first and second diffusion regions underneath the surface of the substrate on opposite sides of the gate structure; diffusing a portion of said dopant to form first and second overlap regions underneath a portion of said gate structure corresponding to said first and second diffusion regions; and implanting the dopant at a second energy level and second dose into the substrate at locations of the first and second diffusion regions to add additional dopant to the first and second diffusion regions, wherein each diffusion region comprises a first portion having a first dopant concentration and a second portion having a second dopant concentration. 19. The method of claim 18 wherein the dopant is selected from the group consisting of phosphorous, arsenic and antimony. 20. The method of claim 19 wherein the dopant is phosphorous. 21. The method of claim 18 wherein the first energy level is different from the second energy level. 22. The method of claim 18 wherein the first dose is different from the second dose. 23. The method of claim 18 wherein the first energy level is less than 30 keV and the first dose is less than 7×10^12 ions/cm^2. 24. The method of claim 18 wherein the first energy level is within a range of 5 keV to 45 keV and the first dose is within a range of 1×10^12 ions/cm^2 to less than 7×10^12 ions/cm^2. 25. The method of claim 18 wherein the first energy level is approximately 15 keV and the first dose is approximately 2×10^12 ions/cm^2. 26. The method of claim 18 wherein the second energy level is less than 30 keV and the second dose is less than 1×10^13 ions/cm^2. 27. The method of claim 18 wherein the second energy level is within a range of 5 keV to 60 keV and the second dose is within a range of 1×10^12 ions/cm^2 to 1×10^13 ions/cm^2. 28. The method of claim 18 wherein the second energy level is approximately 20 keV and the second dose is approximately 4×10^12 ions/cm^2. 29. The method of claim 18 wherein the first dopant concentration is different from the second dopant concentration. 30. The method of claim 18, wherein said implanting steps are performed by a blanket ion implanting process. 31. The method of claim 30, wherein sidewalls of the gate structure are oxidized prior to said second implanting step. 32. The method of claim 31, wherein the sidewalls of the gate structure are oxidized by a thermal re-ox process. 33. 
A method of forming a memory array device on a substrate, the device comprising a gate structure provided on a surface of the substrate, said method comprising the steps of:blanket ion implanting a dopant at a first energy level and first dose into the substrate to form first and second diffusion regions underneath the surface of the substrate on opposite sides of the gate structure; diffusing a portion of said dopant to form first and second overlap regions underneath a portion of said gate structure corresponding to said first and second diffusion regions; oxidizing sidewalls of the gate structure; and blanket ion implanting the dopant at a second energy level and second dose into the substrate at locations of the first and second diffusion regions to add additional dopant to the first and second diffusion regions, wherein each diffusion region comprises a first portion having a first dopant concentration and a second portion having a second dopant concentration. 34. The method of claim 33, wherein said oxidizing step is a thermal re-ox process. |
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to the field of semiconductor memory devices and, more particularly, to a structure having improved static refresh properties in dynamic random access memory devices and a method of making it. 2. Description of the Related Art Metal oxide semiconductor (MOS) structures are basic electronic devices used in many integrated circuits. One such structure is the metal oxide semiconductor field effect transistor (MOSFET), which is typically formed in a semiconductor substrate by providing a gate structure over the substrate to define a channel region, and by forming source and drain regions on opposing sides of the channel region. To keep pace with the current trend toward maximizing the number of circuit devices contained in a single chip, integrated circuit designers continue to design integrated circuit devices with smaller and smaller feature sizes. For example, not too long ago it was not uncommon to have MOSFET devices (including CMOS devices) having channel lengths of 2 microns or more. The current state of the art for production MOSFET devices includes channel lengths of less than a 1/4 micron. As the channel lengths of MOSFET devices have been reduced, MOSFETs have become more susceptible to certain problems. One common problem is increased junction leakage, a condition affecting the refresh characteristics of a dynamic random access memory (DRAM) memory cell. DRAM is a specific category of random access memory (RAM) containing an array of individual memory cells, where each cell includes a capacitor for holding a charge and a transistor for accessing the charge held in the capacitor. Due to junction leakage, the stored charge must be re-stored in the capacitor on a periodic basis through a process known as refresh. Increased junction leakage leads to a premature depletion of the capacitor's stored charge, necessitating more frequent refresh cycles. Because resources are expended in refreshing the DRAM cells, the longer the period between refresh cycles, the better. The term "pause" is often used to represent the amount of time that a DRAM cell, or group of cells, can maintain their charge without undergoing a refresh operation. That is, it indicates how long the DRAM control circuitry can pause between refresh operations and still maintain the stored state of the DRAM memory cell. It is desirable to extend the pause period of, and improve the static refresh of, the DRAM. A manufacturer may want to improve static refresh performance of the DRAM to provide customers with the capability to perform more memory operations (e.g., reads and writes) between refresh cycles. This reduces the overhead required to utilize the DRAM. Moreover, a manufacturer may want to improve static refresh performance to improve the operating specifications of the DRAM. For example, DRAMs typically have a low-power or standby specification requiring the DRAM to operate within a maximum current during a low-power mode. Since memory cells must be refreshed during the low-power mode, reducing the frequency of the refresh operations will improve the DRAM's operational performance for the low-power mode. FIG. 1 illustrates a prior art MOSFET memory array device 5. The device 5 and its fabrication method are described in U.S. Pat. No. 5,534,449 (Dennison et al.), which is hereby incorporated by reference in its entirety. Briefly, the fabrication of the device 5 is initiated by forming a gate structure 10 on a substrate 8. 
The substrate 8 is typically a bulk silicon substrate, which may have a doped well therein in which transistors are formed. The gate structure 10 (referred to in the '449 patent as a gate line) typically comprises a gate oxide 12, a conductive polysilicon layer 14, an overlying WSix layer 16, an overlying novellus oxide layer 18 and a Si3N4 capping layer 20. The cross sectional width of this prior art gate structure 10 is 0.40 microns. Once the gate structure 10 is formed, the device 5 is subjected to oxidizing conditions. This process step is often referred to as a "re-ox" step or a thermal re-ox step. Oxidized sidewalls 22, 24 are formed on the gate structure 10, and oxide regions 26, 28 are formed on the substrate, as a result of the re-ox step. Subsequent to the re-ox step, a blanket phosphorous implant step is performed to form diffusion regions 30, 32. This blanket phosphorous implant is performed at an energy level ranging from 30 keV to 60 keV with a dose ranging from 7×10^12 ions/cm^2 to 1.5×10^13 ions/cm^2 to provide an average dopant concentration for the diffusion regions 30, 32 ranging from 1×10^17 ions/cm^3 to 1×10^19 ions/cm^3. For the prior art device 5, this blanket phosphorous implant step is performed after the re-ox step to prevent the phosphorous from diffusing too far underneath the gate structure 10, which could cause transistor leakage problems. The fabrication process of the device 5 typically includes the formation of oxide or nitride sidewall spacers 40, 42 on the sidewalls of the gate structure 10. Further processing may be performed as described in the '449 patent. Although the MOSFET memory array device 5 is a vast improvement over earlier memory array devices, it can still benefit from improved static refresh performance. Thus, it is still desirable to improve as much as possible the static refresh performance of the memory device. SUMMARY OF THE INVENTION The present invention provides a memory array device having improved static refresh over prior art memory array devices. The above and other features and advantages of the invention are achieved by a double blanket ion implant method for forming diffusion regions in memory array devices, such as a MOSFET access device. The method provides a semiconductor substrate with a gate structure formed on its surface. Next, a first pair of diffusion regions are formed in a region adjacent to the channel region by a first blanket ion implantation process. The first blanket ion implantation process has a first energy level and dose. The device is subjected to oxidizing conditions, which form oxidized sidewalls on the gate structure. A second blanket ion implantation process is conducted at the same location as the first ion implantation process adding additional dopant to the diffusion regions. The second blanket ion implantation process has a second energy level and dose. The resultant diffusion regions provide the device with improved static refresh performance over prior art devices. In addition, the first and second energy levels and doses are substantially lower than an energy level and dose used in a prior art single implantation process. BRIEF DESCRIPTION OF THE DRAWINGS The foregoing and other advantages and features of the invention will become more apparent from the detailed description of the preferred embodiments of the invention given below with reference to the accompanying drawings in which: FIG. 
1 is a fragmentary vertical cross-sectional view of a prior art memory array device having conventional diffusion regions; FIG. 2 is a fragmentary vertical cross-sectional view of an integrated circuit memory array device formed in accordance with the present invention; FIG. 3 is a fragmentary vertical cross-sectional view of the device illustrated in FIG. 2 at an early stage of formation; FIG. 4 is a fragmentary vertical cross-sectional view of the device illustrated in FIG. 3 at a later stage of formation; FIG. 5 is a fragmentary vertical cross-sectional view of the device illustrated in FIG. 4 at a later stage of formation; FIG. 6 is a fragmentary vertical cross-sectional view of the device illustrated in FIG. 5 at a later stage of formation; FIG. 7 is a fragmentary vertical cross-sectional view of the device illustrated in FIG. 6 at a later stage of formation; FIG. 8 is a graph illustrating the dopant concentration of diffusion regions within the devices illustrated in FIGS. 1 and 2; FIGS. 9 and 10 are graphs illustrating the static refresh performance of the devices illustrated in FIGS. 1 and 2; and FIG. 11 is a block diagram of a processor-based system including a memory device formed in accordance with the present invention. DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS The present invention will be described as set forth in the preferred embodiments illustrated in FIGS. 2-7 and 11. Other embodiments may be utilized and structural or logical changes may be made without departing from the spirit or scope of the present invention. Like items are referred to by like reference numerals. FIG. 2 illustrates a portion of an integrated circuit MOSFET memory array device 105 constructed in accordance with the present invention. The device 105 is preferably used as an access device of a DRAM memory cell. As will be described with reference to FIGS. 3 to 7, the device 105 including diffusion regions 130, 132 is fabricated using two blanket phosphorous ion implant steps sandwiched around a conventional re-ox step. Since two implant steps are performed, diffusion region 130 comprises two regions 130a, 130b having different dopant concentrations. Similarly, diffusion region 132 comprises two regions 132a, 132b having different dopant concentrations. As described with reference to FIGS. 9 and 10, the uniquely formed diffusion regions 130, 132 provide the device 105 with improved static refresh performance over the prior device 5 (illustrated in FIG. 1). Since the method uses two separate blanket phosphorous ion implant steps, it will be referred to hereinafter as a "double blanket ion implant method." Hereinafter, the terms "wafer" and "substrate" are used interchangeably and are to be understood as including silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technology, doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor foundation, and other semiconductor structures. Furthermore, when reference is made to a "wafer" or "substrate" in the following description, previous process steps may have been utilized to form regions or junctions in the base semiconductor structure or foundation. In addition, no particular order is required for the method steps described below, with the exception of those logically requiring the results of prior steps; for example, formation of spacers 40, 42 adjacent to the sidewalls of the gate structure 10 logically requires the prior formation of the gate structure 10 and its sidewalls. 
Otherwise, enumerated steps are provided below in an exemplary order which may be altered; for instance, the several ion implant steps may be rearranged using masking and etching steps as is known in the art. FIG. 3 shows the integrated circuit MOSFET memory array device 105 in accordance with the present invention at an early stage of formation. A gate structure 110 is provided on the substrate 8 as is known in the art and described in the '449 patent to Dennison et al. The substrate 8 is typically a bulk silicon substrate, which may have a doped well in which access transistors are to be formed. The gate structure 110 comprises a gate oxide 12, a conductive polysilicon layer 14, an overlying WSix layer 16, an overlying oxide layer 18 and a Si3N4 capping layer 20. Unlike the gate structure 10 of the prior art device 5 illustrated in FIG. 1, the cross sectional length of the gate structure 110 may be substantially reduced. For example, the cross sectional length of the gate structure 110 can be reduced to approximately 0.20 microns. An advantage of the present invention is that the length of the gate structure 110 is reduced in comparison to the prior art due to the unique fabrication processing of the present invention (described below). Referring now to FIG. 4, diffusion regions 130a, 132a are formed in the substrate 8 adjacent the sidewalls of the gate structure 110 and extend laterally away from the gate structure 110. It should be noted that a portion of the diffusion regions 130a, 132a diffuses beneath the gate structure 110. To create the diffusion regions 130a, 132a, the substrate 8 undergoes a first blanket implant step. It is desirable that an n-type dopant be used, which makes the device 105 an NMOS device. It is desirable that the n-type dopant be phosphorous. However, it should be noted that other dopants can be used if so desired. For example, other n-type dopants such as arsenic or antimony could be used. If it were desirable for the device 105 to be a PMOS device, a p-type dopant such as boron, boron bifluoride (BF2) or borane (B2H12) could be used. This first blanket phosphorous implant may be performed, for example, at an energy level of approximately 15 keV with a dose of approximately 2×10^12 ions/cm^2. It should be appreciated that any other suitable dose and energy level can be used for this step. One exemplary range for the first blanket phosphorous implant may include an energy level between approximately 5 keV and 45 keV with a dose of approximately 1×10^12 ions/cm^2 to slightly less than 7×10^12 ions/cm^2. It must be noted that this blanket phosphorous implant step is performed prior to a subsequent re-ox step since the energy level and dose are substantially lower than the energy level and dose used in the prior art (i.e., an energy level ranging from 30 keV to 60 keV with a dose ranging from 7×10^12 ions/cm^2 to 1.5×10^13 ions/cm^2 to provide an average dopant concentration for the diffusion regions 30, 32 ranging from 1×10^17 ions/cm^3 to 1×10^19 ions/cm^3). Thus, the first blanket phosphorous implant step can be performed prior to the re-ox step without having the phosphorous diffuse too far underneath the gate structure 110 and without causing subsequent transistor leakage problems. Referring now to FIG. 5, a re-ox step is performed, forming oxidized sidewalls 22, 24 on the gate structure 110 and oxide regions 26, 28 on the substrate 8. 
It should be appreciated that any conventional re-ox process can be performed at this point, such as a thermal re-ox process or a source/drain thermal re-ox process. Referring to FIG. 6, diffusion regions 130b, 132b are formed in the substrate 8 at the same location as diffusion regions 130a, 132a. To create the second diffusion regions 130b, 132b, the substrate 8 undergoes a second blanket implant step. As with the first blanket implant step, it is desirable that the dopant used be phosphorous. However, it should be noted that other dopants can be used if so desired, particularly if a different conductivity type of the device 105 is desired. This second blanket phosphorous implant may be performed at an energy level of approximately 20 keV with a dose of approximately 4×10^12 ions/cm^2. It should be appreciated that any other suitable dose and energy level can be used for this step. One exemplary range for the second blanket phosphorous implant may include an energy level between approximately 5 keV and 60 keV with a dose of approximately 1×10^12 ions/cm^2 to 1×10^13 ions/cm^2. The oxidized sidewalls 22, 24 on the gate structure 110 prevent the second implant from diffusing underneath the gate structure 110, which helps in the formation of the individual diffusion regions 130a, 130b, 132a, 132b. The two diffusion regions 130a, 130b combine to form one diffusion region 130. The resultant diffusion region 130 will have two different dopant concentrations, one from region 130a and one from region 130b. There will be a smooth transition between the dopant concentrations of the two regions 130a, 130b. Similarly, the two diffusion regions 132a, 132b combine to form one diffusion region 132. The resultant diffusion region 132 will have two different dopant concentrations, one from region 132a and one from region 132b. There will be a smooth transition between the dopant concentrations of the two regions 132a, 132b. As will be discussed below, these uniquely formed diffusion regions 130, 132 allow the device 105 to have substantially better static refresh performance in comparison to the prior art device 5 (FIG. 1). Referring to FIG. 7, oxide or nitride sidewall spacers 40, 42 may be formed on the sidewalls of the gate structure 110 (as described in the '449 patent or by any other known method). In addition, further processing may be performed to form a memory cell as described in the '449 patent. It can be seen that the device 105 has two diffusion regions 130, 132, each having a pair of diffusion regions 130a, 130b, 132a, 132b, respectively. FIG. 8 illustrates an exemplary phosphorous concentration 150 of the second diffusion region 132 with respect to its length (illustrated by arrow X). It should be noted that the first diffusion region 130 would have a similar concentration, but in a direction opposite the direction indicated by arrow X. An exemplary phosphorous concentration 152 of the prior art device is also illustrated. From the curves 150, 152, it can be seen how the second diffusion region 132 has a more graded concentration of phosphorous than the prior art diffusion regions (e.g., region 32 in FIG. 1). By more graded, we mean that the net doping concentration versus distance changes gradually. By contrast, as shown by curve 152, the diffusion region 32 (FIG. 1) of the prior art device has an abrupt change in concentration of phosphorous versus distance. That is, the net doping concentration of the prior art curve 152 undergoes a steep change with respect to distance. 
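As a quick arithmetic check, and assuming for illustration that the exemplary (rather than range-limit) values given above are used for both implants, the combined dose of the two blanket implants remains below the low end of the prior art single-implant dose range: 2×10^12 ions/cm^2 + 4×10^12 ions/cm^2 = 6×10^12 ions/cm^2, which is less than 7×10^12 ions/cm^2.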
With a graded dopant concentration of the diffusion regions, the resistance to current flow is less than in the diffusion regions of the prior art. Although the invention is not to be bound to any specific theory, it is believed that the more graded concentration of the present invention improves the static refresh of the device 105 by improving the junction at the storage node of the DRAM memory cell. Referring again to FIG. 7, it can be seen that the two diffusion regions 130, 132 slightly diffuse below the gate structure 110. That is, there is a first region 140 of the first diffusion region 130 that resides underneath a portion of the gate structure 110. Similarly, there is a second region 142 of the second diffusion region 132 that resides underneath a portion of the gate structure 110. These regions 140, 142, which can be referred to as "overlap" regions, make the device 105 more robust to reliability stressing. That is, the overlap regions 140, 142 are less likely to degrade when high voltage is applied to the device, such as the types of voltages applied during manufacturing stress testing. These regions 140, 142, which are not present in the prior art device 5 (FIG. 1), are formed by the first blanket phosphorous implant step (FIG. 4). That is, by having the first blanket phosphorous implant step (FIG. 4) prior to the re-ox step (FIG. 5), some dopant can diffuse underneath the gate structure 110, forming regions 140, 142 and causing the device 105 to have the above-mentioned robustness. This is another benefit of the present invention. A standard measure of refresh performance is known as a "time to un-repairable calculation." The term "repair" is sometimes used to indicate that a memory cell or memory bit has been repaired by electrical replacement with a redundant element. The terms "un-repaired" or "un-repairable" are often used to indicate that the number of failing bits exceeds the capability of repair by redundant elements. In the time to un-repairable test, data is written into the bits of memory cells in the DRAM array. Measurements are taken to determine when a predetermined number of bits have lost their charge and within what time. The time it takes for the bits to lose their charge is commonly referred to as the "time to un-repairable" (TTUR). Referring now to FIGS. 1, 2 and 9, the inventors ran experiments to compare TTUR results using the prior art device 5 (FIG. 1) with the results using the device 105 (FIG. 2) constructed in accordance with the present invention. FIG. 9 illustrates results from TTUR tests based on finding 100 bits that have lost their charge. The y-axis indicates the probability that 100 bits have lost their charge. The x-axis indicates the time when the charge was lost (and when a refresh operation became necessary). The first set of data 160 illustrates the results using the device 105 of the present invention. The second set of data 162 illustrates the results using the device 5 of the prior art. From the data 160, 162, it can be seen that 100 bits lost their charge (with 50% probability, i.e., 0.5 on the y-axis) using the prior art device 5 at approximately 120 milliseconds, while 100 bits lost their charge using the device 105 at approximately 210 milliseconds. That is, there is almost a 90 millisecond improvement in the device 105 constructed in accordance with the present invention. It is believed that this improvement is due to the uniquely formed diffusion regions 130, 132 of the device 105. Referring now to FIGS. 1, 2 and 10, FIG. 
10 illustrates results from TTUR tests based on finding 200 bits that have lost their charge. The y-axis indicates the probability that 200 bits have lost their charge. The x-axis indicates the time when the charge was lost (and when a refresh operation became necessary). The first set of data 170 illustrates the results using the device 105 while the second set of data 172 illustrates the results using the device 5. From the data 170, 172, it can be seen that 200 bits lost their charge (with 50% probability, i.e., 0.5 on the y-axis) using the prior art device at approximately 240 milliseconds, while 200 bits lost their charge using the device 105 at approximately 310 milliseconds. That is, there is almost a 70 millisecond improvement.FIG. 11 illustrates a block diagram of a processor based system 200 utilizing a DRAM memory circuit 208 constructed in accordance with the present invention. That is, the memory circuit 208 utilizes the MOSFET memory array device 105 (FIG. 2) constructed in accordance with the present invention (FIGS. 3 to 7). The processor-based system 200 may be a computer system, a process control system or any other system employing a processor and associated memory. The system 200 includes a central processing unit (CPU) 202, e.g., a microprocessor, that communicates with the DRAM memory circuit 208 and an I/O device 204 over a bus 220. It must be noted that the bus 220 may be a series of buses and bridges commonly used in a processor-based system, but for convenience purposes only, the bus 220 has been illustrated as a single bus. A second I/O device 206 is illustrated, but is not necessary to practice the invention. The processor-based system 200 also includes a read-only memory (ROM) circuit 210 and may include peripheral devices such as a floppy disk drive 212 and a compact disk (CD) ROM drive 214 that also communicates with the CPU 202 over the bus 220 as is well known in the art. It should be noted that the CPU 202 can be combined on a single chip with one or more DRAM memory circuits 208 and ROM circuits 210.While the invention has been described in detail in connection with the preferred embodiments known at the time, it should be readily understood that the invention is not limited to such disclosed embodiments. Rather, the invention can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. Accordingly, the invention is not to be seen as limited by the foregoing description, but is only limited by the scope of the appended claims. |
Single Instruction, Multiple Data (SIMD) technologies are described. A method of performing a key value lookup instruction may include storing a vector of keys to a first register and storing a vector of values corresponding to the keys to a second register. A processor may receive a key value lookup instruction including a vector of key input elements. The processor may compare each key input element to each key to determine matching keys. The processor may then store values corresponding to the matching keys to an output vector in the positions of the corresponding key input elements. |
Claims What is claimed is: 1. A processor comprising: a first register to store a key vector comprising a plurality of key elements; a second register to store a value vector comprising a plurality of value elements associated with the key elements; an execution unit coupled to the first register and the second register, the execution unit to: compare a key input element of a key input vector to each key element of the key vector; and responsive to determining that the key input element matches a key element, generate an output vector comprising, in a position offset from a base position of the output vector equal to an offset of the key input element from a base position of the key input vector, a value element associated with the key element. 2. The processor of claim 1, wherein the execution unit is further to produce a permute index vector referencing key elements, wherein an entry in the permute index vector references an offset of the key element from a base position of the key vector and has an offset from the base position of the permute index vector equal to the offset of the key input element. 3. The processor of claim 2, wherein to generate an output vector, the execution unit is to: identify the value element based on the value of the entry in the permute index vector, wherein the offset of the value element from a base element of the value vector is equal to the value of the entry in the permute index vector; and store the value element to the output vector in a position offset from the base position of the output vector equal to the offset from the base position of the permute index vector. 4. The processor of claim 2, wherein the execution unit is to store, to the permute index vector, a mask value for a second key input element that does not match any key element. 5. The processor of claim 1, wherein the second register stores the plurality of key elements in sorted order, wherein each key has a particular offset and comprises an integer value that is larger than the value of any key having a smaller offset from the base position of the key vector. 6. The processor of claim 1, wherein the execution unit is further to compare each key element to each key input element in parallel. 7. The processor of claim 6, further comprising a plurality of digital comparators coupled to the first register, wherein to compare each key element to each key input element comprises the execution unit to provide each of the plurality of key input elements and each of the plurality of key elements to the plurality of digital comparators. 8. The processor of claim 7, further comprising a third register coupled to the plurality of digital comparators, wherein the third register is to store the key input vector. 9. 
A system comprising: a processor core; and a memory element coupled to the processor core, wherein the memory element comprises microcode to cause the processor core to: store a key vector comprising a plurality of key elements in a first register; store a value vector comprising a plurality of value elements in a second register, wherein each value element is associated with a key element; receive a key input vector comprising a plurality of key input elements; compare each key element to each key input element to determine a subset of key elements, wherein each key element in the subset of key elements matches at least one of the plurality of key input elements; and store a subset of value elements to a third register, wherein each value element in the subset of value elements in the third register is associated with a key element in the subset of key elements and in a position offset from a base position of the third register equal to an offset of an associated key input element from a base position of the key input vector. 10. The system of claim 9, wherein the processor core is further to: generate a permute index vector based on key elements that match key input elements; and perform a vector permute operation using the permute index vector and the value vector. 11. The system of claim 10, wherein to generate a permute index, the processor core is to store, to the permute index vector, an entry having an offset from a base position of the permute index equal to the offset of an associated key input element from the base position of the key input element and having a value referencing the position of a key element that matches the key input element. 12. The system of claim 10, wherein to perform a vector permute operation, the processor is to: identify a value element based on a value of an entry in the permute index vector, wherein the offset of the value element from a base element of the value vector is equal to the value of the entry in the permute index vector; and store the value element to the third register in a position offset from the base position of the third register equal to the offset of the entry in the permute index vector from the base position of the permute index vector. 13. The system of claim 10, wherein the processor core is further to provide a mask value to the permute index vector in response to determining that a key input element of the key input vector does not match any key element in the key vector. 14. The system of claim 9, wherein the processor core is further to compare each key element to each key input element in parallel using a single instruction, multiple data register. 15. 
A method comprising: storing a key vector comprising a plurality of key elements to a first processor register; storing a value vector comprising a plurality of value elements to a second processor register, wherein each value element is associated with a key element; receiving a plurality of key input elements; comparing, by a processor, each key input element to each key element to determine a subset of key elements, wherein each key element in the subset of key elements matches one of the key input elements; determining a subset of the plurality of value elements, wherein each element in the subset of value elements is associated with one of the key elements in the subset of key elements; and storing, by the processor, each element in the subset of the plurality of value elements in a position in a third register offset from a base position of the third register equal to an offset of an associated key input element from a base position of the key input vector. 16. The method of claim 15, further comprising generating, by the processor, a permute index vector having entries referencing the position in the value vector of value elements associated with key elements in the subset of key elements, wherein each entry in the permute index vector has an offset from a base position of the permute index vector equal to an offset of an associated key input element from a base position of the key input vector. 17. The method of claim 16, wherein storing the subset of value elements comprises: identifying, by the processor, a value element based on a value of an entry in the permute index vector; and storing the value element to the third register in a position offset from the base position of the third register equal to the offset of the entry in the permute index vector from the base position of the permute index vector. 18. The method of claim 16, further comprising storing a mask value to the index vector in response to determining that a key input element does not match any key element, wherein the position of the mask value in the index vector has an offset equal to an offset of the key input element. 19. The method of claim 15, further comprising storing a mask value to the third register in response to determining that a key input element does not match any key element, wherein the position of the mask value in the third register has an offset equal to an offset of the key input element. 20. The method of claim 15, wherein comparing each element of the key input vector to each element of the key vector is performed in parallel using vector registers. 21. A machine readable medium including code, when executed, to cause a machine to perform the method of any one of claims 15 to 20. 22. 
An apparatus comprising: means for storing a key vector comprising a plurality of key elements to a first processor register and a value vector comprising a plurality of value elements to a second processor register, wherein each value element is associated with a key element; means for receiving a plurality of key input elements; means for comparing each key input element to each key element to determine a subset of key elements, wherein each key element in the subset of key elements matches one of the key input elements; means for determining a subset of the plurality of value elements, wherein each element in the subset of value elements is associated with one of the key elements in the subset of key elements; and means for storing each element in the subset of the plurality of value elements in a position in a third register offset from a base position of the third register equal to an offset of an associated key input element from a base position of the key input vector. 23. The apparatus of claim 22, further comprising: means for generating a permute index vector having entries referencing the position in the value vector of value elements associated with key elements in the subset of key elements, wherein each entry in the permute index vector has an offset from a base position of the permute index vector equal to an offset of an associated key input element from a base position of the key input vector. 24. The apparatus of claim 23, further comprising: means for identifying a value element based on a value of an entry in the permute index vector; and means for storing the value element to the third register in a position offset from the base position of the third register equal to the offset of the entry in the permute index vector from the base position of the permute index vector. 25. The apparatus of claim 22, further comprising means for storing a mask value to the third register in response to determining that a key input element does not match any key element, wherein the position of the mask value in the third register has an offset equal to an offset of the key input element. |
PROCESSING DEVICES TO PERFORM A KEY VALUE LOOKUP INSTRUCTION Background [0001] Single Instruction, Multiple Data (SIMD) architectures can be implemented in microprocessor systems to enable one instruction to operate on several operands in parallel. SIMD architectures take advantage of packing multiple data elements within one register or contiguous memory location. With parallel hardware execution, multiple operations are performed on separate data elements by one instruction to increase the performance of the microprocessor systems. Brief Description of the Drawings [0002] Various embodiments of the present invention will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the invention. [0003] FIG. 1 is a block diagram illustrating a computing system that implements a key value lookup instruction according to one embodiment. [0004] FIG. 2 illustrates a diagram of a method of performing a key value lookup operation according to one embodiment. [0005] FIG. 3 illustrates example operations of a Single Instruction, Multiple Data key value lookup instruction according to one embodiment. [0006] FIG. 4A illustrates example operations of a Single Instruction, Multiple Data key value lookup instruction according to one embodiment. [0007] FIG. 4B illustrates example operations of a Single Instruction, Multiple Data key value lookup instruction according to one embodiment. [0008] FIG. 5A is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline according to one embodiment. [0009] FIG. 5B is a block diagram illustrating a micro-architecture for a processor that implements secure memory repartitioning according to one embodiment. [0010] FIG. 6 illustrates a block diagram of the micro-architecture for a processor that includes logic circuits to perform secure memory repartitioning according to one embodiment. [0011] FIG. 7 is a block diagram of a computer system according to one implementation. [0012] FIG. 8 is a block diagram of a computer system according to another implementation. [0013] FIG. 9 is a block diagram of a system-on-a-chip according to one implementation. [0014] FIG. 10 illustrates another implementation of a block diagram for a computing system according to one implementation. [0015] FIG. 11 illustrates another implementation of a block diagram for a computing system according to one implementation. Description of Embodiments [0016] A processor may use vector instruction sets or single instruction, multiple data (SIMD) instruction sets to perform multiple operations in parallel. A processor can thus apply operations to the same piece of data or to multiple pieces of data at the same time. Vectorization is an operation to convert a scalar program that only operates on one pair of operands at once to a vector program that can run multiple operations from a single instruction. For example, vectorization may involve rewriting a loop operation to perform a SIMD instruction, where instead of processing a single element of an array N times, it processes M elements of the array simultaneously N/M times. [0017] Vectorization can implement a key value lookup instruction to identify values based on a set of key inputs. Key value lookups are a frequent operation in databases, data mining, graph analytics, and other applications. A key value lookup is performed using associative arrays, dictionaries, or map data structures. 
The data structure used for a key value lookup has a collection of key and value pairs. Each key in the collection of keys has a single corresponding value. In some embodiments, redundant keys are stored with a corresponding value. A key value lookup instruction may accept a key input and identify the position of the key in a key index. The key index may reference a value associated with each key. So, in the process of a key value lookup instruction, a processor may identify and return a value associated with a key input. In some embodiments, the values associated with the keys may be used as references that point to another value. [0018] In a non-vectorized implementation of a key value lookup instruction, one key input is read at a time and compared to keys in a collection of keys until there is a match. When the input key is found in the key index, the associated value is returned. For a set of key inputs, each input is compared to the keys in the collection of keys until there is a match. Processors implementing branch predictions for conditional statements may incur a high penalty for mispredictions when performing key value lookup operations. For example, branch prediction may be difficult because the occurrence of a match to one key element may be independent of a match on the previous key element. The number of hard-to-predict conditional branches may be reduced by implementing a key value lookup in an SIMD processor. For example, implementing a key value lookup instruction using SIMD registers may enable the processor to advance a pointer in the collection of keys by more than one element at a time. While this may increase the total number of comparisons between key input elements and key elements performed in a key value lookup instruction, these comparisons may be executed in parallel by the SIMD instructions and also reduce the overhead of branch mispredictions. [0019] The embodiments described herein address the above-noted deficiencies by performing key value lookup instructions with SIMD operations. Two registers may be used to store a set of keys and associated values. The processor may receive an instruction to perform a key value lookup operation on a set of key inputs in a key input vector stored in another register. The processor may then perform a comparison of each element in the key input vector to each element in the set of keys. For those key inputs matching a key in the set of keys, an associated value may be returned to an output vector. Those key inputs not matching a key in the set of keys may return a mask value. In some embodiments, when comparing key inputs to a set of keys, the processor may generate a permute index. The processor may then use the permute index to perform a permute operation on the set of values associated with the keys.
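To make the contrast concrete, the following minimal C sketch models the scalar, branchy baseline described in paragraph [0018]; the names (keys, values, key_in, out, NOT_FOUND) are illustrative assumptions rather than identifiers from the embodiments, and the inner comparison is the hard-to-predict branch that the SIMD formulation is intended to amortize. A corresponding sketch of the vectorized semantics appears with the discussion of Figure 2 below.

```c
#include <stddef.h>
#include <stdint.h>

#define NOT_FOUND UINT32_MAX   /* illustrative "mask" result for a key input with no match */

/* Scalar reference lookup: for each key input, scan the collection of keys until a
 * match is found and copy out the value stored at the matching position. */
static void scalar_key_value_lookup(const uint32_t *keys, const uint32_t *values,
                                    size_t n_pairs,
                                    const uint32_t *key_in, uint32_t *out, size_t n_in)
{
    for (size_t i = 0; i < n_in; i++) {
        out[i] = NOT_FOUND;
        for (size_t j = 0; j < n_pairs; j++) {
            if (key_in[i] == keys[j]) {    /* data-dependent, hard-to-predict branch */
                out[i] = values[j];
                break;
            }
        }
    }
}
```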
[0020] Figure 1A is a block diagram illustrating a computing system 100 that implements a key value lookup instruction according to one embodiment. The computing system 100 is formed with a processor 102 that includes one or more execution units 108 to execute a key value lookup instruction 109 and a memory decoder 105 to decode a key value lookup instruction 109. The key value lookup instruction 109 implements one or more features in accordance with one or more embodiments as described herein. The computing system 100 may be any device, but the description of various embodiments described herein is directed to processors including one or more vector registers 104 and capable of performing one or more SIMD instructions. [0021] A register set 106 includes one or more registers to store data elements used by execution unit(s) 108 during performance of instructions. The register set 106 may store different types of data in various registers including integer registers, floating point registers, vector registers, banked registers, shadow registers, checkpoint registers, status registers, and an instruction pointer register. In particular, register set 106 may include a vector register 104 that holds data for vector processing by SIMD instructions. For example, one or more vector registers 104 may store a set of keys, a set of associated values, or a set of key inputs for use in performance of a key value lookup instruction 109. One or more vector registers 104 may also be used for storing intermediate vectors generated in the performance of a key value lookup instruction 109. For instance, a permute index may be generated by execution unit 108 and stored to a vector register 104 for use in performing a key value lookup instruction 109. [0022] Decoder 105 may decode a key value lookup instruction 109, which may specify a set of key inputs to compare to a set of key value pairs. The execution unit 108 may then, in response to the decoded key value lookup instruction 109, store one or more of the key inputs, keys, or associated values into one or more vector registers 104. The execution unit 108 may then perform operations of the key value lookup instruction 109. For example, the key value lookup instruction may perform the methods described further below with reference to Figure 2. [0023] Execution unit 108, including logic to perform integer and floating point operations, as well as vector operations, also resides in the processor 102. It should be noted that the execution unit may or may not have a floating point unit. The processor 102, in one embodiment, includes a microcode read-only memory (ROM) to store microcode, which, when executed, is to perform processes for certain macroinstructions or handle complex scenarios. For example, the microcode may include a set of operations to perform a key value lookup instruction with an execution unit 108. For example, the microcode may include a set of micro operations that implement one or more processes described with reference to Figures 2-4B. In some embodiments, microcode may be potentially updateable to handle logic bugs/fixes for processor 102. In some embodiments, another memory element may comprise microcode instructions for performing operations to implement a key value lookup instruction. [0024] In some embodiments, processor 102 includes a memory interface 107 and processor 102 is coupled to memory 120. In one embodiment, memory interface 107 may be a bus protocol for communication from processor 102 to memory 120. Memory 120 may include a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, or other memory device. Memory 120 stores instructions and/or data represented by data signals that are to be executed by the processor 102. For example, the memory 120 may include computer program instructions, which when compiled and decoded by decoder 105 instruct processor 102 to perform a key value lookup instruction 109. 
The memory 120 may also include a set of key value pairs for performing a key value lookup instruction 109, a set of key inputs, or may receive from the processor 102 results of a key value lookup instruction 109.[0025] The processor 102 is coupled to the memory 120 via a processor bus 110. A system logic chip, such as a memory controller hub (MCH) may be coupled to the processor bus 110 and memory 120. An MCH can provide a high bandwidth memory path to memory 120 for instruction and data storage and for storage of graphics commands, data and textures. The MCH can be used to direct data signals between the processor 102, memory 120, and other components in the system 100 and to bridge the data signals between processor bus 110, memory 120, and system I/O, for example. The MCH may be coupled to memory 120 through a memory interface (e.g., memory interface 107).[0026] In some embodiments, the processor 102 may include an internal cache memory 104. Depending on the architecture, the processor 102 may have a single internal cache or multiple levels of internal caches. For example, the processor 102 may include a Level 1 (LI) internal cache memory and a Level 2 (L2) internal cache memory. In some embodiments, system 100 may include a combination of both internal and external caches depending on the particular implementation and needs. The execution unit 108 may access data from an internal cache memory 104 for implementing a key value lookup instruction 109. For example, a set of key value pairs or key inputs used by a program operating on computer system 100 may include more elements than can be stored in a register in register set 106. In such circumstances, additional elements may be stored in cache memory 103 to improve the performance of processor 102 as additional elements are loaded from the memory device 120.[0027] Figure 2 illustrates a diagram of a method of performing a key value lookup instruction on an array of values according to one embodiment. The method may be at least partially performed by a processing device or processing logic that may include hardware (e.g. circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions executed by a processing device), firmware, or a combination thereof.[0028] Referring to Figure 2, the method begins with storing a vector of key elements in a first processor register in block 210. For example, the vector of key elements may be a collection of keys each paired with an associated value. In some embodiments, the collection of keys may have no repeated elements and may be stored into the first processor register in sorted order. For example, each key element may have a value greater than or equal to any key elements having a smaller offset from the base position of the vector of key elements.[0029] The method continues in block 220 to store a vector of values in a second processor register. Each of the values in the vector of values may be associated with a key in the vector of keys in the first processor register. For example, each key may be associated with a particular value in the vector of values. The values may be stored in the same positions of the second processor register as the corresponding keys in the first processor register. 
For example, for a key and value pair, the key may be stored in the Nth position of the first register and the associated value may be stored in the Nth position of the second register.[0030] In block 230 the method continues to receive a key value lookup instruction including a key input vector. The key input vector includes a set of key input elements for which the processor is to determine the associated output values. The processor may store the key input vector to a register of the processor for performing the key value lookup instruction.[0031] In blocks 240-260, the processor performs operations on the various elements received in blocks 210-230 to generate an output vector according to the keys in the key input vector. For example, a particular key input vector may have key input elements with a value KIN[i] at each position i. The value KIN[i] may be compared to each element in a key vector. If the processor identifies a key with a value Key[j] at position j such that KIN[i] = Key[j], then the process may store Value[j] to Vout[i], where Value[j] is the value associated with the key Key[j], and Vout[i] is the entry in position i of the output vector. The comparison process may be repeated for each key input in the key input vector. Thus, the processor may determine a subset of the plurality of value elements. Each element in the subset of value elements may be associated with a key element that matches a key input element. The processor may then store each of the subset of value elements to an output vector. Each value element may be stored to a position having the same offset as the offset of the key input element that matches an associated key.[0032] In block 240 of Figure 2, the processor may compare each key input element from the key input vector to one or more key elements of the key vector. The comparison of key input elements to key elements may produce a subset of key elements that match a key input element. In certain circumstances, each key element may match a key input or no key elements may match a key input. Then the subset may include all of the key elements or none of the key elements. The comparison of the first key input to elements of the key vector may be performed in parallel using SIMD architecture. For example, the execution unit may substantially simultaneously provide each key input element to one or more digital comparators coupled to a register storing the key input elements. The execution unit may also provide one of the key elements to each of the digital comparators such that each possible pair of a key input element and a key element is coupled to at least one digital comparator. Each of the digital comparators may then output a binary value indicating if the two inputs are equal. The digital comparators may then generate a set of outputs that indicate if each key input element is equal to a key element. Key input elements may be said to match a key if they are equal. In some embodiments, the processor may perform a series of SIMD instructions to compare the key input elements to the key elements. For example, for each key input element the processor may perform an instruction to compare the key input element to each element of the key vector. The processor may repeat the comparison for each key input element to determine if key input elements match any keys.[0033] In block 250, the method may continue by generating a permute index vector based on matches between the key elements and the key input elements.
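For reference only, the behavior of blocks 210-260 can be summarized by a short scalar model in C. This is an illustrative sketch of the instruction's semantics rather than the hardware implementation described herein; the element count VLEN, the function name kv_lookup, and the two sentinel constants are assumptions chosen for the example, and the comparisons that the model performs in a loop are performed in parallel across vector lanes by the hardware described above.

    #include <stdint.h>
    #include <stddef.h>

    #define VLEN 4                  /* elements per vector register (illustrative)        */
    #define MASK_INDEX 0xFFu        /* sentinel marking "no matching key" in the index    */
    #define MASK_VALUE 0xFFFFFFFFu  /* mask stored to the output for unmatched key inputs */

    /* Scalar model of the key value lookup: for each key input, find the matching
     * key, if any, and copy the associated value to the same position of the output. */
    static void kv_lookup(const uint32_t key[VLEN], const uint32_t value[VLEN],
                          const uint32_t key_in[VLEN], uint32_t out[VLEN])
    {
        uint8_t index[VLEN];  /* permute index vector of block 250 */

        for (size_t i = 0; i < VLEN; i++) {
            index[i] = MASK_INDEX;
            /* Block 240: compare key input i against every key element; the
             * hardware performs these comparisons in parallel across lanes. */
            for (size_t j = 0; j < VLEN; j++) {
                if (key_in[i] == key[j]) {
                    index[i] = (uint8_t)j;  /* offset of the matching key            */
                    break;                  /* keys are unique, so at most one match */
                }
            }
            /* Block 260: permute the value vector through the index, or emit a mask. */
            out[i] = (index[i] == MASK_INDEX) ? MASK_VALUE : value[index[i]];
        }
    }

Keeping the index generation separate from the final selection mirrors the two SIMD steps illustrated below with reference to Figures 4A and 4B.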
In some embodiments, if a key input element does not match any key element, the processor may store a mask element into a position of the permute index vector associated with the key input element. For example, if a key input value in the Nth position of a key input vector matches a key in the Mth position of a key vector, the processor may store a value of M in the Nth position of the permute index vector. The permute index vector may be generated based on performing this operation for each position i in the key input vector to generate an entry in each position i of the permute index vector. If there is a key input that does not match any key during comparison, a mask element may be stored to the corresponding position in the permute index vector. An example process for generating a permute index vector is described further below with reference to Figure 4A.[0034] In block 260, the method continues by performing a vector permute instruction on the value vector using the permute index vector to generate an output vector. Performing the vector permute instruction pulls a value into each position of the output vector from a position in the value vector indicated by an entry in the permute index vector. For example, for position i in the output vector, the processor may store a value Value[Index[i]], where Index[i] is the value stored in position i of the permute index vector, and Value[Index[i]] is the value stored in position Index[i] of the value vector. The permute index vector correlates each element of the key input vector with a position of a matching key in the key vector. For example, the permute index vector may include a value at a position offset from a base position of the permute index vector equal to the offset of a corresponding key input element from a base position of the key input vector. The value of the permute index at that position may reference a position of a key that matches the key input element. For example, that value may be an integer referencing the offset of the matching key value from a base position of the key vector. Thus, the permute index vector indicates to the processor an element of the value vector corresponding to the matching key element for each element of the key input vector. Thus, performing the permute instruction by the processor using the permute index vector on the value vector uses the correlation between pairs of keys and values to pass values to the output vector according to the key inputs in the key input vector. For example, for each element in the permute index vector, the processor may select a value element having a position offset from the base position of the value vector equal to the value of the element of the permute index vector. Then the processor stores the value element to a position in an output vector having the same offset in the output vector as the permute element does in the permute index vector.[0035] Figure 3 illustrates an example of registers of a processor during performance of a SIMD key value lookup instruction, according to an embodiment. The SIMD instruction is an example of an implementation of the method described in Figure 2. In the example of Figure 3, the SIMD instruction operates on memory registers with 4 memory elements. In other implementations, the SIMD register may include 8 memory elements, 16 memory elements, or another number of memory elements.[0036] Key register 300 may include an array of elements 301-304 that store key values of a key vector.
For consistency, the registers illustrated in Figures 3, 4A and 4B are shown with the least significant element on the left and the most significant element on the right. For example, the value in element 301 represents position 0 in the register, the value in element 302 represents position 1 having an offset of 1 from the base position of the register, the value in element 303 represents position 2 having an offset of 2 from the base position of the register, and the value in element 304 represents position 3 having an offset of 3 from the base position of the register. A processor may receive the key vector 300, a value vector 310, a key input vector 320, and an instruction to perform a key value lookup from software operating on the processor.[0037] Value vector 310 is an array of elements 311-314 that store values each associated with a key in the key vector. For example, the value in element 314 corresponds to the key in element 304, the value in element 313 corresponds to the key in element 303, the value in element 312 corresponds to the key in element 302, and the value in element 311 corresponds to the key in element 301.[0038] Key input vector 320 may include a set of key inputs to compare to keys in the key vector. The processor may perform a lookup for a corresponding value from value vector 310 for each key input in key input vector 320. Output vector 330 is an output of values associated with keys that match a key input. Output vector 330 may also include one or more mask elements for a key input that does not match any of the keys. For example, key input element 321 has a value 900 that matches a key in element 304 of key vector 300. Therefore, the value 754 from the corresponding element 314 in the value vector 310 is stored into the output vector 330 at element 331 corresponding to key input vector element 321. A similar process is used to generate elements 332 and 333 of the output vector 330.[0039] Element 334 of output vector 330 has a mask entry instead of a value from value vector 310. This mask entry may be generated when a key input does not match any key in the key vector 300. For example, key input element 324 has a value 4 that does not match a key in key vector 300. Therefore, there is no value in value vector 310 corresponding to the key input. Thus, a mask is stored to the corresponding position 334 of output vector 330. In some embodiments, various masks may be used to indicate that there is no match to a key input. For example, the mask may be a value that is not used by values in the value vector 310. For example, the mask may be a negative value if the value vector 310 contains only positive numbers. In some embodiments, the mask may be a value with a binary '1' for every bit. The mask may also be represented by a binary '1' or '0' at a particular bit of the value.[0040] Figures 4A and 4B illustrate an example of registers of a processor during performance of a SIMD key value lookup instruction, according to an embodiment. The SIMD instruction is an example of an implementation of the method described in Figure 2. In particular, Figure 4A illustrates SIMD instructions generating a permute index vector for use in a key value lookup instruction and Figure 4B illustrates an SIMD permute instruction applying the permute index vector to complete the key value lookup instruction. For example, the registers in Figures 4A and 4B may be generated while performing the processes described with reference to Figure 2.
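As a small illustration of the mask behavior described for element 334, the scalar model sketched earlier can be exercised with an assumed set of operands; the numbers below are arbitrary and are not the values shown in Figure 3, and the final key input is deliberately chosen so that it matches no key.

    /* Illustrative operands: sorted, unique keys paired by position with values.
     * All numbers are assumed for this example and do not come from Figure 3. */
    static void lookup_example(void)
    {
        const uint32_t keys[VLEN]       = { 10, 20, 30, 40 };
        const uint32_t values[VLEN]     = { 100, 200, 300, 400 };
        const uint32_t key_inputs[VLEN] = { 40, 10, 30, 7 };  /* 7 matches no key */
        uint32_t out[VLEN];

        kv_lookup(keys, values, key_inputs, out);
        /* out is { 400, 100, 300, MASK_VALUE }; the last lane is masked because
         * the key input 7 has no counterpart in the key vector. */
    }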
In the example of Figures 4A and 4B, the SIMD instruction operates on memory registers with 4 memory elements. In other implementations, the SIMD register may include 8 memory elements, 16 memory elements, or another number of memory elements. In Figures 4A and 4B, key vector 300, key input vector 320, value vector 310, and output vector 330 are labelled the same as in Figure 3, and may represent the same registers.[0041] In Figure 4A, the processor compares each element of key vector 300 to each element of key input vector 320. For illustration purposes, the results of the comparison are represented in a grid 406 with a binary value '0' representing that a corresponding key input and key do not match and a binary value '1' representing that a corresponding key input and key match. For example, entry 402 indicates that the value 900 in element 321 of key input vector 320 does not match a value 5 in element 301 of key vector 300. However, entry 404 indicates that the value 5 in element 322 of key input vector 320 matches a value 5 in element 301 of key vector 300. The remaining entries in grid 406 are generated based on comparisons of values in key input vector 320 and key vector 300. The results of the comparison are used to generate a permute index vector 410. For example, the entry 404 indicates that key input 322 matches a key in element 301 of the key vector 300. Because the key input element 322 matches an entry in the least significant position of the key vector 300, the processor may store a value of zero in the corresponding entry 412 in permute index vector 410. The remaining entries in the permute index vector may be established in the same manner to generate permute index vector 410 shown in Figure 4A. In some embodiments, the processor may determine an entry for a position in the permute index vector 410 by comparing a key input element from key input vector 320 to each key element in key vector 300 to generate a binary number for each key input. For example, the binary number generated for key input element 323 would be 0010 because it matches a key in position 2 of the key vector. The processor may then perform an operation to count leading zeros of the binary number in order to generate a number indicating the position of the match. For example, the binary number 0010 has two leading zeros, so the processor stores a value of 2 to element 413 of the permute index vector. Comparisons of key input elements of a key input vector 320 to keys in a key vector 300 may be performed in parallel such that elements of the grid 406 are generated substantially simultaneously by the SIMD instruction. For example, the comparisons may be performed as discussed with reference to block 240 of Figure 2. In some embodiments, the comparisons may be performed in hardware of the processor and the outputs in the grid 406 in Figure 4A may be an output from hardware elements of the processor and provided to additional hardware elements to generate the permute index vector 410. In some embodiments, multiple SIMD instructions are used to generate the permute index vector 410. For example, a first SIMD instruction may generate the entries in grid 406, which may be represented as an intermediate vector such that the cells in the row adjacent to a key input of key input vector 320 are stored in a cell of the intermediate vector. For example, in the example of Figure 4A, the grid may be represented as a vector [0001, 1000, 0010, 0000] with four cells, each corresponding to a key input.
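A scalar sketch of this two-instruction approach follows, reusing the illustrative VLEN and MASK_INDEX definitions from the earlier model. The bit ordering is an assumption that matches the convention of Figure 4A: a match at key position j sets the bit located j places below the most significant bit of the intermediate element, so counting leading zeros recovers j, and an all-zero element yields a mask.

    /* Model of the grid-then-count-leading-zeros approach of Figure 4A
     * (illustrative only; the hardware evaluates the grid in parallel). */
    static void build_permute_index(const uint32_t key[VLEN],
                                    const uint32_t key_in[VLEN],
                                    uint8_t index[VLEN])
    {
        for (size_t i = 0; i < VLEN; i++) {
            /* First step: build the intermediate element, e.g. 0010 for a
             * match at key position 2 when VLEN is 4. */
            unsigned match = 0;
            for (size_t j = 0; j < VLEN; j++)
                if (key_in[i] == key[j])
                    match |= 1u << (VLEN - 1 - j);

            /* Second step: count leading zeros within the VLEN-bit field; an
             * all-zero element means no key matched, so a mask is stored. */
            unsigned lz = 0;
            while (lz < VLEN && !(match & (1u << (VLEN - 1 - lz))))
                lz++;
            index[i] = (lz < VLEN) ? (uint8_t)lz : MASK_INDEX;
        }
    }

A vector permute through the resulting index, as in the earlier sketch, then produces the output vector of Figure 4B.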
A second SIMD instruction may generate the permute index vector 410 by counting leading zeros of the entries in the intermediate vector.[0042] After a permute index vector 410 is generated as discussed with reference to Figure 4A, it may be used by the processor to perform an SIMD permute instruction. In the permute instruction, an output vector 330 is formed by pulling a value from value vector 310 according to each entry in permute index vector 410. For example, a value in the Nth position of permute index vector 410 may point to a value in value vector 310 to store into the Nth position of output vector 330. Therefore, each entry i in output vector 330 has a value of Value[Index[i]], where Value[j] is the entry in position j of value vector 310, and Index[i] is the element in position i of permute index vector 410. Thus, for the example of Figure 4B, the processor stores a value of Value[Index[0]] = Value[3] = 754 into element 331 of the output vector 330. In the example, the processor also stores a value of Value[Index[1]] = Value[0] = 754 into element 332 of the output vector 330. In the example, the processor also stores a value of Value[Index[2]] = Value[2] = 21 into element 333 of the output vector 330. In the example, the processor also stores a mask value into element 334 of the output vector 330. In the example, the permute index vector 410 may include a mask value for the entry in element 414 because the key input in element 324 does not match a key in the key vector 300.[0043] Figure 5A is a block diagram illustrating a micro-architecture for a processor core 590 that implements a key value lookup instruction according to one embodiment. Specifically, processor core (also simply 'processor') 590 depicts an in-order architecture core and register renaming logic, out-of-order issue/execution logic to be included in a processor according to at least one embodiment of the disclosure. The embodiments described herein can be implemented in processor 590.[0044] Processor 590 includes a front end unit 530 coupled to an execution engine unit 550, and both are coupled to a memory unit 570. The processor 590 may include a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, processor 590 may include a special-purpose core, such as, for example, a network or communication core, compression engine, graphics core, or the like. In one embodiment, processor 590 may be a multi-core processor or may be part of a multi-processor system.[0045] The front end unit 530 includes a branch prediction unit 532 coupled to an instruction cache unit 534, which is coupled to an instruction translation lookaside buffer (TLB) 536, which is coupled to an instruction fetch unit 538, which is coupled to a decode unit 540. The decode unit 540 (also known as a decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions.
The decoder 540 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. The instruction cache unit 534 is further coupled to the memory unit 570. The decode unit 540 is coupled to a rename/allocator unit 552 in the execution engine unit 550.[0046] The execution engine unit 550 includes the rename/allocator unit 552 coupled to a retirement unit 554 and a set of one or more scheduler unit(s) 556. The scheduler unit(s) 556 represents any number of different schedulers, including reservation stations (RS), central instruction window, etc. The scheduler unit(s) 556 is coupled to the physical register file(s) unit(s) 558. Each of the physical register file(s) units 558 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, etc., status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. The physical register file(s) unit(s) 558 is overlapped by the retirement unit 554 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.).[0047] Generally, the architectural registers are visible from the outside of the processor or from a programmer's perspective. The registers are not limited to any known particular type of circuit. Various different types of registers are suitable as long as they are capable of storing and providing data as described herein. Examples of suitable registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. The retirement unit 554 and the physical register file(s) unit(s) 558 are coupled to the execution cluster(s) 560. The execution cluster(s) 560 includes a set of one or more execution units 562 and a set of one or more memory access units 564. The execution units 562 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and operate on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).[0048] While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions.
The scheduler unit(s) 556, physical register file(s) unit(s) 558, and execution cluster(s) 560 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 564). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.[0049] The set of memory access units 564 is coupled to the memory unit 570, which may include a data prefetcher 580, a data TLB unit 572, a data cache unit (DCU) 574, and a level 2 (L2) cache unit 576, to name a few examples. In some embodiments DCU 574 is also known as a first level data cache (L1 cache). The DCU 574 may handle multiple outstanding cache misses and continue to service incoming stores and loads. It also supports maintaining cache coherency. The data TLB unit 572 is a cache used to improve virtual address translation speed by mapping virtual and physical address spaces. In one exemplary embodiment, the memory access units 564 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 572 in the memory unit 570. The L2 cache unit 576 may be coupled to one or more other levels of cache and eventually to a main memory.[0050] In one embodiment, the data prefetcher 580 speculatively loads/prefetches data to the DCU 574 by automatically predicting which data a program is about to consume. Prefetching may refer to transferring data stored in one memory location (e.g., position) of a memory hierarchy (e.g., lower level caches or memory) to a higher-level memory location that is closer (e.g., yields lower access latency) to the processor before the data is actually demanded by the processor. More specifically, prefetching may refer to the early retrieval of data from one of the lower level caches/memory to a data cache and/or prefetch buffer before the processor issues a demand for the specific data being returned.[0051] The processor 590 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA). The processor 590 may support an SIMD key value lookup instruction.[0052] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).[0053] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture.
While the illustrated embodiment of the processor also includes separate instruction and data cache units and a shared L2 cache unit, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.[0054] Figure 5B is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline implemented by processor 590 of Figure 5A according to some embodiments of the disclosure. The solid lined boxes in Figure 5B illustrate an in-order pipeline, while the dashed lined boxes illustrate a register renaming, out-of-order issue/execution pipeline. In Figure 5B, a processor pipeline 500 includes a fetch stage 502, a length decode stage 504, a decode stage 506, an allocation stage 508, a renaming stage 510, a scheduling (also known as a dispatch or issue) stage 512, a register read/memory read stage 514, an execute stage 516, a write back/memory write stage 518, an exception handling stage 522, and a commit stage 524. In some embodiments, the ordering of stages 502-524 may be different than illustrated and is not limited to the specific ordering shown in Figure 5B.[0055] Figure 6 illustrates a block diagram of the micro-architecture for a processor 600 that includes logic circuits to perform a key value lookup instruction according to one embodiment. In some embodiments, an instruction in accordance with one embodiment can be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as datatypes, such as single and double precision integer and floating point datatypes. In one embodiment the in-order front end 601 is the part of the processor 600 that fetches instructions to be executed and prepares them to be used later in the processor pipeline. The embodiments described herein can be implemented in processor 600.[0056] The front end 601 may include several units. In one embodiment, the instruction prefetcher 616 fetches instructions from memory and feeds them to an instruction decoder 618 which in turn decodes or interprets them. For example, in one embodiment, the decoder decodes a received instruction into one or more operations called "micro-instructions" or "micro-operations" (also called micro ops or uops) that the machine can execute. In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that are used by the micro-architecture to perform operations in accordance with one embodiment. In one embodiment, the trace cache 630 takes decoded uops and assembles them into program ordered sequences or traces in the uop queue 634 for execution. When the trace cache 630 encounters a complex instruction, the microcode ROM 632 provides the uops needed to complete the operation.[0057] Some instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete an instruction, the decoder 618 accesses the microcode ROM 632 to do the instruction.
For one embodiment, an instruction can be decoded into a small number of micro ops for processing at the instruction decoder 618. In another embodiment, an instruction can be stored within the microcode ROM 632 should a number of micro-ops be needed to accomplish the operation. The trace cache 630 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the micro-code sequences to complete one or more instructions in accordance with one embodiment from the micro-code ROM 632. After the microcode ROM 632 finishes sequencing micro-ops for an instruction, the front end 601 of the machine resumes fetching micro-ops from the trace cache 630.[0058] The out-of-order execution engine 603 is where the instructions are prepared for execution. The out-of-order execution logic has a number of buffers to smooth out and re-order the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logic registers onto entries in a register file. The allocator also allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: memory scheduler, fast scheduler 602, slow/general floating point scheduler 604, and simple floating point scheduler 606. The uop schedulers 602, 604, 606, determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. The fast scheduler 602 of one embodiment can schedule on each half of the main clock cycle while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.[0059] Register files 608, 610, sit between the schedulers 602, 604, 606, and the execution units 612, 614, 616, 618, 620, 622, 624 in the execution block 611. There is a separate register file 608, 610, for integer and floating point operations, respectively. Each register file 608, 610, of one embodiment also includes a bypass network that can bypass or forward just completed results that have not yet been written into the register file to new dependent uops. The integer register file 608 and the floating point register file 610 are also capable of communicating data with the other. For one embodiment, the integer register file 608 is split into two separate register files, one register file for the low order 32 bits of data and a second register file for the high order 32 bits of data. The floating point register file 610 of one embodiment has 128 bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.[0060] The execution block 611 contains the execution units 612, 614, 616, 618, 620, 622, 624, where the instructions are actually executed. This section includes the register files 608, 610, that store the integer and floating point data operand values that the micro-instructions need to execute. The processor 600 of one embodiment is comprised of a number of execution units: address generation unit (AGU) 612, AGU 614, fast ALU 616, fast ALU 618, slow ALU 620, floating point ALU 622, floating point move unit 624.
For one embodiment, the floating point execution blocks 622, 624, execute floating point, MMX, SIMD, and SSE, or other operations. The floating point ALU 622 of one embodiment includes a 64 bit by 64 bit floating point divider to execute divide, square root, and remainder micro-ops. For embodiments of the present disclosure, instructions involving a floating point value may be handled with the floating point hardware.[0061] In one embodiment, the ALU operations go to the high-speed ALU execution units 616, 618. The fast ALUs 616, 618, of one embodiment can execute fast operations with an effective latency of half a clock cycle. For one embodiment, most complex integer operations go to the slow ALU 620 as the slow ALU 620 includes integer execution hardware for long latency type of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 612, 614. For one embodiment, the integer ALUs 616, 618, 620, are described in the context of performing integer operations on 64 bit data operands. In alternative embodiments, the ALUs 616, 618, 620, can be implemented to support a variety of data bits including 16, 32, 128, 256, etc. Similarly, the floating point units 622, 624, can be implemented to support a range of operands having bits of various widths. For one embodiment, the floating point units 622, 624, can operate on 128 bits wide packed data operands in conjunction with SIMD and multimedia instructions.[0062] In one embodiment, the uop schedulers 602, 604, 606, dispatch dependent operations before the parent load has finished executing. As uops are speculatively scheduled and executed in processor 600, the processor 600 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations need to be replayed and the independent ones are allowed to complete. The schedulers and replay mechanism of one embodiment of a processor are also designed to catch instruction sequences for text string comparison operations.[0063] The processor 600 also includes logic to implement a key value lookup instruction according to one embodiment. In one embodiment, the execution block 611 of processor 600 may include a microcontroller (MCU), to perform a key value lookup instruction according to the description herein.[0064] The term "registers" may refer to the on-board processor storage locations that are used as part of instructions to identify operands. In other words, registers may be those that are usable from the outside of the processor (from a programmer's perspective). However, the registers of an embodiment should not be limited in meaning to a particular type of circuit. Rather, a register of an embodiment is capable of storing and providing data, and performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store thirty-two bit integer data.
A register file of one embodiment also contains eight or sixteen multimedia SIMD registers for packed data.[0065] For the discussions herein, the registers are understood to be data registers designed to hold packed data, such as 64 bits wide MMX™ registers (also referred to as 'mm' registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128 bits wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology can also be used to hold such packed data operands. In one embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one embodiment, integer and floating point are either contained in the same register file or different register files. Furthermore, in one embodiment, floating point and integer data may be stored in different registers or the same registers.[0066] Embodiments may be implemented in many different system types. Referring now to Figure 7, shown is a block diagram of a multiprocessor system 700 in accordance with an implementation. As shown in Figure 7, multiprocessor system 700 is a point-to-point interconnect system, and includes a first processor 770 and a second processor 780 coupled via a point-to-point interconnect 750. As shown in Figure 7, each of processors 770 and 780 may be multicore processors, including first and second processor cores, although potentially many more cores may be present in the processors. The processors each may include hybrid write mode logics in accordance with an embodiment of the present disclosure. The embodiments described herein can be implemented in the processor 770, processor 780, or both.[0067] While shown with two processors 770, 780, it is to be understood that the scope of the present disclosure is not so limited. In other implementations, one or more additional processors may be present in a given processor.[0068] Processors 770 and 780 are shown including integrated memory controller units (IMCs) 772 and 782, respectively. Processor 770 also includes as part of its bus controller units point-to-point (P-P) interfaces 776 and 778; similarly, second processor 780 includes P-P interfaces 786 and 788. Processors 770, 780 may exchange information via a point-to-point (P-P) interface 750 using P-P interface circuits 778, 788. As shown in Figure 7, IMCs 772 and 782 couple the processors to respective memories, namely a memory 732 and a memory 734, which may be portions of main memory locally attached to the respective processors.[0069] Processors 770, 780 may each exchange information with a chipset 790 via individual P-P interfaces 752, 754 using point to point interface circuits 776, 794, 786, 798. Chipset 790 may also exchange information with a high-performance graphics circuit 738 via a high-performance graphics interface 739.[0070] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.[0071] Chipset 790 may be coupled to a first bus 716 via an interface 796.
In one embodiment, first bus 716 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.[0072] As shown in Figure 7, various I/O devices 714 may be coupled to first bus 716, along with a bus bridge 718 which couples first bus 716 to a second bus 720. In one embodiment, second bus 720 may be a low pin count (LPC) bus. Various devices may be coupled to second bus 720 including, for example, a keyboard and/or mouse 722, communication devices 727 and a storage unit 728 such as a disk drive or other mass storage device which may include instructions/code and data 730, in one embodiment. Further, an audio I/O 724 may be coupled to second bus 720. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 7, a system may implement a multi-drop bus or other such architecture.[0073] Referring now to Figure 8, shown is a block diagram of a third system 800 in accordance with an embodiment of the present disclosure. Like elements in Figures 7 and 8 bear like reference numerals, and certain aspects of Figure 7 have been omitted from Figure 8 in order to avoid obscuring other aspects of Figure 8.[0074] Figure 8 illustrates that the processors 870, 880 may include integrated memory and I/O control logic ("CL") 872 and 882, respectively. For at least one embodiment, the CL 872, 882 may include integrated memory controller units such as described herein. In addition, CL 872, 882 may also include I/O control logic. Figure 8 illustrates that the memories 832, 834 are coupled to the CL 872, 882, and that I/O devices 814 are also coupled to the control logic 872, 882. Legacy I/O devices 815 are coupled to the chipset 890. The embodiments described herein can be implemented in processor 870, processor 880, or both.[0075] Figure 9 is an exemplary system on a chip (SoC) 900 that may include one or more of the cores 902. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.[0076] Figure 9 is a block diagram of a SoC 900 in accordance with an embodiment of the present disclosure. Dashed lined boxes are features on more advanced SoCs. In Figure 9 an interconnect unit(s) 902 is coupled to: an application processor 917 which includes a set of one or more cores 902A-N and shared cache unit(s) 906; a system agent unit 910; a bus controller unit(s) 916; an integrated memory controller unit(s) 914; a set of one or more media processors 920 which may include integrated graphics logic 908, an image processor 924 for providing still and/or video camera functionality, an audio processor 926 for providing hardware audio acceleration, and a video processor 928 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 930; a direct memory access (DMA) unit 932; and a display unit 940 for coupling to one or more external displays.
The embodiments described herein can be implemented in SoC 900.[0077] Turning next to Figure 10, an embodiment of a system on-chip (SoC) design in accordance with embodiments of the disclosure is depicted. As an illustrative example, SoC 1000 is included in user equipment (UE). In one embodiment, UE refers to any device to be used by an end-user to communicate, such as a hand-held phone, smartphone, tablet, ultra-thin notebook, notebook with broadband adapter, or any other similar communication device. A UE may connect to a base station or node, which can correspond in nature to a mobile station (MS) in a GSM network. The embodiments described herein can be implemented in SoC 1000.[0078] Here, SoC 1000 includes 2 cores, 1006 and 1007. Similar to the discussion above, cores 1006 and 1007 may conform to an Instruction Set Architecture, such as a processor having the Intel® Architecture Core™, an Advanced Micro Devices, Inc. (AMD) processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters. Cores 1006 and 1007 are coupled to cache control 1008 that is associated with bus interface unit 1009 and L2 cache 1010 to communicate with other parts of system 1000. Interconnect 1011 includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnects discussed above, which can implement one or more aspects of the described disclosure.[0079] Interconnect 1011 provides communication channels to the other components, such as a Subscriber Identity Module (SIM) 1030 to interface with a SIM card, a boot ROM 1035 to hold boot code for execution by cores 1006 and 1007 to initialize and boot SoC 1000, a SDRAM controller 1040 to interface with external memory (e.g. DRAM 1060), a flash controller 1045 to interface with non-volatile memory (e.g. Flash 1065), a peripheral control 1050 (e.g. Serial Peripheral Interface) to interface with peripherals, video codecs 1020 and video interface 1025 to display and receive input (e.g. touch enabled input), GPU 1015 to perform graphics related computations, etc. Any of these interfaces may incorporate aspects of the embodiments described herein.[0080] In addition, the system illustrates peripherals for communication, such as a Bluetooth module 1070, 3G modem 1075, GPS 1080, and Wi-Fi 1085. Note as stated above, a UE includes a radio for communication. As a result, these peripheral communication modules may not all be included. However, in a UE some form of a radio for external communication should be included.[0081] Figure 11 illustrates a diagrammatic representation of a machine in the example form of a computing system 1100 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The embodiments described herein can be implemented in computing system 1100.[0082] The computing system 1100 includes a processing device 1102, main memory 1104 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) (such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM)), etc.), a static memory 1106 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 1118, which communicate with each other via a bus 1130.[0083] Processing device 1102 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computer (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 1102 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. In one embodiment, processing device 1102 may include one or more processor cores. The processing device 1102 is configured to execute the processing logic 1126 for performing the operations discussed herein. In one embodiment, processing device 1102 can be part of a computing system. Alternatively, the computing system 1100 can include other components as described herein. It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).[0084] The computing system 1100 may further include a network interface device 1108 communicably coupled to a network 1120. The computing system 1100 also may include a video display unit 1110 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1112 (e.g., a keyboard), a cursor control device 1114 (e.g., a mouse), a signal generation device 1116 (e.g., a speaker), or other peripheral devices. Furthermore, computing system 1100 may include a graphics processing unit 1122, a video processing unit 1128 and an audio processing unit 1132. In another embodiment, the computing system 1100 may include a chipset (not illustrated), which refers to a group of integrated circuits, or chips, that are designed to work with the processing device 1102 and controls communications between the processing device 1102 and external devices.
For example, the chipset may be a set of chips on a motherboard that links the processing device 1102 to very high-speed devices, such as main memory 1104 and graphic controllers, as well as linking the processing device 1102 to lower-speed peripheral buses of peripherals, such as USB, PCI or ISA buses.[0085] The data storage device 1118 may include a computer-readable storage medium 1124 on which is stored software 1126 embodying any one or more of the methodologies of functions described herein. The software 1126 may also reside, completely or at least partially, within the main memory 1104 as instructions 1126 and/or within the processing device 1102 as processing logic 1126 during execution thereof by the computing system 1100; the main memory 1104 and the processing device 1102 also constituting computer-readable storage media.[0086] The computer-readable storage medium 1124 may also be used to store instructions 1126 utilizing the processing device 1102, such as described with respect to Figure 2, and/or a software library containing methods that call the above applications. While the computer-readable storage medium 1124 is shown in an example embodiment to be a single medium, the term "computer-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "computer-readable storage medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present embodiments. The term "computer-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.[0087] The following examples pertain to further embodiments of the disclosure.[0088] Example 1 is a processor comprising: a first register to store a key vector comprising a plurality of key elements; a second register to store a value vector comprising a plurality of value elements associated with the key elements; an execution unit coupled to the first register and the second register, the execution unit to: compare a key input element of a key input vector to each key element of the key vector; and responsive to determining that the key input element matches a key element, generate an output vector comprising, in a position offset from a base position of the output vector equal to an offset of the key input element from a base position of the key input vector, a value element associated with the key element.
[0089] In Example 2, in the processor of Example 1, the execution unit is further to produce a permute index vector referencing key elements, wherein an entry in the permute index vector references an offset of the key element from a base position of the key vector and has an offset from the base position of the permute index vector equal to the offset of the key input element.[0090] In Example 3, in the processor of Example 2, to generate an output vector, the execution unit is to: identify the value element based on the value of the entry in the permute index vector, wherein the offset of the value element from a base element of the value vector is equal to the value of the entry in the permute index vector; and store the value element to the output vector in a position offset from the base position of the output vector equal to the offset of the entry from the base position of the permute index vector.[0091] In Example 4, in the processor of Example 2, the execution unit is to store, to the permute index vector, a mask value for a second key input element that does not match any key element.[0092] In Example 5, in the processor of Example 1, the first register stores the plurality of key elements in sorted order, wherein each key has a particular offset and comprises an integer value that is larger than the value of any key having a smaller offset from the base position of the key vector.[0093] In Example 6, in the processor of Example 1, the execution unit is further to compare each key element to each key input element in parallel.[0094] In Example 7, in the processor of Example 6, the processor further comprises a plurality of digital comparators coupled to the first register, wherein to compare each key element to each key input element comprises the execution unit to provide each of the plurality of key input elements and each of the plurality of key elements to the plurality of digital comparators.[0095] In Example 8, in the processor of Example 7, the processor further comprises a third register coupled to the plurality of digital comparators, wherein the third register is to store the key input vector.[0096] Example 9 is a processor comprising: a processor core; and a memory element coupled to the processor core, wherein the memory element comprises microcode to cause the processor core to: store a key vector comprising a plurality of key elements in a first register; store a value vector comprising a plurality of value elements in a second register, wherein each value element is associated with a key element; receive a key input vector comprising a plurality of key input elements; compare each key element to each key input element to determine a subset of key elements, wherein each key element in the subset of key elements matches at least one of the plurality of key input elements; and store a subset of value elements to a third register, wherein each value element in the subset of value elements in the third register is associated with a key element in the subset of key elements and is in a position offset from a base position of the third register equal to an offset of an associated key input element from a base position of the key input vector.[0097] In Example 10, in the processor of Example 9, the processor core is further to: generate a permute index vector based on key elements that match key input elements; and perform a vector permute operation using the permute index vector and the value vector.[0098] In Example 11, in the processor of Example 10, to generate a permute index, the
processor core is to store, to the permute index vector, an entry having an offset from a base position of the permute index equal to the offset of an associated key input element from the base position of the key input vector and having a value referencing the position of a key element that matches the key input element.[0099] In Example 12, in the processor of Example 10, to perform a vector permute operation, the processor is to: identify a value element based on a value of an entry in the permute index vector, wherein the offset of the value element from a base element of the value vector is equal to the value of the entry in the permute index vector; and store the value element to the third register in a position offset from the base position of the third register equal to the offset of the entry in the permute index vector from the base position of the permute index vector.[00100] In Example 13, in the processor of Example 10, the processor core is further to provide a mask value to the permute index vector in response to determining that a key input element of the key input vector does not match any key element in the key vector.[00101] In Example 14, in the processor of Example 9, the processor core is further to compare each key element to each key input element in parallel using a single instruction multiple data register.[00102] Example 15 is a method comprising: storing a key vector comprising a plurality of key elements to a first processor register; storing a value vector comprising a plurality of value elements to a second processor register, wherein each value element is associated with a key element; receiving a plurality of key input elements; comparing, by a processor, each key input element to each key element to determine a subset of key elements, wherein each key element in the subset of key elements matches one of the key input elements; determining a subset of the plurality of value elements, wherein each element in the subset of value elements is associated with one of the key elements in the subset of key elements; and storing, by the processor, each element in the subset of the plurality of value elements in a position in a third register offset from a base position of the third register equal to an offset of an associated key input element from a base position of the key input vector.
[00103] In Example 16, the method of Example 15 further comprises generating, by the processor, a permute index vector having entries referencing the position in the value vector of value elements associated with key elements in the subset of key elements, wherein each entry in the permute index vector has an offset from a base position of the permute index vector equal to an offset of an associated key input element from a base position of the key input vector.[00104] In Example 17, in the method of Example 16, storing the subset of value elements comprises: identifying, by the processor, a value element based on a value of an entry in the permute index vector; and storing the value element to the third register in a position offset from the base position of the third register equal to the offset of the entry in the permute index vector from the base position of the permute index vector.[00105] In Example 18, the method of Example 16 further comprises storing a mask value to the permute index vector in response to determining that a key input element does not match any key element, wherein the position of the mask value in the permute index vector has an offset equal to an offset of the key input element.[00106] In Example 19, the method of Example 15 further comprises storing a mask value to the third register in response to determining that a key input element does not match any key element, wherein the position of the mask value in the third register has an offset equal to an offset of the key input element.[00107] In Example 20, in the method of Example 15, comparing each element of the key input vector to each element of the key vector is performed in parallel using vector registers.[00108] Example 21 is a machine readable medium including code, when executed, to cause a machine to perform the method of any one of Examples 15 to 20.[00109] Example 22 is an apparatus comprising means for performing the method of any one of Examples 15 to 20.[00110] Example 23 is an apparatus comprising a processor configured to perform the method of any one of Examples 15 to 20.[00111] Example 24 is an apparatus comprising: means for storing a key vector comprising a plurality of key elements to a first processor register and a value vector comprising a plurality of value elements to a second processor register, wherein each value element is associated with a key element; means for receiving a plurality of key input elements; means for comparing each key input element to each key element to determine a subset of key elements, wherein each key element in the subset of key elements matches one of the key input elements; means for determining a subset of the plurality of value elements, wherein each element in the subset of value elements is associated with one of the key elements in the subset of key elements; and means for storing each element in the subset of the plurality of value elements in a position in a third register offset from a base position of the third register equal to an offset of an associated key input element from a base position of the key input vector.[00112] In Example 25, in the apparatus of Example 24, the apparatus further comprises means for generating a permute index vector having entries referencing the position in the value vector of value elements associated with key elements in the subset of key elements, wherein each entry in the permute index vector has an offset from a base position of the permute index vector equal to an offset of an associated key input element from a base position
of the key input vector; means for identifying a value element based on a value of an entry in the permute index vector; and means for storing the value element to the third register in a position offset from the base position of the third register equal to the offset of the entry in the permute index vector from the base position of the permute index vector.[00113] In Example 26, in the apparatus of Example 24, the apparatus further comprises means for storing a mask value to the third register in response to determining that a key input element does not match any key element, wherein the position of the mask value in the third register has an offset equal to an offset of the key input element.[00114] In Example 27, in the apparatus of Example 24, the apparatus further comprises means for generating a permute index vector having entries referencing the position in the value vector of value elements associated with key elements in the subset of key elements, wherein each entry in the permute index vector has an offset from a base position of the permute index vector equal to an offset of an associated key input element from a base position of the key input vector.[00115] In Example 28, in the apparatus of Example 27, the apparatus further comprises means for identifying a value element based on a value of an entry in the permute index vector; and means for storing the value element to the third register in a position offset from the base position of the third register equal to the offset of the entry in the permute index vector from the base position of the permute index vector.[00116] Example 29 is a system comprising: a processor core; and a memory element coupled to the processor core, wherein the memory element comprises microcode to cause the processor core to: store a key vector comprising a plurality of key elements in a first register; store a value vector comprising a plurality of value elements in a second register, wherein each value element is associated with a key element; receive a key input vector comprising a plurality of key input elements; compare each key element to each key input element to determine a subset of key elements, wherein each key element in the subset of key elements matches at least one of the plurality of key input elements; and store a subset of value elements to a third register, wherein each value element in the subset of value elements in the third register is associated with a key element in the subset of key elements and in a position offset from a base position of the third register equal to an offset of an associated key input element from a base position of the key input vector.[00117] In Example 30, in the system of Example 29, the processor core is further to: generate a permute index vector based on key elements that match key input elements; and perform a vector permute operation using the permute index vector and the value vector.[00118] In Example 31, in the system of Example 30, to generate a permute index vector, the processor core is to store, to the permute index vector, an entry having an offset from a base position of the permute index vector equal to the offset of an associated key input element from the base position of the key input vector and having a value referencing the position of a key element that matches the key input element.[00119] In Example 32, in the system of Example 30, to perform a vector permute operation, the processor core is to: identify a value element based on a value of an entry in the permute index vector,
wherein the offset of the value element from a base element of the value vector is equal to the value of the entry in the permute index vector; and store the value element to the third register in a position offset from the base position of the third register equal to the offset of the entry in the permute index vector from the base position of the permute index vector.[00120] In Example 33, in the system of Example 30, the processor core is further to provide a mask value to the permute index vector in response to determining that a key input element of the key input vector does not match any key element in the key vector.[00121] In Example 34, in the system of Example 29, the processor core is further to compare each key element to each key input element in parallel using a single instruction, multiple data (SIMD) register.[00122] While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.[00123] In the description herein, numerous specific details are set forth, such as examples of specific types of processors and system configurations, specific hardware structures, specific architectural and microarchitectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, specific processor pipeline stages and operation, etc., in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present invention. In other instances, well known components or methods, such as specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expression of algorithms in code, specific power down and gating techniques/logic, and other specific operational details of computer systems have not been described in detail in order to avoid unnecessarily obscuring the present invention.[00124] The embodiments are described with reference to implementing a key value lookup instruction in specific integrated circuits, such as in computing platforms or microprocessors. The embodiments may also be applicable to other types of integrated circuits and programmable logic devices. For example, the disclosed embodiments are not limited to desktop computer systems or portable computers, such as Intel® Ultrabooks™ computers, and may also be used in other devices, such as handheld devices, tablets, other thin notebooks, systems on a chip (SoC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. It is described that the system can be any kind of computer or embedded system.
The disclosed embodiments may especially be used for low-end devices, like wearable devices (e.g., watches), electronic implants, sensory and control infrastructure devices, controllers, supervisory control and data acquisition (SCADA) systems, or the like. Moreover, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency. As will become readily apparent in the description below, the embodiments of methods, apparatuses, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are vital to a 'green technology' future balanced with performance considerations.[00125] Although the embodiments herein are described with reference to a processor, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments of the present invention can be applied to other types of circuits or semiconductor devices that can benefit from higher pipeline throughput and improved performance. The teachings of embodiments of the present invention are applicable to any processor or machine that performs data manipulations. However, the present invention is not limited to processors or machines that perform 512 bit, 256 bit, 128 bit, 64 bit, 32 bit, or 16 bit data operations and can be applied to any processor and machine in which manipulation or management of data is performed. In addition, the description herein provides examples, and the accompanying drawings show various examples for the purposes of illustration. However, these examples should not be construed in a limiting sense as they are merely intended to provide examples of embodiments of the present invention rather than to provide an exhaustive list of all possible implementations of embodiments of the present invention.[00126] Although the examples herein describe instruction handling and distribution in the context of execution units and logic circuits, other embodiments of the present disclosure can be accomplished by way of data or instructions stored on a machine-readable, tangible medium, which when performed by a machine cause the machine to perform functions consistent with at least one embodiment of the disclosure. In one embodiment, functions associated with embodiments of the present invention are embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor that is programmed with the instructions to perform the steps of the present disclosure. Embodiments of the present disclosure may be provided as a computer program product or software which may include a machine or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform one or more operations according to embodiments of the present disclosure. Alternatively, operations of embodiments of the present disclosure might be performed by specific hardware components that contain fixed-function logic for performing the operations, or by any combination of programmed computer components and fixed-function hardware components.[00127] Instructions used to program logic to perform embodiments of the disclosure can be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage.
Furthermore, the instructions can be distributed via a network or by way of other computer readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), and magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).[00128] A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage such as a disc may be the machine readable medium to store information transmitted via optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present invention.[00129] A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a micro-controller, associated with a non-transitory medium to store code adapted to be executed by the micro-controller. Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium.
Often, module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices.[00130] Use of the phrase 'configured to,' in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still 'configured to' perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate 'configured to' provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term 'configured to' does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.[00131] Furthermore, use of the phrases 'to,' 'capable of/to,' and/or 'operable to,' in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that use of to, capable to, or operable to, in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of an apparatus in a specified manner.[00132] A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.[00133] Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e. reset, while an updated value potentially includes a low logical value, i.e. set.
Note that any combination of values may be utilized to represent any number of states.[00134] The embodiments of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage medium; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory mediums that may receive information therefrom.[00135] Instructions used to program logic to perform embodiments of the invention may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), and magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).[00136] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.[00137] In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.[00138] Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is, here and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like. The blocks described herein can be hardware, software, firmware or a combination thereof.[00139] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as "defining," "receiving," "determining," "issuing," "linking," "associating," "obtaining," "authenticating," "prohibiting," "executing," "requesting," "communicating," or the like, refer to the actions and processes of a computing system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computing system's registers and memories into other data similarly represented as physical quantities within the computing system memories or registers or other such information storage, transmission or display devices.[00140] The words "example" or "exemplary" are used herein to mean serving as an example, instance or illustration. Any aspect or design described herein as "example" or "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words "example" or "exemplary" is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise, or clear from context, "X includes A or B" is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then "X includes A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term "an embodiment" or "one embodiment" or "an implementation" or "one implementation" throughout is not intended to mean the same embodiment or implementation unless described as such. Also, the terms "first," "second," "third," "fourth," etc.
as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation. |
An apparatus comprising: a memory array comprising memory cells, the memory array configured to store data and corresponding error correction codes; an error correction encoder/decoder (290) configured to generate error correction codes, to identify errors in the data, and to correct errors in the data; a reference circuit (240) configured to generate a reference signal; a self-reference circuit (230) configured to receive a value read from a selected memory cell of the memory array associated with a first read operation, and to generate a self-reference signal based on the received value; and a sense output circuit (226) configured to: perform a first comparison of the value read from the selected memory cell of the memory array associated with the first read operation with the reference signal; and in response to detecting an error which is uncorrectable via error correction codes, perform a second comparison of a value read from the selected memory cell of the memory array associated with a second read operation with the self-reference signal, the second read operation occurring subsequent to the first read operation. |
1. An apparatus comprising: a memory array comprising memory cells, the memory array configured to store data and corresponding error correction codes; an error correction encoder/decoder (290) configured to generate error correction codes, to identify errors in the data, and to correct errors in the data; a reference circuit (240) configured to generate a reference signal; a self-reference circuit (230) configured to receive a value read from a selected memory cell of the memory array associated with a first read operation, and to generate a self-reference signal based on the received value; and a sense output circuit (226) configured to: perform a first comparison of the value read from the selected memory cell of the memory array associated with the first read operation with the reference signal; and in response to detecting an error which is uncorrectable via error correction codes, perform a second comparison of a value read from the selected memory cell of the memory array associated with a second read operation with the self-reference signal, the second read operation occurring subsequent to the first read operation.
2. The apparatus of claim 1, wherein a first latency in providing valid data read from the memory cells to a processor associated with the first comparison is shorter than a second latency in providing valid data read from the same memory cells to the processor in the second comparison.
3. The apparatus of claim 1, wherein a first latency in providing valid data read from the memory cells to a processor associated with the first comparison is the same as a second latency in providing valid data read from the same memory cells to the processor in the second comparison.
4. The apparatus of claim 1, wherein the memory cells comprise magnetoresistive random access memory (MRAM) cells.
5. The apparatus of claim 1, wherein the second comparison further comprises re-programming the selected memory cell when comparing the value read from the selected memory cell to another value read from the selected memory cell indicates that the compared values correspond to different states of the selected memory cell.
6. The apparatus of claim 1, wherein the error correction encoder/decoder is further configured to correct correctable errors in the data read from the memory cells via decoding of the error correction codes.
7. The apparatus of claim 1, further comprising a processor, wherein the sense output circuit is further configured to provide data from the first comparison to the processor in response to determining that the errors in the data are correctable via the error correction codes.
8. A system comprising: a memory array comprising memory cells, the memory array configured to store data and corresponding error correction codes; an error correction encoder/decoder (290) configured to generate error correction codes, to identify errors in the data, and to correct errors in the data; a reference circuit (240) configured to generate a reference signal; a self-reference circuit (230) configured to receive a value read from a selected memory cell of the memory array associated with a first read operation, and to generate a self-reference signal based on the received value; a sense output circuit (226) configured to: perform a first comparison of the value read from the selected memory cell of the memory array associated with the first read operation with the reference signal; and in response to detecting a condition associated with the first comparison, perform a second comparison of a value read from the
selected memory cell of the memory array associated with a second read operation with the self-reference signal, the second read operation occurring subsequent to the first read operation; and a processor to access data from the memory array with a variable latency, wherein accessing data associated with the first read operation has a shorter latency than accessing data associated with the second read operation.
9. The system of claim 8, wherein the condition indicates an error in the data read from the memory cells as being uncorrectable via the error correction codes.
10. The system of claim 8, wherein the condition indicates a suspected error in the data read from the memory cells associated with the first read operation.
11. The system of claim 8, wherein the condition indicates that a number of errors in the data read from the memory cells associated with the first comparison is at least a threshold number of errors, the threshold number of errors being greater than 1.
12. The system of claim 8, wherein each of the memory cells comprises a memory element having a different resistance in a first state than in a second state.
13. The system of claim 12, wherein performing the second comparison comprises: programming the selected memory cell to the first state; and re-programming the selected memory cell to the second state when comparing the value read from the selected memory cell to another value read from the selected memory cell indicates that one of the compared values corresponds to the first state and the other of the compared values corresponds to the second state.
14. The system of claim 8, wherein the memory cells comprise magnetic tunnel junction spin torque transfer magnetoresistive random access memory (MTJ STT-MRAM) cells.
15. The system of claim 8, wherein the error correction encoder/decoder is configured to correct one or more errors in the data read from the memory cells via the error correction codes when the condition is not detected. |
Background
Technical Field
This disclosure generally relates to electronics, and, in particular, to memory devices.
Description of the Related Technology
Read errors can occur in various types of memories, such as magnetoresistive random access memories (MRAMs). MRAM is a form of non-volatile memory in which data can be stored by adjusting a resistance in a magnetic tunneling junction (MTJ) of a memory cell. For instance, the resistance of an MTJ can be switched between a high resistance state and a low resistance state. In an MRAM, a current-induced magnetic field can switch the magnetization of the MTJ to switch between states. Certain types of memories can encounter relatively high read error rates. Such error rates can be caused by several different sources, mechanisms, or non-uniformities in the memories. Due to non-uniformities in manufacturing, different memory cells in the same memory array may not be matched with each other. For instance, in some MRAMs that store binary states, the variability in the memory cells can cause a relatively high variation in the distribution of resistance for both the low resistance states and high resistance states for memory cells in the same memory array. Some ways of reading from an MRAM, such as a self-reference read, can encounter fewer errors but consume higher power and can also increase the latency for accessing data from the memory. Accordingly, a need exists for accurately and efficiently reading from memories, such as MRAMs.
Brief Description of the Drawings
These drawings and the associated description herein are provided to illustrate specific embodiments of the invention and are not intended to be limiting. Figure 1 is a flow diagram of an illustrative method of reading data from a memory according to an embodiment. Figure 2 is a schematic diagram of an illustrative memory according to an embodiment. To avoid repetition of description, components having the same or similar function may be referenced by the same reference number.
Detailed Description of Certain Embodiments
Although particular embodiments are described herein, other embodiments, including embodiments that do not provide all of the benefits and features set forth herein, will be apparent to those of ordinary skill in the art. As discussed above, memories can encounter read errors. For instance, MRAM cells can have a relatively small difference between resistances in different states, such as a high resistance state and a low resistance state. Variations in MRAMs and other memories can contribute to relatively high read error rates. For example, some magnetic tunnel junction spin torque transfer magnetoresistive random access memory (MTJ STT-MRAM) cells in the same memory array can have a relatively high distribution of resistances in both low resistance states and high resistance states. In certain instances, there can be MTJ STT-MRAM cells that have a low state resistance that overlaps with the distribution of high resistance states of other cells in the same memory array. Alternatively or additionally, variations in an effective resistance in a signal path can cause read errors. Variations in an access transistor in a memory cell and/or variations in the digit line resistance can cause variation in effective resistance in the signal path.
Read errors resulting from a variation in resistance of the signal path can even occur when the resistances of the MTJ cells in the same state are within a tight distribution. While the disclosure may describe examples in connection with MRAMs for illustrative purposes, the principles and advantages described herein may be applied to other suitable types of memory. The principles and advantages described herein can be applied to any memory in which there is a variation in parasitic resistances in memory cells and/or signal paths that can result in a read error. For example, any combination of features described herein can be applied to any memory cells that include a memory element that has different resistances in different states, which can be detected when determining data read from such memory cells. Some examples of memory cells that have memory elements with different resistances in different states include MRAM cells including STT-MRAM cells and orthogonal spin transfer magnetoresistive random access memory (ST-MRAM) cells, resistive random-access memory (RRAM or ReRAM) cells including conductive bridging random access memory (CBRAM) cells, ferroelectric random access memory (F-RAM) cells, conductive metal oxide memory (CMOx) cells, phase change memory (PCM or PRAM) cells, and the like. The state of an MRAM cell can be determined by comparing a value from a memory array to a reference value. The reference value may be obtained from a reference cell that is programmed to a state such that the reference cell returns a value between values associated with different states of a memory cell, such as a high resistance state and a low resistance state. Reading from an MRAM by comparing a value associated with a selected memory cell with a reference value can be referred to as a standard reference read. In certain instances, a single reference value may not be sufficient to accurately read from all of the memory cells, for example, due to the variations discussed above. Another way to determine a state of an MRAM cell, such as an MTJ STT-MRAM cell, is a self-reference read. Self-reference reads can reduce errors compared to standard reference reads. In a self-reference read, a memory cell is compared to itself. Self-reference reads can involve comparing a value read from a memory cell to another value read from the same memory cell. This can reduce and/or eliminate read errors that result from differences in cell-to-cell MTJ resistance and/or differences in resistances of signal paths associated with different cells in a memory array since the same cell and signal path are used in comparing memory cell resistance values. An example self-reference read can involve (1) performing a standard reference read from a memory cell, (2) programming the memory cell to a reference state, (3) reading the memory cell programmed at the reference state, and (4) comparing the values of the two separate reads from the memory cell with a differential sense amplifier. In this example, if the two values read from the memory cell are approximately the same, the memory cell is determined to be in the reference state. On the other hand, in this example, if the two values read from the memory cell are sufficiently different, the memory cell is in a non-reference state and the memory cell is subsequently re-written to the non-reference state. Self-reference reads can increase latency and power compared to standard reference reads.
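For illustration, the four-step self-reference read described above can be sketched as the following behavioral C model; the helper names (read_raw, program_cell, approximately_equal) and the choice of reference state are assumptions made for the sketch, not interfaces from this disclosure, and a real implementation compares analog sense signals rather than returned integers.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical hooks standing in for the analog read/program circuitry. */
extern uint32_t read_raw(size_t cell);                  /* single read of one cell */
extern void     program_cell(size_t cell, bool state);  /* write a binary state to one cell */
extern bool     approximately_equal(uint32_t a, uint32_t b);

#define REFERENCE_STATE false   /* assumed reference (e.g., low resistance) state */

/* Behavioral model of the example self-reference read:
 * (1) read the cell, (2) program it to the reference state, (3) read it again,
 * (4) compare the two reads; re-write the cell if it held the non-reference state. */
static bool self_reference_read(size_t cell)
{
    uint32_t first  = read_raw(cell);            /* step (1) */
    program_cell(cell, REFERENCE_STATE);         /* step (2) */
    uint32_t second = read_raw(cell);            /* step (3) */

    if (approximately_equal(first, second)) {    /* step (4): same -> cell was in reference state */
        return REFERENCE_STATE;
    }
    program_cell(cell, !REFERENCE_STATE);        /* restore the non-reference state */
    return !REFERENCE_STATE;
}
```

The extra program and read operations in this sequence are what account for the added latency and power discussed next.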
The latency between when data is requested and returned can be increased with a self-reference read compared to a single read because the self-reference read can involve more than one read operation and an additional programming operation. The additional programming and reading associated with a self-reference read can significantly increase power consumption compared to a single read. To accurately read from memory cells in a power-efficient manner, self-reference reads can be selectively performed under one or more conditions in which read errors are suspected to have occurred and/or likely to occur. As such, a combination of standard reference reads and self-reference reads can be performed to accurately read data from a memory and maintain relatively low power consumption for reading from the memory. Furthermore, in some instances, the average latency of accessing data from the memory can be reduced compared to performing only self-reference reads. Figure 1 is a flow diagram of an illustrative method 100 of reading data from a memory according to an embodiment. In the method 100, data is read from a memory, such as an MRAM, with a combination of standard reference reads and self-reference reads. At block 110, data can be read from a memory. The read at block 110 can involve a standard reference read. Alternatively, data can be read in accordance with any other suitable low latency, low power method. Data can be read from a single memory cell or a plurality of memory cells, for example, to read a codeword or a byte of data. A codeword is a combination of data and its corresponding error correction codes (ECC). The data and the corresponding ECC do not need to be adjacent in a storage device. The memory device can include an ECC encoder/decoder to perform error correction code encoding and decoding. More intensive reads, such as self-reference reads, which involve higher power consumption and/or a longer latency, can be performed in response to detecting a condition. While self-reference reads are described for illustrative purposes, the principles and advantages described herein can be applied to selectively performing any read operation with increased accuracy compared to a standard read operation, such as a standard reference read. For instance, any combination of features described herein with reference to a self-reference read can be applied to any read operation that involves multiple reads from the same memory cell. A condition for performing the self-reference read can be associated with the read at block 110. For example, it can be determined whether all error(s) in the read at block 110 are correctable via error correction codes (ECC). Examples of ECC include Hamming codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, and the like. ECC bits can be used to detect bits that fail the read at block 110 and/or codewords that are uncorrectable via ECC. In one embodiment, the process analyzes the data read from memory for errors at decision block 120 and attempts to correct errors using ECC. If no errors are detected at decision block 120, the data read from memory at block 110 can be provided to a processor at block 128. When errors are detected at block 120, the process initially attempts to correct the errors using the ECC at block 122. The read data can be corrected via ECC on the same die and/or chip as the memory. Alternatively or additionally, ECC correction can be performed external to the die and/or chip on which the memory is included.
However, when the number of errors is greater than the number of errors correctable by the ECC, the codeword is uncorrectable via ECC. At decision block 124, it is determined whether all errors are correctable by ECC. Detecting uncorrectable ECC errors at block 124 is one illustrative example of detecting a condition for which self-reference reads are performed. Self-reference reads can be performed in response to detecting a condition associated with a read from a memory. For instance, a self-reference read can be performed in response to detecting a condition indicative of at least one suspected error in data read from a memory. As another example, a self-reference read can be performed in response to detecting a condition indicative of data being read from memory having at least a threshold number of errors. As yet another example, a self-reference read can be performed in response to detecting a condition indicative of one or more memory cells having a relatively large variation in resistance in the memory cell and/or the signal path associated with reading from the memory cell. In some embodiments, a self-reference read is performed only in response to detecting a condition, such as one or more of the conditions described herein. For instance, according to one embodiment, self-reference reads are performed only in response to determining that data read from the memory is otherwise uncorrectable via ECC. Referring back to Figure 1, when it is determined at decision block 124 that all errors are corrected via ECC, the ECC-corrected data can be provided to a processor at block 128. In this way, data read from memory that is correctable via ECC can be provided to the processor with a relatively low power consumption and/or a relatively low latency. Memory cells associated with failing data digits can be validated after the read at block 110 without causing delay in providing the read data to the processor. In one embodiment, there can be more than one codeword in a data read. In some embodiments, ECC can be used to identify particular codewords having uncorrectable errors and perform a self-reference read on only the particular data digits and/or ECC digits of the identified codewords to validate the memory cells. Other suitable methods can be used to validate the memory cells. When it is determined that errors in data read from memory are not correctable via ECC at decision block 124, a self-reference read can be performed at block 126. Similarly, the self-reference read can be performed at block 126 in response to detecting a number of conditions associated with a read, for example, one or more of the conditions described herein. In this way, some reads from the memory involve a single read operation and other reads from the memory involve a plurality of read operations when a condition is detected. The self-reference read can involve the operations described above in connection with the example self-reference read. Any other suitable self-reference read operations can alternatively or additionally be performed. By performing a self-reference read, correct data can be read from the memory when data previously read from the same memory cells encountered an error that is uncorrectable via ECC alone. The self-reference read can be performed on memory cells associated with each digit of a codeword associated with the uncorrectable ECC errors. In certain embodiments, one or more errors in data read via the self-reference read can be further detected and corrected as necessary via ECC.
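The decision flow of blocks 110 through 128 might be summarized in C as follows; this is a hedged outline in which the helpers (standard_reference_read, ecc_decode, self_reference_read_codeword) are hypothetical placeholders for the hardware and ECC engine described in the text, not actual interfaces.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helpers standing in for the circuitry described in Figure 1. */
extern void standard_reference_read(size_t addr, uint8_t *codeword, size_t len);      /* block 110 */
extern void ecc_decode(uint8_t *codeword, size_t len,
                       bool *errors_found, bool *correctable);                        /* blocks 120-124 */
extern void self_reference_read_codeword(size_t addr, uint8_t *codeword, size_t len); /* block 126 */

/* Selective self-reference read policy of method 100: a low-power standard
 * reference read is tried first and ECC correction is applied when possible;
 * the slower, higher-power self-reference read is used only when the codeword
 * is uncorrectable via ECC. On return, codeword holds the data to be provided
 * to the processor (block 128). */
static void read_with_selective_self_reference(size_t addr, uint8_t *codeword, size_t len)
{
    bool errors_found, correctable;

    standard_reference_read(addr, codeword, len);            /* block 110 */
    ecc_decode(codeword, len, &errors_found, &correctable);  /* blocks 120-124 */

    if (!errors_found || correctable) {
        return;                                              /* block 128: provide data */
    }
    self_reference_read_codeword(addr, codeword, len);       /* block 126: fallback read */
    ecc_decode(codeword, len, &errors_found, &correctable);  /* optional re-check via ECC */
}
```

Because the fallback path is taken only rarely, the average latency and power stay close to those of a standard reference read, as discussed below.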
The data read from memory via a self-reference read at block 126 can be provided to a processor at block 128. Data can be provided to a processor at block 128 via a memory controller, for example. A self-reference read typically involves a longer latency for providing valid data than a single read operation such as a standard reference read. A memory controller receiving data read from the memory can detect and account for such a delay. In certain embodiments, selectively performing self-reference reads can result in some or all of the other memory accesses having a lower latency than the self-reference reads. This should reduce the average latency of memory accesses. By selectively performing a self-reference read, some or all of the other memory accesses can consume lower power than the self-reference read. Such a reduction in power consumption can be significant. As self-reference reads are performed less frequently, the reduction in power consumption and average latency increases. When the self-reference read is performed whenever error correction via ECC fails, the underlying bit fail rate should be as good as if self-reference reads were performed on every memory access. In certain embodiments, the process 100 can provide data read from memory with a variable latency. The data provided by a standard reference read can be provided with a lower latency than data read by a self-reference read. A data ready signal can be provided to a memory controller as an indication of valid read data being ready for further processing. A dedicated pin can be included on the memory controller to receive the data ready signal in one embodiment. Additional circuitry can be included to determine when valid read data is ready for further processing, for example, in a managed memory solution. In this way, a dedicated pin may not be needed for the data ready signal. In some embodiments, the additional circuitry can implement a variable latency read in connection with a double data rate type 3 (DDR3) memory controller. With a variable latency read, a memory can provide valid data with lower power and lower average latency by selectively performing self-reference reads compared to only performing self-reference reads. In certain embodiments, most reads in such a method can provide valid read data with lower latency than self-reference reads. According to some embodiments, data read from memory can be provided with a fixed latency. In such embodiments, the data read by a standard read can be provided to a memory controller with approximately the same latency as a self-reference read. Using approximately the same latency for all read accesses can simplify the design of a memory controller. With a fixed latency for reading data from memory, selectively performing self-reference reads should consume lower power than only performing self-reference reads. The methods of selectively performing a self-reference read herein can be implemented in a variety of ways in hardware and/or firmware. For instance, selectively performing self-reference reads can be implemented in the context of memory cells that are read with relatively low-swing signals. The principles and advantages described herein can be applied to memories with variations in resistance among memory cells in the same memory array and/or with variations in resistance in signal paths among memory cells in the same memory array. High-density MRAM is one example of such a memory.
MRAMs can be highly scalable and high density, and can have relatively low power consumption, relatively low latency for programming and reading, and high endurance. Figure 2 is a schematic diagram of an example memory 200 according to an embodiment. As illustrated in Figure 2, the memory 200 can include a memory array 216 and a sense circuit 225 to sense a value read from a memory cell 220 in the memory array 216. The memory 200 can also include an error detection circuit 290, which can detect errors associated with data read from the memory array 216 and/or any of the conditions described herein. The error detection circuit 290 can include an ECC encoder/decoder. The memory 200 can include fewer or more components than illustrated. The memory 200 can implement any combination of features described with reference to the method 100. The memory array 216 includes a plurality of memory cells 220. The memory cells 220 can store data digits, such as bits of a codeword that includes data and corresponding error correction codes. The memory cells 220 can store binary data digits in one embodiment. In another embodiment, the memory cells 220 can store multi-level data digits that correspond to three or more different states of a particular memory cell 220. The illustrated memory cell 220 is an MTJ STT-MRAM cell. The illustrated memory cell 220 includes a spin-transfer torque (STT) MTJ memory element 222 that is electrically connected in series with an access transistor 224. The access transistor 224 can be a field effect transistor (FET), such as an NMOS transistor or, more generally, an insulated gate FET. It will be understood that these FETs can have gates made out of materials other than metals, such as polycrystalline silicon, and can have dielectric "oxide" regions made from dielectrics other than silicon oxide, such as from silicon nitride or high-k dielectrics. A first end of the STT MTJ memory element 222 can be electrically connected to a drain of the transistor 224. A second end of the STT MTJ memory element 222 can be electrically connected to a digit line. The access transistor 224 can also have a source electrically coupled to a source line and a gate electrically coupled to a word line. The STT MTJ memory element 222 can be modeled as a variable resistor. Changing a state of the STT MTJ memory element 222 via spin transfer can occur when a current passing through a magnetic layer of the STT MTJ memory element 222 becomes spin polarized and imparts a spin torque on a free layer of the STT MTJ memory element 222. When a sufficient spin torque is applied to the free layer, the magnetization orientation of the free layer can be switched between two opposite directions. Depending on the direction of the current, the STT MTJ memory element 222 can be switched between a low resistance state and a high resistance state. MRAMs can encounter difficulties in reading data due to variations in resistance. For example, in the memory 200, the variation of resistances between MTJ memory elements 222 of different memory cells 220 can cause difficulties in accurately determining data stored in the memory cells 220. Alternatively or additionally, the variation in resistance between access transistors 224 of different memory cells 220 and/or variation in parasitic resistances between digit lines associated with different memory cells 220 can cause difficulties in accurately determining data stored in the memory cells 220.
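As a rough illustration of why a single shared reference can misread cells when these resistances vary, the MTJ element and its signal path can be modeled as a two-state variable resistor, as in the C sketch below; the structure, field names, and any numeric values a caller would supply are illustrative assumptions, not parameters from this disclosure.

```c
#include <stdbool.h>

/* Toy model of an MTJ memory element as a variable resistor, with series
 * resistance contributed by the access transistor and digit line. */
struct mtj_cell {
    double r_low_ohm;    /* low resistance state of this particular cell */
    double r_high_ohm;   /* high resistance state of this particular cell */
    double r_path_ohm;   /* access transistor plus digit line parasitics */
    bool   state_high;   /* currently programmed state */
};

/* Effective resistance seen by the sense circuit for the current state. */
static double effective_resistance(const struct mtj_cell *c)
{
    return (c->state_high ? c->r_high_ohm : c->r_low_ohm) + c->r_path_ohm;
}

/* Standard reference read: compare against one array-wide reference value.
 * If one cell's low-state resistance plus path resistance exceeds the
 * reference while another cell's high-state total falls below it, this
 * comparison misreads one of them; comparing a cell against itself, as in a
 * self-reference read, sidesteps that cell-to-cell variation. */
static bool standard_reference_read_model(const struct mtj_cell *c, double r_reference_ohm)
{
    return effective_resistance(c) > r_reference_ohm;   /* true -> sensed as high state */
}
```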
The sense circuit 225 can efficiently and reliably determine valid data digits read from memory cells 220 of the memory array 216 in the presence of one or more of these variations in resistance. A stored data digit can be read out of a memory cell 220 by measuring a resistance of the memory cell 220. An example signal path is shown in Figure 2 for one memory cell 220. A value read from the memory cell 220 can be provided to the sense circuit 225. As illustrated, the sense circuit 225 includes a sense output circuit 226, a self-reference circuit 230, a reference circuit 240, a pass transistor 260, and a storage element 270. While the sense circuit 225 is illustrated for one digit line in Figure 2, the sense circuit 225 can include a dedicated sense output circuit 226, self-reference circuit 230, pass transistor 260, and storage element 270 for each digit line. In certain embodiments, any combination of the sense output circuit 226, the self-reference circuit 230, the pass transistor 260, and storage element 270 can be provided in connection with each digit line in the memory array 216. The sense circuit 225 can operate in a first mode and a second mode. In one embodiment, the second mode can be activated only when errors in a codeword are determined to be uncorrectable. The sense output circuit 226 can compare a value read from a selected memory cell of the memory array associated with a first read operation with a reference signal in the first mode, or compare a value read from the selected memory cell of the memory array associated with a second read operation with a self-reference value in the second mode, based on a select signal. The select signal can be indicative of any combination of the conditions associated with reading from a memory described herein. For instance, the select signal can be indicative of an error in data read from the memory being uncorrectable via ECC. With reference to Figure 2, a value read from the memory cell 220 can be provided via a pass transistor 260 to a storage element 270, such as a capacitor. The pass transistor 260 can pass the value read from the memory cell 220 to the storage element 270 when a read enable signal is asserted. The value stored by the capacitor can be provided to an input of a sense amplifier 280. The value read from the memory cell 220 can also be provided to a self-reference circuit 230. The self-reference circuit 230 can store a value read from the memory cell 220 for a comparison with a subsequent value read from the memory cell. The self-reference circuit 230 can provide a self-reference value to a sense output circuit 226 during a subsequent read operation from the memory cell 220. The self-reference value can represent a value previously read from the memory cell 220. A reference circuit 240 can provide a reference value to the sense output circuit 226. The reference circuit 240 can be any suitable circuit configured to provide a reference value for determining a state of a memory cell 220. As one example, the reference circuit 240 can include a reference memory cell functionally similar to the memory cell 220. Such a reference cell can be configured to generate a high state value, a low state value, or a value between the high state and the low state. In one embodiment, one reference circuit 240 can be implemented with the memory array 216 and one self-reference circuit 230 can be implemented with each digit line of the memory array 216.
The reference value can then be used to determine a value of a data digit stored in the memory cell 220 in a standard reference read. In certain embodiments, the sense output circuit 226 includes a multiplexer 250 and a sense amplifier 280. The multiplexer 250 can receive the reference signal and the self-reference signal. The multiplexer 250 can be implemented by any suitable circuit, such as combinational logic and/or switch(es). The multiplexer 250 can output either the reference value or the self-reference value based on a select signal. The select signal can be indicative of one or more of the conditions described herein, for example, whether an error uncorrectable via ECC has been detected. An output of the multiplexer 250 can be provided to the sense amplifier 280. In this way, the multiplexer 250 can selectively provide the reference value to the sense amplifier 280 for a standard reference read or the self-reference value to the sense amplifier 280 for a self-reference read. The sense amplifier 280 can determine a data digit Data_Out based on comparing a value read from the memory cell 220 with either the reference value or the self-reference value. The data digit Data_Out can be output from the memory 200. For instance, the data digit Data_Out can be provided to an ECC engine in connection with a standard reference read. The ECC engine can be implemented on the same die as the memory 200 and/or external to a die that includes the memory 200. The ECC engine can include an error correction encoder/decoder configured to generate error correction codes, to identify errors in codewords, and to correct errors in codewords. In the embodiment shown in Figure 2, the ECC engine is included in the error detection circuit 290. In another embodiment (not illustrated), the sense output circuit 226 can include separate sense amplifiers for a standard reference read and a self-reference read. The separate sense amplifiers can be separately activated based on one or more of the conditions described herein. Alternatively or additionally, the outputs of the separate amplifiers can be provided to additional circuitry to determine which output of the sense amplifiers to output as the data digit. The error detection circuit 290 can include logic to generate a data ready signal, which can be provided to a memory controller to indicate whether valid data read from the memory is ready for further processing. The logic can be implemented by any suitable circuitry. Alternatively, the data ready signal can be generated by the sense circuit 225. The data ready signal can be used to implement variable latency reads from the memory array 216 in which a standard read has a lower latency than a self-reference read. In one embodiment, a method of reading data from a memory array includes reading data from memory cells of the memory array. The method also includes performing a self-reference read from the same memory cells in response to determining that an error in the data read from the memory cells is uncorrectable via error correction codes. The self-reference read includes comparing a value read from a memory cell to another value read from the same memory cell. In another embodiment, a method of reading data from a memory array includes reading data from memory cells of the memory array by comparing values associated with the memory cells of the memory array with a reference value.
The method also includes performing a self-reference read from at least one of the memory cells in response to detecting a condition associated with the reading data from the memory array. The self-reference read includes comparing a value read from a memory cell to another value read from the same memory cell.In another embodiment, an apparatus includes a memory array, an error correction encoder/decoder, and a sense circuit. The memory array includes memory cells and is configured to store codewords that include data and corresponding error correction codes. The error correction encoder/decoder is configured to generate error correction codes, to identify errors in codewords, and to correct errors in codewords. The sense circuit has a first mode and a second mode. The second mode is activated only when errors in a codeword are determined to be uncorrectable. The sense circuit includes a reference circuit, a self-reference circuit, and a sense output circuit. The reference circuit is configured to generate a reference signal for the first mode. The self-reference circuit is configured to receive a value read from a selected memory cell of the memory array associated with a first read operation, and to generate a self-reference signal based on the received value for the second mode. The sense output circuit is configured to perform a first comparison of the value read from the selected memory cell of the memory array associated with the first read operation with the reference signal. The sense output circuit is also configured to perform a second comparison of the a value read from the selected memory cell of the memory array associated with a second read operation with the self-reference value, the second read operation occurring subsequent to the first read operation. The sense circuit is also configured to output a data digit based on a select signal and at least one of the first comparison or the second comparison. The data digit represents data stored in the selected memory cell.In another embodiment, a method of reading data from a memory array includes performing a standard reference read operation that includes reading data from selected memory cells of the memory array by comparing values associated with the selected memory cells of the memory array with at least one reference value. Each of the selected memory cells includes a memory element configured to have a different resistance in a first state than in a second state. The method also includes performing a read operation with increased accuracy compared to the standard reference read operation to read data from one or more of the selected memory cells, in response to detecting a condition associated with performing the standard reference read operation.Self-reference reads can be selectively performed by a variety of memories in accordance with the principles and advantages described herein. A memory device, such as an MRAM device, according to the embodiments described above can be incorporated in various electronic devices. Examples of the electronic devices can include, but are not limited to, consumer electronic products, electronic circuits, electronic circuit components, parts of the consumer electronic products, electronic test equipments, etc. 
Examples of the consumer electronic products include, but are not limited to, a mobile phone, a telephone, a television, a computer monitor, a computer, a hand-held computer, a laptop computer, a tablet computer, a personal digital assistant (PDA), a microwave, a refrigerator, a stereo system, a cassette recorder or player, a DVD player, a CD player, a VCR, an MP3 player, a radio, a camcorder, an optical camera, a digital camera, a washer, a dryer, a washer/dryer, a copier, a facsimile machine, a scanner, a multi functional peripheral device, a wrist watch, a clock, etc. Further, the electronic device can include unfinished products.The foregoing description and claims may refer to elements or features as being "connected" or "coupled" together. As used herein, unless expressly stated to the contrary, "connected" means that one element/feature is directly or indirectly connected to another element/feature, and not necessarily mechanically. Likewise, unless expressly stated to the contrary, "coupled" means that one element/feature is directly or indirectly coupled to another element/feature, and not necessarily mechanically. Thus, although the drawings illustrate various examples of arrangements of elements and components, additional intervening elements, devices, features, or components may be present in an actual embodiment.Any combination of the features of the methods described herein may be embodied in code stored in a non-transitory computer readable medium. When executed, the non-transitory computer readable medium may cause some or all of any of the methods described herein to be performed. It will be understood that any of the methods discussed herein may include greater or fewer operations and that the operations may be performed in any order, as appropriate.Various embodiments have been described above. Although described with reference to these specific embodiments, the descriptions are intended to be illustrative and are not intended to be limiting. Various modifications and applications may occur to those skilled in the art.Further embodiments are set out in the following clauses in which:1. A method of reading data from a memory array, the method comprising:reading data from memory cells of the memory array; andin response to determining that an error in the data read from the memory cells is uncorrectable via error correction codes, performing a self-reference read from the same memory cells, wherein the self-reference read comprises comparing a value read from a memory cell to another value read from the same memory cell.2. The method of Clause 1, wherein a first latency in providing valid data read from the memory cells to a processor associated with said reading is less than a second latency in providing valid data read from the same memory cells to the processor in said performing the self-reference read.3. The method of Clause 1, wherein a first latency in providing valid data read from the memory cells to a processor associated with said reading is approximately the same as a second latency in providing valid data read from the same memory cells to the processor in said performing the self-reference read.4. The method of Clause 1, wherein the memory cells comprise magnetoresitive random access memory (MRAM) cells.5. 
The method of Clause 1, wherein more than one codeword is read at a time, wherein performing the self-reference read further comprises performing a self-reference read only on digits of a codeword identified as having been read with an uncorrectable error.6. The method of Clause 1, wherein preforming the self-reference read further comprises re-programming the memory cell when comparing the value read from the memory cell to the another value read from the same memory cell indicates that the compared values correspond to different states of the memory cell.7. The method of Clause 1, further comprising correcting correctable errors in the data read from the memory cells via decoding of error correction codes.8. The method of Clause 1, further comprising providing the data from said reading to a processor in response to determining that all errors in the data are correctable via error correction codes.9. A method of reading data from a memory array, the method comprising:reading data from memory cells of the memory array by comparing values associated with the memory cells of the memory array with a reference value; andin response to detecting a condition associated with said reading, performing a self-reference read from at least one of the memory cells, wherein the self-reference read comprises comparing a value read from a memory cell to another value read from the same memory cell.10. The method of Clause 9, wherein the condition is indicative of an error in the data read from the memory cells being uncorrectable via error correction codes.11. The method of Clause 9, wherein the condition is indicative of at least one suspected error in the data read from the memory cells.12. The method of Clause 9, wherein the condition is indicative of a number of errors in the data read from the memory cells having at least a threshold number of errors, and wherein the threshold number of errors is greater than 1.13. The method of Clause 9, further comprising accessing, by a processor, data from the memory array with a variable latency, wherein data accessed by said reading has a lower latency than data accessed by said performing the self-reference read.14. The method of Clause 9, wherein each of the memory cells comprises a memory element configured to have a different resistance in a first state than in a second state.15. The method of Clause 14, wherein performing the self-reference read comprises:programming a selected memory cell to the first state; andre-programming the selected memory cell to the second state when comparing the value read from the selected memory cell to the another value read from the selected memory cell indicates that one of the compared values corresponds to the first state and the other of the compared values corresponds to the second state.16. The method of Clause 9, wherein the memory cells comprise magnetic tunnel junction spin torque transfer magnetoresitive random access memory (MTJ STT-MRAM) cells.17. The method of Clause 9, further comprising correcting one or more errors in the data read from the memory cells via error correction codes when the condition is not detected.18. The method of Clause 9, further comprising correcting errors in the data from said reading via error correction codes, and providing the corrected data to a processor when no errors are detected in the corrected data.19. The method of Clause 9, wherein the reference value is generated by a reference cell comprising a memory cell having a resistive circuit element.20. 
An apparatus comprising:a memory array comprising memory cells, the memory array configured to store codewords comprising data and corresponding error correction codes;an error correction encoder/decoder configured to generate error correction codes, to identify errors in codewords, and to correct errors in codewords; anda sense circuit having a first mode and a second mode, wherein the second mode is activated only when errors in a codeword are determined to be uncorrectable, the sense circuit comprising:a reference circuit configured to generate a reference signal for the first mode;a self-reference circuit configured to receive a value read from a selected memory cell of the memory array associated with a first read operation, and to generate a self-reference signal based on the received value for the second mode; anda sense output circuit configured to:perform a first comparison of the value read from the selected memory cell of the memory array associated with the first read operation with the reference signal;perform a second comparison of the a value read from the selected memory cell of the memory array associated with a second read operation with the self-reference value, the second read operation occurring subsequent to the first read operation; andoutput a data digit based on a select signal and at least one of the first comparison or the second comparison, the data digit representing data stored in the selected memory cell.21. The apparatus of Clause 20, wherein the sense output circuit is configured to perform either the first comparison or the second comparison based on the select signal.22. The apparatus of Clause 20, wherein the sense output circuit comprises a sense amplifier configured to determine the data digit.23. The apparatus of Clause 22, wherein the sense output circuit further comprises a multiplexer configured to provide either the reference signal or the self-reference signal to the sense amplifier based on the select signal.24. The apparatus of Clause 20, wherein the memory cells comprise magnetoresitive random access memory (MRAM) cells.25. The apparatus of Clause 20, wherein the error correction code encoder/decoder is further configured to correct one or more errors in the data digits read by the sense circuit.26. The apparatus of Clause 25, wherein the error correction code encoder/decoder and the memory array are included on a single die.27. The apparatus of Clause 20, wherein the apparatus is further configured to generate a data ready signal indicative of whether the data digit is valid.28. A method of reading data from a memory array, the method comprising:performing a standard reference read operation comprising reading data from selected memory cells of the memory array by comparing values associated with the selected memory cells of the memory array with at least one reference value, wherein each of the selected memory cells comprises a memory element configured to have a different resistance in a first state than in a second state; andin response to detecting a condition associated with said performing the standard reference read operation, performing a read operation with increased accuracy compared to the standard reference read operation to read data from one or more of the selected memory cells.29. The method of Clause 28, wherein the read operation with increased accuracy comprises two or more reads from the same memory cell of the selected memory cells.30. 
The method of Clause 28, wherein the read operation with increased accuracy involves a higher power consumption compared to the standard reference read operation.31. The method of Clause 28, wherein the read operation with increased accuracy comprises a self-reference read, wherein the self-reference read comprises comparing a value read from a memory cell to another value read from the same memory cell.32. The method of Clause 28, wherein the condition is indicative of an error in the data read from the memory cells in the standard reference read operation being uncorrectable via error correction codes.33. The method of Clause 28, wherein the memory array comprises magnetoresistive random access memory (MRAM) cells. |
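As a closing illustration of the selective self-reference read set out in the clauses above, the following C sketch shows the overall flow under stated assumptions: standard_reference_read, ecc_decode, and self_reference_read are hypothetical placeholders for the memory and ECC hardware, and the stub return values are arbitrary.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stubs standing in for the memory and ECC hardware. */
static uint8_t standard_reference_read(unsigned addr) { (void)addr; return 0xA5; }
static bool ecc_decode(uint8_t *codeword) { (void)codeword; return true; /* correctable? */ }
static uint8_t self_reference_read(unsigned addr) { (void)addr; return 0xA5; }

/*
 * Sketch of the selective self-reference read flow: a lower-latency
 * standard reference read is attempted first, and the higher-latency
 * self-reference read is performed only when the ECC decoder reports
 * an uncorrectable error.
 */
static uint8_t read_with_fallback(unsigned addr)
{
    uint8_t data = standard_reference_read(addr);
    if (ecc_decode(&data)) {
        return data;                      /* valid (possibly corrected) data, low latency */
    }
    return self_reference_read(addr);     /* slower, more accurate fallback */
}

int main(void)
{
    printf("data = 0x%02X\n", read_with_fallback(0x100));
    return 0;
}
```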
Systems, methods, and apparatus are described that offer improved performance of an Inter-Integrated Circuit (I2C) bus. A method of testing a spike filter in a legacy I2C device includes generating a command to be transmitted on a serial bus in accordance with an I2C protocol, where the command includes an address corresponding to the legacy slave device, merging the command with a sequence of pulses to obtain a test signal, transmitting the test signal on the serial bus, and determining the efficacy of the spike filter in the legacy slave device based on whether the legacy slave device acknowledges the test signal. Each pulse in the sequence of pulses has a duration of less than 50 ns, and the spike filter is expected to suppress pulses that have a duration of less than 50 ns. |
1. A method for detecting capabilities of a device coupled to a serial bus, comprising: generating a command to be transmitted on the serial bus according to an Inter-Integrated Circuit (I2C) protocol, wherein the command includes an address corresponding to a first slave device; merging the command with a sequence of pulses to obtain a test signal, wherein each pulse in the sequence of pulses has a duration of less than 50 nanoseconds; transmitting the test signal on the serial bus; and determining efficacy of a spike filter in the first slave device based on whether the first slave device responds correctly to the command, wherein the spike filter is expected to suppress pulses having a duration of less than 50 nanoseconds.2. The method of claim 1, further comprising: receiving an acknowledgement from the first slave device in response to the command, wherein the acknowledgement indicates that the spike filter in the first slave device is operating effectively.3. The method of claim 1, wherein determining the efficacy of the spike filter comprises: causing a first value to be written to a register of the first slave device; reading a second value from the register of the first slave device; and determining that the spike filter is effective when the first value is equal to the second value.4. The method of claim 1, further comprising: determining presence of the first slave device by transmitting the command at one or more clock frequencies without the sequence of pulses, wherein the first slave device is configured to acknowledge the command when the first slave device is present on the serial bus and is adapted to communicate using at least one of the one or more clock frequencies.5. The method of claim 4, wherein the test signal is transmitted at a clock frequency corresponding to a lowest one of the one or more clock frequencies.6. The method of claim 1, wherein merging the command with the sequence of pulses comprises: incorporating the sequence of pulses into each of a plurality of intervals during which a clock signal transmitted on the serial bus is in a low state.7. The method of claim 1, wherein merging the command with the sequence of pulses comprises: incorporating the sequence of pulses into each of a plurality of intervals during which a clock signal transmitted on the serial bus is in a high state.8. The method of claim 1, wherein each pulse comprises a 40 nanosecond period of time during which the pulse is in a high state.9. The method of claim 1, wherein the sequence of pulses is transmitted on a serial clock line (SCL) of the serial bus.10. The method of claim 1, wherein the sequence of pulses is transmitted on a serial data line (SDA) of the serial bus.11. An apparatus coupled to a serial bus, comprising: means for generating a command to be transmitted on the serial bus according to an Inter-Integrated Circuit (I2C) protocol, wherein the command includes an address corresponding to a first slave device; means for merging the command with a sequence of pulses to obtain a test signal, wherein each pulse in the sequence of pulses has a duration of less than 50 nanoseconds; means for transmitting the test signal on the serial bus; and means for determining efficacy of a spike filter in the first slave device based on whether the first slave device responds correctly to the command, wherein the spike filter is expected to suppress pulses having a duration of less than 50 nanoseconds.12. The apparatus of claim 11, wherein the means for determining efficacy is configured to: receive an acknowledgement from the first slave device in response to the command;
and determine that the spike filter in the first slave device is operating effectively based on receiving the acknowledgement.13. The apparatus of claim 11, wherein the means for determining the efficacy of the spike filter is configured to: cause a first value to be written to a register of the first slave device; read a second value from the register of the first slave device; and determine that the spike filter is effective when the first value is equal to the second value.14. The apparatus of claim 11, further comprising: means for determining presence of the first slave device by transmitting the command at one or more clock frequencies without the sequence of pulses, wherein the first slave device is configured to acknowledge the command when the first slave device is present on the serial bus and is adapted to communicate using at least one of the one or more clock frequencies.15. The apparatus of claim 14, wherein the test signal is transmitted at a clock frequency corresponding to a lowest one of the one or more clock frequencies.16. The apparatus of claim 11, wherein the means for merging the command with the sequence of pulses is configured to: incorporate the sequence of pulses into each of a plurality of intervals during which a clock signal transmitted on the serial bus is in a low state.17. The apparatus of claim 11, wherein the means for merging the command with the sequence of pulses is configured to: incorporate the sequence of pulses into each of a plurality of intervals during which a clock signal transmitted on the serial bus is in a high state.18. The apparatus of claim 11, wherein each pulse comprises a 40 nanosecond period of time during which the pulse is in a high state.19. The apparatus of claim 11, wherein the sequence of pulses is transmitted on a serial clock line (SCL) of the serial bus.20. The apparatus of claim 11, wherein the sequence of pulses is transmitted on a serial data line (SDA) of the serial bus.21. An apparatus for detecting capabilities of a device coupled to a serial bus, comprising: a processing system configured to: generate a command to be transmitted on the serial bus according to an Inter-Integrated Circuit (I2C) protocol, wherein the command includes an address corresponding to a first slave device; merge the command with a sequence of pulses to obtain a test signal, wherein each pulse in the sequence of pulses has a duration of less than 50 nanoseconds; transmit the test signal on the serial bus; and determine efficacy of a spike filter in the first slave device based on whether the first slave device responds correctly to the command, wherein the spike filter is expected to suppress pulses having a duration of less than 50 nanoseconds.22. The apparatus of claim 21, wherein the first slave device responds correctly to the command by acknowledging the command.23. The apparatus of claim 21, wherein the processing system is configured to: cause a first value to be written to a register of the first slave device; read a second value from the register of the first slave device; and determine that the spike filter is effective when the first value is equal to the second value.24. The apparatus of claim 21, wherein the processing system is configured to: incorporate the sequence of pulses into each of a plurality of intervals during which a clock signal transmitted on the serial bus is in a low state.25. The apparatus of claim 21, wherein the processing system is configured to: incorporate the sequence of pulses into each of a plurality of intervals during which a clock signal transmitted on the serial bus is in a high state.
26. A processor-readable storage medium comprising code for: generating a command to be transmitted on a serial bus according to an Inter-Integrated Circuit (I2C) protocol, wherein the command includes an address corresponding to a first slave device; merging the command with a sequence of pulses to obtain a test signal, wherein each pulse in the sequence of pulses has a duration of less than 50 nanoseconds; transmitting the test signal on the serial bus; and determining efficacy of a spike filter in the first slave device based on whether the first slave device responds correctly to the command, wherein the spike filter is expected to suppress pulses having a duration of less than 50 nanoseconds.27. The processor-readable storage medium of claim 26, wherein the first slave device responds correctly to the command by acknowledging the command.28. The processor-readable storage medium of claim 26, further comprising code for: causing a first value to be written to a register of the first slave device; reading a second value from the register of the first slave device; and determining that the spike filter is effective when the first value is equal to the second value.29. The processor-readable storage medium of claim 26, further comprising code for: incorporating the sequence of pulses into each of a plurality of intervals during which a clock signal transmitted on the serial bus is in a low state.30. The processor-readable storage medium of claim 26, further comprising code for: incorporating the sequence of pulses into each of a plurality of intervals during which a clock signal transmitted on the serial bus is in a high state. |
Test for 50 nanosecond spike filterCross-reference to related applicationsThis application claims the benefit of provisional application No. 62/175,723 filed on June 15, 2015 in the U.S. Patent Office, and non-provisional application No. 15/179,470 filed in the U.S. Patent and Trademark Office on June 10, 2016. The full content of the application is included here.backgroundfieldThe present disclosure relates generally to an interface between a processor and a peripheral device, and more particularly to improving the data communication capabilities of the serial bus.Background techniqueAn inter-integrated serial bus (which may also be referred to as an I2C bus or an I2C bus) is a serial single-ended computer bus that is intended to connect low-speed peripherals to a processor. The I2C bus is a multi-master bus in which each device can act as a master and a slave for different messages transmitted on the I2C bus. The I2C bus can use only two bi-directional open-drain connectors, including a serial data line (SDA) and a serial clock line (SCL). These connectors typically include signal conductors terminated by pull-up resistors. The I2C's original implementation supports data signaling rates up to 100 kbit/s (100 kbps) in standard mode (Sm) operation, with the more recent standard supporting 400 kbps in fast mode (Fm) operation and fast Supports 1 megabit per second (Mbps) speeds in Mode + (Fm+) operation.However, in some systems and devices, higher bandwidth is needed to support communication between certain types of devices. For example, a mobile communication device, such as a cell phone, may employ multiple devices (including cameras, displays, and various communication interfaces) that consume significant bandwidth. When mixed signaling (including signaling according to the conventional I2C protocol) is to be used to maintain compatibility with legacy devices, it may be difficult to obtain higher bandwidth. For example, it may be difficult to determine if the I2C device can coexist on a serial bus that the enhanced device uses to transmit data and commands at a bit rate higher than these I2C can handle. Therefore, there has been a need to provide optimized communications on a serial interface of a bus that is configured to connect a master component and a slave component within a mobile device.OverviewThe embodiments disclosed herein provide systems, methods, and apparatus that can determine whether legacy I2C devices can coexist with enhanced devices on a common serial bus. In one example, a spike filter is tested to determine if the spike filter can suppress a time-lapse pulse sequence with less than 50 nanoseconds (50 ns).In aspects of the present disclosure, a data communication method may be performed by a master device. The method includes: generating a command to be transmitted on a serial bus according to the I2C protocol, wherein the command includes an address corresponding to a first slave device; merging the command and the pulse sequence to obtain a test signal; transmitting on the serial bus The test signal; and the power or effectiveness of the spike filter in the first slave device is determined based on whether the first slave device acknowledges the command. Each pulse in the pulse train has a duration of less than 50 ns. This spike filter is expected to suppress the duration of pulses with less than 50 ns.In one aspect, the first slave device can respond to the command correctly by confirming the command. 
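The test flow summarized in this overview can be sketched in C as follows. All function names here (build_i2c_command, merge_pulse_train, transmit_and_check_ack) are hypothetical placeholders for the master device's bus primitives, and the 40 ns pulse width is simply one example value below the 50 ns tSP limit.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical bus primitives standing in for the master device hardware. */
typedef struct { uint8_t bytes[4]; unsigned len; } i2c_frame;

static i2c_frame build_i2c_command(uint8_t addr7, bool write) {
    i2c_frame f = { { (uint8_t)((addr7 << 1) | (write ? 0u : 1u)) }, 1 };
    return f;
}
static void merge_pulse_train(i2c_frame *f, unsigned pulse_ns) { (void)f; (void)pulse_ns; }
static bool transmit_and_check_ack(const i2c_frame *f) { (void)f; return true; }

/*
 * Sketch of the test described in the overview: send a normal-looking
 * command while injecting pulses shorter than tSP (50 ns); if the slave
 * still acknowledges, its spike filter is treated as effective.
 */
static bool spike_filter_effective(uint8_t slave_addr7)
{
    i2c_frame cmd = build_i2c_command(slave_addr7, true);
    merge_pulse_train(&cmd, 40);          /* 40 ns pulses, below the 50 ns tSP limit */
    return transmit_and_check_ack(&cmd);  /* ACK implies the pulses were suppressed */
}

int main(void)
{
    printf("filter %s\n", spike_filter_effective(0x3C) ? "effective" : "suspect");
    return 0;
}
```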
The master device may receive acknowledgment from the slave device in response to the command. The acknowledgement may be an indication that the spike filter in the first slave device is operating effectively.In one aspect, determining the efficacy of the spike filter includes reading a first value to a register of the first slave device, and reading the second value from the register of the first slave device. The master device may determine that the spike filter is valid when the first value is equal to the second value.In some aspects, the master device may determine the presence of the first slave device by transmitting the command at one or more clock frequencies without the pulse sequence. The first slave device may be configured to acknowledge the command when the first device is present on the serial bus and is adapted to communicate using at least one of the one or more clock frequencies. The test signal may be transmitted at a clock frequency corresponding to the lowest one of the one or more clock frequencies.In one aspect, the master device can merge the command with the pulse sequence by merging the pulse sequence into each of a plurality of intervals when the clock signal transmitted on the serial bus is in a low state.In another aspect, the master device can merge the command with the pulse sequence by incorporating a pulse train into each of a plurality of intervals when the clock signal transmitted on the serial bus is in a high state.In aspects, each pulse includes a 40 nanosecond time period during which the pulse is in a high state. The pulse sequence is transmitted on the serial bus line of the serial bus or the serial data line of the serial bus.In aspects of the present disclosure, an apparatus coupled to a serial bus includes means for generating a command to be transmitted on a serial bus according to the I2C protocol, wherein the command includes an address corresponding to a first slave device; a means for merging the command with a pulse sequence to obtain a test signal; means for transmitting a test signal on a serial bus; and means for determining a spike in the first slave device based on whether the first slave device acknowledges the command The efficiency or effectiveness of the filter. Each pulse in the pulse train has a duration of less than 50 ns. This spike filter is expected to suppress the duration of pulses with less than 50 ns.In aspects of the present disclosure, an apparatus for detecting a capability of a device coupled to a serial bus includes a processing system configured to generate a command to be transmitted on a serial bus according to the I2C protocol, wherein the command includes corresponding to Address of the first slave device; combining the command with a pulse train to obtain a test signal, wherein each pulse in the pulse train has a duration of less than 50 nanoseconds; the test signal is transmitted on the serial bus; and based on the first Whether the slave device responds to the command correctly determines the power or effectiveness of the spike filter in the first slave device. The spike filter may be expected to suppress a duration pulse having less than 50 ns.In various aspects of the present disclosure, a processor-readable storage medium is disclosed. The storage medium may be a non-transitory storage medium and may store code that can be executed by one or more processors. 
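Returning for a moment to the write/read-back check described a few paragraphs above, a minimal sketch might look like the following; the helper names, the register address, and the test pattern are all hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helpers: a register write and read performed while the
 * test signal (command merged with sub-50 ns pulses) is driven on the bus. */
static bool i2c_write_reg_with_pulses(uint8_t addr7, uint8_t reg, uint8_t val)
{ (void)addr7; (void)reg; (void)val; return true; }
static bool i2c_read_reg_with_pulses(uint8_t addr7, uint8_t reg, uint8_t *val)
{ (void)addr7; (void)reg; *val = 0x5A; return true; }

/*
 * Sketch of the write/read-back efficacy check: a first value is written
 * to a slave register and a second value is read back; the spike filter
 * is treated as effective only if the two values match.
 */
static bool spike_filter_effective_by_readback(uint8_t addr7, uint8_t reg)
{
    const uint8_t first_value = 0x5A;     /* arbitrary test pattern */
    uint8_t second_value = 0;

    if (!i2c_write_reg_with_pulses(addr7, reg, first_value)) return false;
    if (!i2c_read_reg_with_pulses(addr7, reg, &second_value)) return false;
    return second_value == first_value;
}

int main(void)
{
    printf("%s\n", spike_filter_effective_by_readback(0x3C, 0x01) ? "valid" : "invalid");
    return 0;
}
```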
In various examples, the processor-readable storage medium has code for: generating commands to be transmitted on a serial bus according to an Inter-Integrated Circuit (I2C) protocol, wherein the commands include corresponding to a first slave device Address; this command is combined with a pulse train to obtain a test signal where each pulse in the pulse train has a duration of less than 50 nanoseconds; the test signal is transmitted on the serial bus; and based on whether the first slave is correct or not The response to the command determines the power or effectiveness of the spike filter in the first slave device. The spike filter may be expected to suppress a duration pulse having less than 50 ns.Brief description of the drawingsFIG. 1 depicts an apparatus that employs a data link between various integrated circuit (IC) devices, the data link selectively operating according to one of a plurality of available standards.FIG. 2 illustrates certain aspects of a device connected to an I2C communication bus.Figure 3 illustrates the configuration of an I2C connected to a common serial bus.FIG. 4 illustrates some aspects of the timing relationship between SDA conductors and SCL conductors on a conventional I2C bus.FIG. 5 is a timing diagram illustrating the timing associated with multiple frames transmitted on the I2C bus.FIG. 6 illustrates the timing associated with sending a command word to a slave device according to the I2C protocol.FIG. 7 illustrates the timing of pulses that can be filtered by the I2C device.FIG. 8 illustrates some aspects associated with the operation of spike filters in legacy I2C devices.FIG. 9 illustrates a first example of a test transmission according to certain aspects disclosed herein.FIG. 10 illustrates a second example of a test transmission according to certain aspects disclosed herein.FIG. 11 illustrates a process for testing a spike filter in an legacy I2C device in accordance with certain aspects disclosed herein.FIG. 12 illustrates an example of a hardware implementation of a receiving device that communicates over an I2C bus according to one or more aspects disclosed herein.FIG. 13 is a flowchart of a method for detecting the capabilities of a device coupled to a serial bus in accordance with one or more aspects disclosed herein.14 illustrates an example of a hardware implementation of a device that employs processing circuitry adapted in accordance with certain aspects disclosed herein.A detailed descriptionVarious aspects will now be described with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. However, it is obvious that this aspect(s) can be practiced without these specific details.As used in this application, the terms "component," "module," "system," and similar terms are intended to include computer related entities such as but not limited to hardware, firmware, a combination of hardware and software, software, or in execution. Software. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. As an illustration, both an application running on a computing device and the computing device can be a component. 
One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. These components may communicate by means of local and/or remote processes, such as communicating according to a signal having one or more data packets, such as data packets from other components in the distributed system, such as from the signal and the local system. Data of one component that interacts with and/or interacts with other systems across a network such as the Internet.In addition, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless otherwise specified or clearly apparent from the context, the phrase "X employs A or B" is intended to mean any natural concurrence. That is, the phrase "X employs A or B" is satisfied by any of the following instances: X employs A; X employs B; or X employs both A and B. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.Certain aspects of the present invention may be applicable to communication links between electronic devices deployed as subcomponents of a mobile device such as a cellular phone, a smart phone, a Session Initiation Protocol (SIP) phone, a laptop, Devices, laptops, netbooks, smartbooks, personal digital assistants (PDAs), satellite radios, global positioning system (GPS) devices, smart home devices, smart lighting devices, multimedia devices, video devices, digital audio players (eg, MP3 players ), cameras, game consoles, entertainment devices, automotive components, wearable computing devices (eg, smart watches, health or fitness trackers, glasses, etc.), appliances, sensors, security devices, vending machines, smart meters, remote controls Aircraft, multi-rotor helicopters, or any other similarly functional device.FIG. 1 illustrates an apparatus 100 that employs a communication link between IC devices. In one example, the apparatus 100 may operate as a communication device that uses radio frequency (RF) radios and/or transceivers 106 to perform with a radio access network (RAN), a core access network, the Internet, and/or another network. Communication. Transceiver 106 may be implemented in processing circuit 102, or may be operably coupled to processing circuit 102. The processing circuitry 102 may include one or more IC devices, such as an application specific IC (ASIC) 108 . The ASIC 108 may include one or more processing devices 110, logic circuits 112, and the like. The processing circuitry 102 may include and/or be coupled to a processor-readable storage medium 114, such as a memory device, which may store and maintain data and instructions for execution by the processing circuitry 102 or for other uses. The processing circuitry 102 may be controlled by an operating system, and an application programming interface (API) layer may be provided to support and enable execution of software modules resident in the storage medium 114 . Storage medium 114 may include ROM or RAM, EEPROM, flash memory cards, and/or any memory device that may be used in processing systems and computing platforms. 
The processing circuitry 102 may include or access a local database that may maintain operating parameters and other information for configuring and operating the device 100. The local database may be implemented using one or more of a database module, a flash memory, a magnetic media, an EEPROM, an optical media, a magnetic tape, a floppy disk, a hard disk, or the like. Processing circuitry may also be operatively coupled to other devices such as antenna 122, display 124, operator controls such as buttons 128 and keypad 126, and other components.FIG. 2 is a schematic block diagram illustrating certain aspects of a device 200 including a plurality of devices 202, 220, and 222a-222n connected to a shared bus, such as a serial bus 230. The device 200 may be implemented in, for example, a mobile processing/communication device. Device 200 includes devices 202, 220, and 222a-222n that communicate using serial bus 230. In some implementations, serial bus 230 supports one or more protocols (which may include the I2C protocol). In some examples, slave devices 202, 222a-222n coupled to serial bus 230 include or are coupled to sensors. In another example, the slave device 202 includes a sensor control function 204 that manages or communicates with the sensor. The sensor can be an environmental sensor, a position location sensor, a motion sensor, or the like. In another example, slave device 202 may be an imaging device that includes an imaging sensor. Slave device 202 may include configuration registers 206, control logic 212, transceiver 210, and line drivers/receivers 214a and 214b. Control logic 212 may include a processor, such as a state machine, a sequencer, a signal processor, or a general-purpose processor. Transceiver 210 may include a receiver 210a, a transmitter 210c, and a common circuit 210b (including timing, logic, and storage circuits and/or devices). In one example, transmitter 210c encodes and transmits data based on the timing provided by clock generation circuit 208.Two or more of devices 202, 220 and/or 222a-222n may be adapted in accordance with certain aspects and features disclosed herein to extend the bandwidth and other capabilities provided by a shared bus operating in accordance with conventional I2C protocols. . In one example, the devices 202, 220 and/or 222a-222n may be adapted to support a derivative protocol of the I2C protocol or a protocol other than the I2C protocol. In another example, the devices 202, 220, and/or 222a-222n may be adapted to support a bit rate higher than that typically achievable when the conventional I2C protocol is used to manage communications over the serial bus 230. . The I2C protocol may follow the actual I2C standard and may include specifications defining the electrical and timing aspects of the I2C signal in addition to the data format and I2C bus control and timing.FIG. 3 illustrates the configuration of devices 304, 306, 308, 310, 312, 314, and 316 connected to serial bus 302, whereby three enhanced devices 304, 314, and 316 are adapted or configured to be on the serial bus. A higher data transmission rate is obtained at 302. Enhanced devices 304, 314, and 306 may coexist with conventionally configured I2C devices 306, 308, 310, and 312. Enhanced devices 304, 314, and 316 may alternatively or additionally communicate as desired or as needed using conventional I2C protocols.When the enhanced master device 304 operates as a bus master that controls the serial bus 302, the serial bus 302 can operate at a higher data transfer rate. 
In the depicted example, a single master device 304 may be used as bus master in both I2C mode and enhanced mode, which supports data transfer beyond the data transfer rate achieved when the serial bus 302 is operated in accordance with the conventional I2C protocol. rate. Signaling for higher data rate traffic may utilize certain features of the I2C protocol to enable higher data rate traffic on the serial bus 302 without compromising the legacy I2C device 306 coupled to the serial bus 302 Functionality of 308, 310, and 312.FIG. 4 includes timing diagrams 400 and 420 illustrating the relationship between SDA wire 402 and SCL wire 404 on a conventional I2C bus. The first timing diagram 400 illustrates the timing relationship between the SDA conductor 402 and the SCL conductor 404 as data is transmitted over a conventionally configured I2C bus. SCL wire 404 provides a series of pulses that can be used to sample the data in SDA wire 402 . These pulses (including, for example, pulse 412) may be defined as the time at which it is determined at the receiver that the SCL wire 404 is in a high logic state. When the SCL wire 404 is in a high logic state during data transfer, the data on the SDA wire 402 is required to be stable and valid; when the SCL wire 404 is in a high logic state, the state of the SDA wire 402 is not allowed to change.The specification implemented by the conventional I2C protocol (which may be referred to as the "I2C specification") defines a high period of minimum duration 410 (t high) of the pulses 412 on the SCL conductor 404 . The I2C specification also defines the minimum duration of the settling time 406 (tSU) before the occurrence of the pulse 412, and the minimum duration of the hold time 408 (t hold) after the pulse 412 is terminated. The signaling state of SDA wire 402 is expected to remain stable during setup time 406 and hold time 408 . Settling time 406 defines the maximum time period after transition 416 between signaling states on SDA conductor 402 until the rising edge of pulse 412 on SCL conductor 404 arrives. The hold time 408 defines the minimum time period after the falling edge of the pulse 412 on the SCL wire 404 up to the next transition 418 between the signaling states on the SDA wire 402 . The I2C specification also defines a low duration (t low) duration 414 of the SCL conductor 404. The data on the SDA wire 402 is generally stable and/or may be captured 410 (t high) during the high logic state of the SCL wire 404 after the leading edge of the pulse 412.The second timing diagram 420 of FIG. 4 illustrates signaling states on the SDA conductor 402 and the SCL conductor 404 between data transmissions over a conventional I2C bus. The I2C protocol provides transmission of 8-bit data (bytes) and 7-bit addresses. The receiver can acknowledge transmission by driving the SDA conductor 402 to a low logic state for one clock cycle. A low signaling state indicates that an acknowledgment (ACK) was successfully received and a high signaling state indicates a negative acknowledgement (NACK) indicating a reception failure or a reception error.The start condition 422 is defined as permitting the current bus master to signal that data will be transmitted. The start condition 422 occurs when the SDA conductor 402 transitions from high to low while the SCL conductor 404 is high. 
The I2C bus master initially transmits a start condition 422 (which may also be referred to as a start bit) followed by a 7-bit address of the I2C slave device that the I2C bus master wishes to exchange data with. This address is followed by a single bit indicating whether a read or write operation is to be performed. The addressed I2C slave (if available) responds with an ACK bit. If no I2C slave responds, the I2C bus master can interpret the high logic state of the SDA conductor 402 as a NACK. The master and slave devices may then exchange information bytes in frames, where the bytes are serialized such that the most significant bit (MSB) is transmitted first. When the I2C master transmits a stop condition 424, the byte transfer is completed. Stop condition 424 occurs when SDA wire 402 transitions from low to high while SCL wire 404 is high. The I2C specification requires that all transitions of the SDA wire 402 occur when the SCL wire 404 is low, and the exception can be treated as a start condition 422 or a stop condition 424.FIG. 5 includes diagrams 500 and 520 illustrating timing associated with data transmission on an I2C bus. As illustrated in the first diagram 500, the idle period 514 may occur between a stop condition 508 and a consistent start condition 510. This idle period 514 can be extended and can result in reduced data throughput when the conventional I2C bus remains idle between a stop condition 508 and a consistent start condition 510 . In operation, the busy period 512 begins when the I2C bus master transmits the first start condition 506 followed by data. The busy period 512 ends when the I2C bus master transmits the stop condition 508 and the idle period 514 occurs. The idle period 514 ends when the second start condition 510 is transmitted.The second timing diagram 520 illustrates a method by which the number of idle periods 514 can be reduced. In the illustrated example, data is available for transmission before the end of the first busy period 532. The I2C bus master can transmit a repeated start condition 528 (Sr) instead of a stop condition. The repeated start condition 528 terminates the previous data transmission and simultaneously indicates the start of the next data transmission. The state transition on the SDA wire 522 that corresponds to the repeated start condition 528 is equivalent to the state transition on the SDA wire 522 that corresponds to the start condition 526 that occurred after the idle period 530 . For both the starting condition 526 and the repeating starting condition 528, the SDA conductor 522 transitions from high to low while the SCL conductor 524 is high. When a repeated start condition 528 is used between data transmissions, the second busy period 534 immediately follows the first busy period 532 .6 is a diagram 600 illustrating an example of timing associated with sending a command word to a slave device according to the I2C protocol. In this example, the master initiates the transaction with the start condition 606, whereby the SDA conductor 602 is driven low from high while the SCL conductor remains high. The master device then transmits a clock signal on the SCL conductor 604. The 7-bit address 610 of the slave is then transmitted on the SDA conductor 602. The 7-bit address 610 is followed by a write/read command bit 612, which indicates "write" when low and "read" when high. The slave device may respond with an acknowledgement (ACK) in the next clock period 614 by driving the SDA line 602 low. 
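The address phase just described can be captured in a short sketch; the no-response (NACK) case is discussed immediately after. The helper names are illustrative rather than part of any I2C driver API.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* The 7-bit slave address is shifted up and the read/write bit appended;
 * on the following clock the master samples SDA: low means ACK, high means NACK. */
static uint8_t make_address_byte(uint8_t addr7, bool read)
{
    return (uint8_t)((addr7 << 1) | (read ? 1u : 0u));
}

static bool is_ack(bool sda_sampled_high)
{
    return !sda_sampled_high;   /* SDA pulled low by the slave -> ACK */
}

int main(void)
{
    uint8_t byte = make_address_byte(0x50, false);  /* write to slave address 0x50 */
    printf("address byte: 0x%02X, %s\n", byte, is_ack(false) ? "ACK" : "NACK");
    return 0;
}
```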
If the slave does not respond, the SDA conductor 602 is pulled high and the master regards the lack of a response as a NACK. The master can terminate the transaction with stop condition 608 by driving SDA conductor 602 high from low while SCL conductor 604 is high. This transaction can be used to determine if the slave device with the transferred address coupled to the I2C bus is active.With continued reference to FIG. 3, certain aspects relate to implementations in which higher data rates are provided between enhanced devices 304, 314, 316 that are higher than the data rates supported by the I2C protocol. For example, the increased data rate for communication between the enhanced devices 304, 314, 316 coupled to the serial bus 302 may be achieved by increasing the clock rate on the serial bus 302. Legacy I2C devices 306, 308, 310, 312 may not handle increased clock frequencies and/or may misjudge the signaling transmitted between enhanced devices 304, 314, 316. According to certain aspects, the increased data rate for communication between enhanced devices 304, 314, 316 may be achieved using a shortened clock signal pulse width. Due to the presence of spike filters in the receivers of legacy I2C devices 306, 308, 310, 312, pulses with shortened pulse widths may be ignored by legacy I2C devices 306, 308, 310, 312.FIG. 7 is a timing diagram 700 illustrating the timing of pulses that may be filtered by legacy I2C devices 306, 308, 310, 312. The SCL wire 704 may carry one or more pulses 706 that follow or conform to the I2C protocol. That is, pulse 706 has a high period of time 708 that exceeds the minimum pulse duration specified by the I2C protocol. The low period 718 before the pulse and the low period 720 after the pulse have durations that exceed the minimum low duration specified by the I2C protocol. In timing diagram 700, shorter positive transition pulses 710 and 712 may be filtered out by spike filters provided in receivers of legacy I 2 C devices 306, 308, 310, and 312. The spike filter can also filter out shorter negative transition pulses 714.The I2C specification defines the spike width (tSP) that the input filter of a conventional I2C receiver must reject in certain modes of operation. In one example, tSP = 50 ns, then a duration pulse with less than 50 ns is expected to be intercepted by the I2C-compatible spike filter. Applying this example to FIG. 7, any pulse shorter than 50 ns in pulses 710, 712, 714 is expected to be filtered out and ignored by the conventional I2C receiver. The enhanced devices 304, 314, 316 may communicate by transmitting a time (tSEC) pulse with less than the tSP pulse width on the SDA conductor 702 and/or the SCL conductor 704, where tSP is specified by the I2C specification.Referring also to FIG. 4, the minimum duration of the durations 410, 414 of the high and low logic states of SDA wire 402 and SCL wire 404 are defined for certain modes of operation in the I2C specification. In the example of Fm operation, the duration 410 of each logic high period must be greater than 0.6 μs, and the duration 414 of each logic low period (t low) must be greater than 1.3 μs, where the maximum value is not specified.FIG. 8 illustrates certain aspects associated with the operation of spike filter 812 in legacy I2C devices 306, 308, 310, 312. The first diagram 800 illustrates an example of the input signal 802 provided to the spike filter 812, and the resultant output signal 804. 
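As an aside before the FIG. 8 behavior is examined in more detail, the limits quoted above (the 50 ns tSP spike width and the Fm minimum high and low periods) can be collected into a small helper. This sketch is for orientation only and does not attempt to capture the full set of I2C timing parameters.

```c
#include <stdbool.h>
#include <stdio.h>

/* Limits quoted above (Fm operation and the tSP spike width); treat these
 * as illustrative constants rather than a complete parameter set. */
#define T_SP_NS           50.0   /* pulses shorter than this should be rejected */
#define FM_T_HIGH_MIN_NS 600.0   /* minimum SCL high period in Fm (0.6 us)      */
#define FM_T_LOW_MIN_NS 1300.0   /* minimum SCL low period in Fm (1.3 us)       */

/* A pulse narrower than tSP is expected to be suppressed by the filter. */
static bool expected_to_be_filtered(double pulse_width_ns)
{
    return pulse_width_ns < T_SP_NS;
}

/* A legitimate Fm clock phase must meet the minimum high/low durations. */
static bool meets_fm_clock_timing(double t_high_ns, double t_low_ns)
{
    return t_high_ns > FM_T_HIGH_MIN_NS && t_low_ns > FM_T_LOW_MIN_NS;
}

int main(void)
{
    printf("40 ns pulse filtered?    %s\n", expected_to_be_filtered(40.0) ? "yes" : "no");
    printf("Fm clock 700/1400 ns ok? %s\n",
           meets_fm_clock_timing(700.0, 1400.0) ? "yes" : "no");
    return 0;
}
```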
The input signal 802 includes a short pulse 806 that has a duration less than the tSP pulse width. Spike filter 812 operates to prevent short pulse 806 from appearing in output signal 804. In some examples, spike filter 812 is implemented as a resistor-capacitor circuit (RC circuit), and output signal 804 may include a residual component 808 of short pulse 806. The residual component 808 may reach a voltage level below the detection threshold voltage 810 and thus will not be detected by the receiver of legacy I2C devices 306, 308, 310, 312. The residual component 808 may include a time period during which the voltage of the output signal 804 increases, followed by a period during which the voltage of the output signal 804 decreases or decays back to 0V. The combination of maximum voltage and decay time may cause problems for legacy I2C devices 306, 308, 310, 312.In the example illustrated by the timing diagram 820, the residual component 828 on the output signal 824 may be characterized by a rising voltage that begins at time 826 and corresponds to the leading edge of the first short pulse 834, a peak voltage level at time 830 that corresponds to the falling edge of the first short pulse 834, and a slow decay towards 0V. As illustrated in timing diagram 820, the voltage of output signal 824 may not reach 0V before the time 832 at which the leading edge of second short pulse 836 arrives.Timing diagram 840 illustrates the cumulative effect of residual voltage from previous short pulses when short pulses are closely spaced. In this example, a series of pulses 842 are spaced such that the output signal 824 does not return to 0V between successive pulses. For each pulse after the initial pulse, the voltage of the output signal 824 increases from a voltage higher than 0V, and after several pulses the voltage of the output signal 824 may reach a maximum voltage exceeding the detection threshold voltage 810 at time 844. In these cases, legacy I2C devices 306, 308, 310, 312 may determine that a transition has occurred, which has undeterminable consequences.According to certain aspects disclosed herein, an I2C master 304 with enhanced capabilities may be configured to test spike filters of legacy I2C devices 306, 308, 310, 312 to ensure that these spike filters can handle the signaling rates used for communication between enhanced devices 304, 314, 316. In one example, master device 304 may transmit commands according to the I2C protocol while short bursts are transmitted on the bus's SCL wire. These short pulses may be transmitted at minimal intervals so that poorly performing spike filters 812 may not prevent transitions from being detected by legacy I2C devices 306, 308, 310, 312.FIG. 9 is a timing diagram 900 illustrating a first example of a test transmission sent by the master device 304 to determine the capability, efficacy, and/or effectiveness of a spike filter in one or more legacy I2C devices 306, 308, 310, 312 coupled to the master device 304 over the serial bus 302. In this example, the command word illustrated in FIG. 6 is transmitted by the master device 304, and a portion of the command word is illustrated in the timing diagram 900. After master device 304 has transmitted start condition 906, address bits 926, 928, 930 are transmitted on SDA conductor 902, and short pulse series 908a, 908b, 908c may be transmitted at one or more occasions when SCL conductor 904 is in a low logic state. 
For example, as illustrated in enlarged view 918, these short pulses may each have a duration of 40 ns and may be separated by a 40 ns low period, resulting in an 80 ns clock period 920.The SCL wire 904 may be allowed to stabilize before the short pulse trains 908a, 908b, 908c are transmitted. For example, after the I2C transition 922 from a high logic state, the SCL conductor 904 may remain in a low logic state for a first period of time 914 before short pulse series 908c is transmitted. The duration of the first time period 914 may be determined by the I2C specification and may be defined as, for example, 200 ns. The short pulse train 908c may be terminated before the next I2C transition 924 from a low logic state to a high logic state. The SCL conductor 904 remains in the low logic state for a second period of time 916 before the next I2C transition 924 occurs. The duration of the second period 916 may be determined by the I2C specification, and may be defined as 200 ns in one example.The active spike filter coupled to the SCL conductor 904 suppresses each series of pulses 908a, 908b, 908c, whereupon the output of this spike filter includes only I2C transitions (e.g., transitions 922, 924). In the example illustrated in FIG. 9, the output of the active spike filter coupled to the SCL conductor 904 is roughly a square wave with a period of 10 μs.FIG. 10 is a timing diagram 1000 illustrating a second example of a test transmission sent by master device 304 to determine the capability, efficacy, and/or effectiveness of a spike filter in one or more legacy I2C devices 306, 308, 310, 312 coupled to master device 304 over serial bus 302. In this example, the command word illustrated in FIG. 6 is transmitted by the master device 304, and a portion of the command word is illustrated in the timing diagram 1000. After master device 304 has transmitted start condition 1006, address bits 1028, 1030, 1032 are transmitted on SDA conductor 1002, and short pulse series 1008a, 1008b, 1008c may be transmitted at one or more occasions while SCL conductor 1004 is in a high logic state. For example, as illustrated in enlarged view 1020, these short pulses may each have a duration of 40 ns and may be separated by a 40 ns low period, resulting in an 80 ns clock period 1026.The SCL wire 1004 may be allowed to stabilize before short pulse trains 1008a, 1008b, 1008c are transmitted. For example, after the I2C transition 1022 from a low logic state, the SCL conductor 1004 may remain in a high logic state for a first time period 1016 before the short pulse train 1008b is transmitted. The duration of the first time period 1016 may be determined by the I2C specification, and may be defined as, for example, 200 ns. The short pulse train 1008b may be terminated before the next I2C transition 1024 from the high logic state to the low logic state. SCL conductor 1004 remains in a high logic state for a second time period 1018 before the next I2C transition 1024 occurs. The duration of the second time period 1018 may be determined by the I2C specification, and may be defined as 200 ns in one example.The active spike filter coupled to the SCL wire 1004 suppresses each burst series 1008a, 1008b, 1008c, whereupon the output of this spike filter includes only I2C transitions (e.g., transitions 1022, 1024). In the example illustrated in FIG.
Other configurations of signaling may be used as test transmissions to test the efficacy of the spike filters in the legacy I2C devices 306, 308, 310, 312. Test transmissions generally merge disturbances into commands or data words transmitted in accordance with the I2C protocol. These disturbances may include pulses and/or spikes with durations of less than 50 ns. In some examples, a test transmission may disturb both the high and low states of a signal transmitted over the serial bus. For example, an I2C command or data word may be merged with one or more of the series of short pulses 908a, 908b, 908c illustrated in FIG. 9 and with one or more of the series of short pulses 1008a, 1008b, 1008c illustrated in FIG. 10.

The disturbance pulses, such as those provided in the series of short pulses 908a, 908b, 908c, 1008a, 1008b, 1008c, may be adapted to have any desired duty cycle. Some spike filters may become inoperative when the duty cycle is modified to a point at which the spike filter cannot recover sufficiently between pulses. Changing the duty cycle can therefore provide additional information that can be used to determine a maximum clock rate for enhanced operation of the shared bus. The position of the disturbance pulses within the series of short pulses 908a, 908b, 908c, 1008a, 1008b, 1008c may also be adapted or changed to provide additional information related to the operation of the spike filter. For example, changing the location of a disturbance pulse can provide additional information that can be used to determine the maximum clock rate for enhanced operation of the shared bus.

During a spike filter test, messages are transmitted on the SDA wires 902, 1002. The messages may be selected from any command word or other message to which a slave device that is compliant or compatible with the I2C protocol is required to respond. The efficacy of the spike filter in the slave device when disturbance pulses are transmitted on the SCL wires 904, 1004 may be determined based on whether the slave device responds correctly to the message.

An I2C command may be included in the test transmission such that a legacy I2C device 306, 308, 310, 312 responds when the legacy I2C device 306, 308, 310, 312 recognizes the command or data word in accordance with the I2C protocol. The response may include, for example, an acknowledgement transmission and/or a read from or write to a register. When the legacy I2C devices 306, 308, 310, 312 are required to respond to any command or message, such a command or message may be used as the basis for a test transmission. Such a command or message, or a portion thereof, may be referred to herein as a "command word."

FIGS. 9 and 10 illustrate the use of a slave device address call as the command word (see also FIG. 6). According to the I2C specification, the legacy I2C devices 306, 308, 310, 312 are required to respond to the slave device address call. However, any combination of transmissions to which the legacy I2C devices 306, 308, 310, 312 are required to respond may be used as the basis for a test transmission. For example, the disturbance may be merged with a command that follows the slave device address, and/or merged into a data word, following the command, that is to be written to a register within the legacy I2C device 306, 308, 310, 312.
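One way to organize such variations is to describe a test transmission as a set of I2C phases together with the disturbance, if any, to embed while each phase is clocked out. The sketch below is illustrative; the class names, fields, and default values are assumptions made for this example rather than structures defined by the disclosure.

```python
# Illustrative sketch (class and field names are assumptions, not from the disclosure):
# describe a test transmission as I2C phases plus the disturbance to embed in each
# phase. Varying pulse_low_ns changes the duty cycle of the disturbance train, and
# embed_in selects which SCL state carries the pulses.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Disturbance:
    pulse_high_ns: int = 40     # duration of each disturbance pulse
    pulse_low_ns: int = 40      # gap between pulses; vary to change the duty cycle
    embed_in: str = "low"       # "low", "high", or "both" SCL states

@dataclass
class TestTransmission:
    slave_address: int
    command: Optional[int] = None
    data: bytes = b""
    disturbed_phases: Dict[str, Disturbance] = field(default_factory=dict)

    def phases(self):
        names = ["address"]
        if self.command is not None:
            names.append("command")
        if self.data:
            names.append("data")
        return [(name, self.disturbed_phases.get(name)) for name in names]

# Disturb only the address phase, as in FIGS. 9 and 10, with a 50% duty cycle train.
t = TestTransmission(slave_address=0x50,
                     disturbed_phases={"address": Disturbance(40, 40, "low")})
for name, disturbance in t.phases():
    print(name, "disturbed" if disturbance else "clean")
```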
Other transactions can be used to test the efficacy or effectiveness of the spike filter. In one example, the master device 304 may transmit a byte that is to be written to an internal register of a legacy I2C device 306, 308, 310, 312. The master device 304 may read this byte back from the internal register in the same transaction. The master device 304 may repeat the transaction using modified signaling to write and read back a different byte to the same internal register. The modified signaling may have disturbances inserted or merged into one or both wires of the serial bus. If the read-back returns an incorrect byte, the spike filter may be considered ineffective and/or to have failed to produce the desired result of filtering out spikes or short pulses. If the read-back returns the correct byte, the spike filter may be considered effective, and the master device 304 may determine that the targeted legacy I2C device 306, 308, 310, 312 can coexist with the enhanced devices 314, 316.

In some examples, the latter example may be modified such that the master device 304 writes the byte to the internal register without merging short pulses into the signaling. The master device 304 may then transmit signaling that incorporates short pulses to read the byte back from the internal register. Accordingly, there are many variations and combinations of I2C transactions that can be used to test spike filters, whereby, for example, a disturbance in the form of short pulses can be inserted or merged at any point in these I2C transactions.

FIG. 11 is a flowchart illustrating a process for testing spike filters in the legacy I2C devices 306, 308, 310, 312. The master device may test the efficacy of the spike filter in a slave device by sending a series of commands to the slave device under different clock conditions that introduce pulses or other disturbances into the clock signal transmitted on the SCL wires 904, 1004. Each command may be preceded by a start condition or a repeated start condition followed by a 7-bit address and a write command bit. If the master device detects an ACK on the SDA wire 902, 1002 after the master device has transmitted the write bit, the spike filter in the slave device may be considered effective for the current clock condition. The process may employ, for example, the signaling illustrated in FIGS. 9 and 10.

At block 1102, the master device 304 may determine whether there are legacy I2C devices 306, 308, 310, 312 on the bus. The master device 304 may determine the presence of the legacy slave devices 306, 308, 310, 312 by transmitting slave device addresses at clock rates suitable for the slave devices 306, 308, 310, 312. For example, the master device 304 may initially transmit a slave device address at a 1 MHz clock rate (I2C Fm+) and may determine whether an acknowledgement is received from the slave devices 306, 308, 310, 312. An acknowledgement indicates the presence of the slave devices 306, 308, 310, 312. If no acknowledgement is received, the master device 304 may transmit the slave device address at one or more lower clock rates until an acknowledgement is received. The lower clock rates may correspond to clock rates specified by the I2C protocol and may include, for example, the 400 kHz (I2C Fm) and 100 kHz (I2C Sm) clock rates.
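The write-then-read-back check described earlier in this section can be expressed compactly as pseudocode. In the sketch below, the bus object and its methods (set_disturbance, write_register, read_register) are a hypothetical driver interface assumed only for illustration; they are not an API defined by the disclosure or by any particular I2C library.

```python
# Illustrative sketch of a write-then-read-back spike filter check. The `bus`
# object and its methods are hypothetical placeholders for the line-level
# signaling described in the text.

def spike_filter_write_read_check(bus, address, register, disturbance):
    """Return True if the slave still reads back the written byte while the
    signaling carries embedded sub-50 ns disturbance pulses."""
    for pattern in (0xA5, 0x5A):                 # two complementary test bytes
        bus.set_disturbance(None)                # clean write of the reference byte
        bus.write_register(address, register, pattern)

        bus.set_disturbance(disturbance)         # disturbed read-back transaction
        read_back = bus.read_register(address, register)
        bus.set_disturbance(None)

        if read_back != pattern:                 # filter failed to suppress the pulses
            return False
    return True
```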
In one example, the master device 304 initiates the spike filter test at the highest frequency at which the slave devices 306, 308, 310, 312 are capable of operating. For example, the highest frequency may be 1 MHz. In another example, the highest frequency may be 400 kHz. If a slave device 306, 308, 310, 312 fails to respond at its maximum specified frequency, the master device 304 detects a NACK on the SDA wire 902, 1002 and determines that the slave device 306, 308, 310, 312 is defective. As described in the examples related to FIGS. 9 and 10, using a 100 kHz frequency provides longer durations of the high and low signaling states into which disturbances can be inserted or merged. In some examples, the master device 304 may merge short pulses into signaling transmitted at frequencies greater than 100 kHz.

At block 1104, it may be determined whether an acknowledgement was received at block 1102. If the master device 304 detects a NACK on the SDA wire 902, 1002 after transmitting at the lower clock rates, the master device 304 may determine that an error has occurred, and control may be passed to block 1114 for an error handling procedure. Otherwise, the procedure continues at block 1106.

At block 1106, the master device 304 may select a 100 kHz clock rate (which may be generally supported by the legacy slave devices 306, 308, 310, 312). The master device 304 may then transmit the slave device address at the selected clock rate, with short pulses inserted in the low logic state of the SCL wire 904, as illustrated in FIG. 9. The spike filters of the slave devices 306, 308, 310, 312 are expected to suppress these short pulses. When the short pulses are suppressed, the legacy slave devices 306, 308, 310, 312 recognize the slave device address and transmit an acknowledgement to the master device 304. If the spike filters in the legacy slave devices 306, 308, 310, 312 cannot suppress the short pulses, the legacy slave devices 306, 308, 310, 312 may detect additional transitions on the SCL wire 904 and/or may perceive the SCL wire 904 as being stuck in a high state. In either case, the legacy slave device 306, 308, 310, 312 incorrectly decodes the address and does not acknowledge the command.

At block 1108, the master device 304 determines whether an acknowledgement has been received from the legacy slave devices 306, 308, 310, 312. If an acknowledgement has not been received, the master device 304 detects a NACK on the SDA wire 902, 1002 and determines that an error has occurred, and control may be passed to block 1114 for an error handling procedure. Otherwise, the procedure continues at block 1110.

At block 1110, the master device 304 may select a 100 kHz clock rate (which may typically be supported by the legacy slave devices 306, 308, 310, 312). The master device 304 may then transmit the slave device address at the selected clock rate, with short pulses inserted in the high logic state of the SCL wire 1004, as illustrated in FIG. 10. The spike filters of the slave devices 306, 308, 310, 312 are expected to suppress these short pulses. When the short pulses are suppressed, the legacy slave devices 306, 308, 310, 312 recognize the slave device address and transmit an acknowledgement to the master device 304. If the spike filters in the legacy slave devices 306, 308, 310, 312 cannot suppress the short pulses, the legacy slave devices 306, 308, 310, 312 may detect additional transitions on the SCL wire 1004 and/or may perceive the SCL wire 1004 as being stuck in a high state. In either case, the legacy slave device 306, 308, 310, 312 incorrectly decodes the address and does not acknowledge the command.

At block 1112, the master device 304 determines whether an acknowledgement has been received from the legacy slave devices 306, 308, 310, 312. If an acknowledgement has not been received, the master device 304 detects a NACK on the SDA wire 902, 1002 and determines that an error has occurred, and control may be passed to block 1114 for an error handling procedure. Otherwise, it can be inferred with increased confidence that the legacy slave devices 306, 308, 310, 312 have an appropriately designed 50 ns spike suppression filter.
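The overall flow of FIG. 11 can be summarized in a short sketch. As before, the bus object and its methods (probe, send_address_with_pulses) are hypothetical placeholders assumed for illustration, not an interface defined by the disclosure.

```python
# Illustrative sketch of the flow of FIG. 11 (blocks 1102 through 1114). The `bus`
# object and its methods are assumed placeholders for the signaling described above.

LEGACY_RATES_HZ = (1_000_000, 400_000, 100_000)   # Fm+, Fm, Sm probe rates

def test_legacy_spike_filters(bus, address):
    # Block 1102: probe for a legacy device at decreasing clock rates.
    for rate in LEGACY_RATES_HZ:
        if bus.probe(address, clock_hz=rate):      # ACK received: device present
            break
    else:
        return "error: no legacy device acknowledged the address"   # block 1114

    # Blocks 1106/1108: address call at 100 kHz with pulses in the SCL low state.
    if not bus.send_address_with_pulses(address, clock_hz=100_000, embed_in="low"):
        return "error: filter failed with pulses in the low state"  # block 1114

    # Blocks 1110/1112: address call at 100 kHz with pulses in the SCL high state.
    if not bus.send_address_with_pulses(address, clock_hz=100_000, embed_in="high"):
        return "error: filter failed with pulses in the high state" # block 1114

    return "spike filter appears to suppress sub-50 ns pulses"
```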
In some examples, the master device 304 may configure one or more delays for testing purposes. These delays may be implemented to control the durations of the periods of time 914, 916, 1016, 1018 between the initiation or termination of the series of short pulses 908a, 908b, 908c, 1008a, 1008b, 1008c and the transitions 922, 924, 1022, 1024 on the SCL wires 904, 1004. For example, the master device 304 may provide 200 ns periods of time 914, 916, 1016, 1018 when operating at a 100 kHz rate. The durations of the periods of time 914, 916, 1016, 1018 may be configured based on the frequency of the clock transmitted on the SCL wires 904, 1004, or for other reasons.

Other test protocols and signaling combinations may be used to test the spike suppression filters in the legacy slave devices 306, 308, 310, 312. For example, short pulses may be added to both the low logic state and the high logic state on the SCL wires 904, 1004. In another example, the interval and duration of the short pulses may be modified and/or provided in accordance with a predefined pattern.

FIG. 12 is a conceptual diagram illustrating a simplified example of a hardware implementation of an apparatus 1200 employing a processing circuit 1202 that may be configured to perform one or more of the functions disclosed herein. In accordance with various aspects of the disclosure, an element, or any portion of an element, or any combination of elements disclosed herein may be implemented using the processing circuit 1202. The processing circuit 1202 may include one or more processors 1204 that are controlled by some combination of hardware and software modules. Examples of the processor 1204 include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, sequencers, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. The one or more processors 1204 may include specialized processors that perform specific functions and that may be configured, augmented, or controlled by one of the software modules 1216. The one or more processors 1204 may be configured through a combination of software modules 1216 loaded during initialization, and may be further configured by loading or unloading one or more software modules 1216 during operation.

In the illustrated example, the processing circuit 1202 may be implemented with a bus architecture, represented generally by the bus 1210. The bus 1210 may include any number of interconnecting buses and bridges depending on the specific application of the processing circuit 1202 and the overall design constraints. The bus 1210 links together various circuits including the one or more processors 1204 and the storage 1206. The storage 1206 may include memory devices and mass storage devices, and may be referred to herein as computer-readable media and/or processor-readable media. The bus 1210 may also link various other circuits such as timing sources, timers, peripherals, voltage regulators, and power management circuits.
A bus interface 1208 may provide an interface between the bus 1210 and one or more transceivers 1212. A transceiver 1212 may be provided for each networking technology supported by the processing circuit. In some instances, multiple networking technologies may share some or all of the circuitry or processing modules found in a transceiver 1212. Each transceiver 1212 provides a means for communicating with various other apparatus over a transmission medium. Depending upon the nature of the apparatus, a user interface 1218 (e.g., keypad, display, speaker, microphone, joystick) may also be provided, and may be communicatively coupled to the bus 1210 directly or through the bus interface 1208.

A processor 1204 may be responsible for managing the bus 1210 and for general processing that may include the execution of software stored in a computer-readable medium that may include the storage 1206. In this respect, the processing circuit 1202, including the processor 1204, may be used to implement any of the methods, functions, and techniques disclosed herein. The storage 1206 may be used for storing data that is manipulated by the processor 1204 when executing software, and the software may be configured to implement any one of the methods disclosed herein.

One or more processors 1204 in the processing circuit 1202 may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, algorithms, and the like, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The software may reside in computer-readable form in the storage 1206 or in an external computer-readable medium. The external computer-readable medium and/or storage 1206 may include a non-transitory computer-readable medium. A non-transitory computer-readable medium includes, by way of example, a magnetic storage device (e.g., hard disk, floppy disk, magnetic strip), an optical disk (e.g., a compact disc (CD) or a digital versatile disc (DVD)), a smart card, a flash memory device (e.g., a "flash drive," a card, a stick, or a key drive), a random access memory (RAM), a read only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), a register, a removable disk, and any other suitable medium for storing software and/or instructions that may be accessed and read by a computer. The computer-readable medium and/or storage 1206 may also include, by way of example, a carrier wave, a transmission line, and any other suitable medium for transmitting software and/or instructions that may be accessed and read by a computer. The computer-readable medium and/or storage 1206 may reside in the processing circuit 1202, in the processor 1204, external to the processing circuit 1202, or be distributed across multiple entities including the processing circuit 1202. The computer-readable medium and/or storage 1206 may be embodied in a computer program product. By way of example, a computer program product may include a computer-readable medium in packaging materials.
Those skilled in the art will recognize how best to implement the described functionality presented throughout this disclosure depending on the particular application and the overall design constraints imposed on the overall system.

The storage 1206 may maintain software maintained and/or organized in loadable code segments, modules, applications, programs, and the like, which may be referred to herein as software modules 1216. Each of the software modules 1216 may include instructions and data that, when installed or loaded on the processing circuit 1202 and executed by the one or more processors 1204, contribute to a run-time image 1214 that controls the operation of the one or more processors 1204. When executed, certain instructions may cause the processing circuit 1202 to perform functions in accordance with certain methods, algorithms, and processes described herein.

Some of the software modules 1216 may be loaded during initialization of the processing circuit 1202, and these software modules 1216 may configure the processing circuit 1202 to enable performance of the various functions disclosed herein. For example, some software modules 1216 may configure internal devices and/or logic circuits 1222 of the processor 1204 and may manage access to external devices such as the transceiver 1212, the bus interface 1208, the user interface 1218, timers, mathematical coprocessors, and so on. The software modules 1216 may include a control program and/or an operating system that interacts with interrupt handlers and device drivers and that controls access to various resources provided by the processing circuit 1202. The resources may include memory, processing time, access to the transceiver 1212, the user interface 1218, and so on.

One or more processors 1204 of the processing circuit 1202 may be multifunctional, whereby some of the software modules 1216 are loaded and configured to perform different functions or different instances of the same function. The one or more processors 1204 may additionally be adapted to manage background tasks initiated in response to inputs from, for example, the user interface 1218, the transceiver 1212, and device drivers. To support the performance of multiple functions, the one or more processors 1204 may be configured to provide a multitasking environment, whereby each of a plurality of functions is implemented as a set of tasks serviced by the one or more processors 1204 as needed or desired. In one example, the multitasking environment may be implemented using a timesharing program 1220 that passes control of a processor 1204 between different tasks, whereby each task returns control of the one or more processors 1204 to the timesharing program 1220 upon completion of any outstanding operations and/or in response to an input such as an interrupt. When a task has control of the one or more processors 1204, the processing circuit is effectively specialized for the purposes addressed by the function associated with the controlling task. The timesharing program 1220 may include an operating system, a main loop that transfers control on a round-robin basis, a function that allocates control of the one or more processors 1204 in accordance with a prioritization of the functions, and/or an interrupt-driven main loop that responds to external events by providing control of the one or more processors 1204 to a handling function.

FIG. 13 includes a flowchart 1300 illustrating a method for detecting the capabilities of a device coupled to a serial bus.
Various steps of the method may be performed by the master device 304 coupled to the serial bus.

At block 1302, the master device 304 may generate a command to be transmitted on the serial bus in accordance with the I2C protocol. The command may include an address corresponding to a first slave device.

At block 1304, the master device 304 may combine the command with a pulse train to obtain a test signal. Each pulse in the pulse train may have a duration of less than 50 ns. In one example, the pulse train may be incorporated into each of a plurality of intervals when a clock signal transmitted on the serial bus is in a low state. In another example, the pulse train may be incorporated into each of a plurality of intervals when the clock signal transmitted on the serial bus is in a high state. In a further example, the pulse train may be incorporated into each of a plurality of intervals when the clock signal transmitted on the serial bus is in a low state and into each of a plurality of intervals when the clock signal transmitted on the serial bus is in a high state. In various examples, each pulse in the pulse train may have a high state that is 40 ns in duration.

At block 1306, the master device 304 may transmit the test signal on the serial bus.

At block 1308, the master device 304 may determine the efficacy of a spike filter in the first slave device based on whether the first slave device responds correctly to the command. The spike filter is expected to suppress pulses having durations of less than 50 ns. The first slave device may respond correctly to the command by acknowledging the command. The master device 304 may determine the efficacy of the spike filter by causing a first value to be written to a register of the first slave device, causing a second value to be read from the register of the first slave device, and determining that the spike filter is effective when the first value equals the second value.

In some examples, the master device 304 may determine that the first device is present by transmitting the command at one or more clock frequencies without the embedded pulse train. The first device may be configured to acknowledge the command when the first device is present on the serial bus and is adapted to communicate using at least one of the one or more clock frequencies. For example, the master device 304 may transmit the command at 400 kHz to determine whether the addressed slave device is operating normally. If no response is received or detected, the master device 304 may consider the addressed slave device to be absent, defective, or otherwise malfunctioning. If the addressed slave device responds correctly, the master device 304 may test the spike filter by embedding short pulses in a command word transmitted at a clock frequency corresponding to the lowest of the one or more clock frequencies.

In some examples, the pulse train is transmitted on the SCL wire of the serial bus. In some examples, the pulse train is transmitted on the SDA wire of the serial bus. In some examples, a first pulse train is transmitted on the SCL wire of the serial bus and a second pulse train is transmitted on the SDA wire of the serial bus.
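A minimal sketch of this method, organized around blocks 1302 through 1308, is shown below. The helper functions and the bus transport calls (transmit, acknowledged) are hypothetical stand-ins for line-level signaling and are not APIs taken from the disclosure.

```python
# Illustrative sketch structured around blocks 1302-1308 of FIG. 13. The transport
# helpers are assumed placeholders for this example only.

def generate_command(slave_address):                      # block 1302
    return {"start": True, "address": slave_address, "rw": 0}

def combine_with_pulse_train(command, pulse_high_ns=40,   # block 1304
                             pulse_low_ns=40, embed_in="low"):
    return {"frame": command,
            "scl_disturbance": {"high_ns": pulse_high_ns,
                                "low_ns": pulse_low_ns,
                                "embed_in": embed_in}}

def spike_filter_effective(bus, slave_address):
    test_signal = combine_with_pulse_train(generate_command(slave_address))
    response = bus.transmit(test_signal)                  # block 1306
    return bus.acknowledged(response)                     # block 1308
```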
FIG. 14 is a diagram illustrating a simplified example of a hardware implementation of an apparatus 1400 employing a processing circuit 1402. The processing circuit typically has a processor 1416, which may include one or more of a microprocessor, a microcontroller, a digital signal processor, a sequencer, and a state machine. The processing circuit 1402 may be implemented with a bus architecture, represented generally by the bus 1420. The bus 1420 may include any number of interconnecting buses and bridges depending on the specific application of the processing circuit 1402 and the overall design constraints. The bus 1420 links together various circuits including one or more processors and/or hardware modules, represented by the processor 1416, the modules or circuits 1404, 1406, and 1408, a line interface circuit 1412 configurable to communicate over a serial bus 1414 that includes a plurality of connectors or wires, and the computer-readable storage medium 1418. The bus 1420 may also link various other circuits, such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore will not be described further.

The processor 1416 is responsible for general processing, including the execution of software stored on the computer-readable storage medium 1418. The software, when executed by the processor 1416, causes the processing circuit 1402 to perform the various functions described above for any particular apparatus. The computer-readable storage medium 1418 may also be used for storing data that is manipulated by the processor 1416 when executing software, including data decoded from symbols transmitted over the serial bus 1414. The processing circuit 1402 further includes at least one of the modules 1404, 1406, and 1408. The modules 1404, 1406, and 1408 may be software modules running in the processor 1416, resident/stored in the computer-readable storage medium 1418, one or more hardware modules coupled to the processor 1416, or some combination thereof. The modules 1404, 1406, and 1408 may include microcontroller instructions, state machine configuration parameters, or some combination thereof.

In one configuration, the apparatus 1400 includes a module and/or circuit 1410 configured to generate a command to be transmitted on the serial bus 1414, a module and/or circuit 1406 configured to combine the command with a pulse train to obtain a test signal, a module and/or circuit 1408 configured to transmit the test signal on the serial bus 1414, and modules and/or circuits 1404, 1408, 1416 configured to determine the efficacy of a spike filter in a first slave device based on whether the first slave device acknowledges the test signal.

It should be understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. It should be understood that, based upon design preferences, the specific order or hierarchy of steps in the processes may be rearranged. The accompanying method claims present elements of the various steps in a sample order and are not meant to be limited to the specific order or hierarchy presented.

The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed as a means plus function unless the element is expressly recited using the phrase "means for." |
A method includes forming a group of first structures on a semiconductor device and forming spacers adjacent side surfaces of each of the first structures to form a group of second structures. The method further includes using the group of second structures to form at least one sub-lithographic opening in a material layer located below the group of second structures. |
What is claimed is:1. A method comprising:forming a first material layer on a semiconductor device;forming a photoresist layer;patterning the photoresist layer to form at least two masks;forming polymer spacers on adjacent side surfaces of the at least two masks using a series of intermittent sputter deposition and etch processes; andetching the first material layer using the at least two masks and the polymer spacers to create at least one sub-lithographic space in the first material layer.2. The method of claim 1 wherein the first material layer includes a hard mask layer.3. The method of claim 2 further comprising:using the etched hard mask layer to etch a second material layer beneath the hard mask layer.4. The method of claim 2 wherein the hard mask layer comprises at least one of amorphous carbon or a low-k dielectric material, andwherein the forming the hard mask layer includes:forming the hard mask layer to a thickness ranging from about 100 Ȧ to about 2,000 Ȧ.5. The method of claim 1 wherein the first material layer includes an antireflective coating, a polycrystalline silicon material, or a dielectric material.6. The method of claim 1 wherein forming polymer spacers includes:forming the polymer spacers to a height ranging from about 100 Ȧ to about 1,000 Ȧ and a width ranging from about 50 Ȧ to about 1,000 Ȧ.7. The method of claim 1 wherein forming polymer spacers includes:performing the series of intermittent sputter deposition and etch processes in a same etch chamber.8. The method of claim 1 wherein the series of intermittent sputter deposition and etch processes is performed at a temperature below 100[deg.] C.9. The method of claim 1 wherein forming polymer spacers includes:using a first chemistry for depositing the polymer material, the first chemistry including at least one of CH2F2, CH3F, C4F6, or C4F8.10. The method of claim 1 wherein a width of the sub-lithographic space ranges from about 50 Ȧ to about 1,000 Ȧ.11. A method comprising:forming a plurality of first structures on a semiconductor device;forming spacers adjacent side surfaces of each of the plurality of first structures using a series of intermittent sputter deposition and etch processes to form a plurality of second structures; andusing the plurality of second structures to form at least one sub-lithographic opening in a material layer located below the plurality of second structures.12. The method of claim 11 wherein the plurality of first structures comprises photoresist masks.13. The method of claim 11 wherein the series of intermittent sputter deposition and etch processes is performed at a temperature below 100[deg.] C.14. The method of claim 11 wherein forming spacers includes:depositing a spacer material on the semiconductor device in an etch chamber using a first chemistry, andetching the spacer material in the etch chamber to form the spacers using a second chemistry.15. The method of claim 11 wherein the using includes:using the plurality of second structures as a mask, andetching the material layer located below the plurality of second structures to form the at least one sub-lithographic opening to a width ranging from about 50 Ȧ to about 1,000 Ȧ.16. 
A method for forming an opening in a semiconductor device, the method comprising:forming a plurality of first structures on a semiconductor device;forming spacers adjacent side surfaces of each of the plurality of first structures using a series of intermittent sputter deposition and etch processes to form a plurality of second structures; andusing the plurality of second structures as a mask to form the opening in a material layer located below the plurality of second structures, the opening being formed to a width less than approximately 100 nm.17. The method of claim 16 wherein the series of intermittent sputter deposition and etch processes is performed in a same etch chamber. |
FIELD OF THE INVENTION
Implementations consistent with the principles of the invention relate generally to semiconductor manufacturing and, more particularly, to forming sub-lithographic spaces in semiconductor devices.
BACKGROUND OF THE INVENTION
The escalating demands for high density and performance associated with non-volatile memory devices require small design features, high reliability and increased manufacturing throughput. The reduction of design features, however, challenges the limitations of conventional methodology. For example, currently, lithography is limited in its ability to print spaces (or contacts) less than 100 nanometers (nm) in width or diameter. There exists a need to print spaces (or contacts) that are beyond lithographic capabilities.
SUMMARY OF THE INVENTION
In an implementation consistent with the principles of the invention, a method includes forming a first material layer on a semiconductor device; forming a photoresist layer; patterning the photoresist layer to form at least two masks; forming polymer spacers on adjacent side surfaces of the at least two masks; and etching the first material layer using the at least two masks and the polymer spacers to create at least one sub-lithographic space in the first material layer. In another implementation consistent with the principles of the invention, a method includes forming a group of first structures on a semiconductor device and forming spacers adjacent side surfaces of each of the first structures to form a group of second structures. The method further includes using the group of second structures to form at least one sub-lithographic opening in a material layer located below the group of second structures. In yet another implementation consistent with the principles of the invention, a method for forming an opening in a semiconductor device is provided. The method includes forming a group of first structures on a semiconductor device; forming spacers adjacent side surfaces of each of the group of first structures to form a group of second structures; and using the group of second structures as a mask to form an opening in a material layer located below the group of second structures. The opening is formed to a width less than approximately 100 nm.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, explain the invention. In the drawings, FIG. 1 illustrates an exemplary process for forming a semiconductor memory device in an implementation consistent with the principles of the invention; FIGS. 2-5 illustrate exemplary views of a semiconductor device fabricated according to the processing described in FIG. 1; and FIGS. 6-17 illustrate exemplary views of a semiconductor memory device fabricated using the processing described in FIG. 1.
DETAILED DESCRIPTION
The following detailed description of implementations consistent with the principles of the invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and their equivalents.
Exemplary Processing
FIG. 1 illustrates an exemplary process for forming sub-lithographic spaces for a semiconductor device in an implementation consistent with the principles of the invention.
In one implementation, the semiconductor device may include a flash memory device. FIGS. 2-5 illustrate exemplary views of a semiconductor device fabricated according to the processing described in FIG. 1.With reference to FIGS. 1 and 2, processing may begin with a semiconductor device 200 that includes a layer 210 (or an area 210). In one implementation, layer 210 may comprise any material in which one or more sub-lithographic spaces are to be formed. For example, layer 210 may comprise a hard mask material, such as amorphous carbon, a low-k dielectric (e.g., SiLK), SiO2, or SiOC. Alternatively, layer 210 may comprise a dielectric anti-reflective coating (ARC), a polysilicon material, or other materials. Layer 210 may have a thickness ranging from about 100 Ȧ to about 2,000 Ȧ.A photoresist material may be patterned and etched to form masks 220 on the top surface of layer 210 (act 105). Masks 220 may be used to facilitate etching layer 210, as described in more detail below. The width of each mask 220 may range from about 800 Ȧ to about 2,000 Ȧ. In addition, the pitch (i.e., the center-to-center distance between masks 220) may range from about 1,200 Ȧ to about 2,000 Ȧ.An in-situ polymer spacer deposition and etch may be performed (acts 110 and 115). For example, a polymer material 310 may be deposited on semiconductor device 200, as illustrated in FIG. 3 (act 110). Polymer material 310 may be deposited to a thickness ranging from about 100 Ȧ to about 1,000 Ȧ. In one implementation, polymer material 310 may be deposited in an etch chamber (e.g., a high density plasma reactor) using a known chemistry, such as CH2F2/HBr, CH3F/HBr, combinations of, for example, C4F6, CH3F, and C4F8, and/or any other polymerizing chemistry. For example, when a CH2F2/HBr chemistry is used, CH2F2 may be provided at a flow rate ranging from about 20 standard cubic centimeters per minute (sccm) to about 200 sccm and HBr may be provided at a flow rate ranging from about 20 sccm to about 200 sccm. Similar flow rates may be used when other polymerizing chemistries are used.In one implementation, the temperature in the etch chamber may be maintained below 100[deg.] C. (or other acceptable temperature) during the polymer deposition process to keep masks 220 from flowing or otherwise distorting. In one implementation, the temperature in the etch chamber may range from about 20[deg.] C. to about 70[deg.] C. To optimize polymer coverage over masks 220, the polymer deposition process may include a series of intermittent sputter deposition and etch processes in other implementations consistent with the principles of the invention. The above polymer deposition techniques allow for polymer material 310 to be deposited directly on photoresist masks 220 without the need for an intermediate hard mask etch.Polymer material 310 may be etched to form polymer spacers 410 adjacent side surfaces of masks 220, as illustrated in FIG. 4 (act 115). In one implementation, polymer spacers 410 may be formed to a height ranging from about 100 Ȧ to about 2,000 Ȧ and a width ranging from about 50 Ȧ to about 1,000 Ȧ. A distance D between a spacer 410 associated with a mask 220 and a spacer associated with an adjacent mask 220 may range from about 300 Ȧ to about 700 Ȧ.In one implementation consistent with the principles of the invention, the etch of polymer material 310 to form spacers 410 may be performed in the same etch chamber as the polymer deposition process by changing the chemistry that is used. 
The etch chemistry may include, for example, organic etch chemistries, such as HBr/O2, CF4/O2, CO/O2, etc. For example, when a HBr/O2 chemistry is used, HBr may be provided at a flow rate ranging from about 20 sccm to about 200 sccm and O2 may be provided at a flow rate ranging from about 20 sccm to about 200 sccm. Similar flow rates may be used when other chemistries are used. During the polymer etch, the etch chamber may be maintained at a temperature ranging from about 20[deg.] C. to about 70[deg.] C.Layer 210 may then be etched to form spaces 510 in layer 210 that are smaller than lithographic limits, as illustrated in FIG. 5 (act 120). Spaces 510 may be formed to a width that may substantially correspond to distance D between adjacent spacers 410 (i.e., a distance ranging from about 300 Ȧ to about 700 Ȧ). The formation of spaces 510 causes structures 520 to be formed. In one implementation, structures 520 may be used as hard mask structures for subsequent etching of one or more layers beneath layer 210. In this way, patterning may be performed with dimensions that are not possible with conventional lithographic techniques.The following example illustrates the above processing. FIG. 6 illustrates the cross-section of a semiconductor device 600 formed in accordance with an embodiment of the invention. In one implementation, semiconductor device 600 may include an electrically erasable programmable read only memory (EEPROM), such as a flash memory device. Referring to FIG. 6, semiconductor device 600 may include layers 610, 620, 630, 640 and 650. In an exemplary embodiment, layer 610 may be a substrate of semiconductor device 600 and may include silicon, germanium, silicon-germanium, or other semiconducting materials. In alternative implementations, layer 610 may be a conductive layer or a dielectric layer formed a number of layers above the surface of a substrate in semiconductor device 600.Layer 620 may be a dielectric layer formed on layer 610 in a conventional manner. In an exemplary implementation, dielectric layer 620 may include an oxide, such as a silicon oxide (e.g., SiO2), and may have a thickness ranging from about 20 Ȧ to about 100 Ȧ. Dielectric layer 620 may function as a tunnel oxide layer for a subsequently formed memory cell of semiconductor device 600.Layer 630 may be formed on layer 620 in a conventional manner and may include a dielectric material, such as a nitride (e.g., a silicon nitride) or an oxynitride. Layer 630, consistent with the invention, may act as a charge storage layer for semiconductor device 600 and may have a thickness ranging from about 50 Ȧ to about 150 Ȧ. In alternative implementations, layer 630 may include a conductive material, such as polycrystalline silicon, used to form a floating gate electrode.Layer 640 may be formed on layer 630 in a conventional manner and may include a dielectric material, such as an oxide (e.g., SiO2). Alternatively, layer 640 may include a material having a high dielectric constant (K), such as Al2O3 or HfO2, that may be deposited or thermally grown on layer 630. In still other alternatives, layer 640 may be a composite that includes a number of dielectric layers or films. Layer 640 may have a thickness ranging from about 50 Ȧ to about 200 Ȧ and may function as an inter-gate dielectric for memory cells in semiconductor device 600.Layer 650 may include a conductive material, such as polycrystalline silicon, formed on layer 640 in a conventional manner. 
Alternatively, layer 650 may include other semiconducting materials, such as germanium or silicon-germanium, or various metals, such as titanium or tungsten. Layer 650, consistent with an implementation of the invention, may be used to form one or more control gate electrodes for one or more memory cells in semiconductor device 600. In an exemplary implementation, layer 650 may have a thickness ranging from about 500 Ȧ to about 2,000 Ȧ. An optional silicide layer, such as titanium silicide (not shown), may be formed on layer 650. A photoresist material may be patterned and etched to form masks 660 on the top surface of layer 650, as illustrated in FIG. 6. Masks 660 may be used to facilitate formation of one or more memory cells in semiconductor device 600, as described in more detail below. Semiconductor device 600 may then be etched, as illustrated in FIG. 7. Referring to FIG. 7, layers 620-650 may be etched in a conventional manner with the etching terminating at substrate 610, thereby forming structures 710. Alternatively, the etching may terminate at another layer, such as layer 640, followed in some implementations by additional etching, to form structures 710. Each structure 710 (also referred to herein as a memory cell 710) may represent a memory cell of semiconductor device 600, where each memory cell 710 includes a dielectric layer 620, a charge storage layer 630, an inter-gate dielectric layer 640, and a control gate 650. Only two memory cells 710 are illustrated in semiconductor device 600 in FIG. 7 for simplicity. It should be understood that semiconductor device 600 may typically include a memory array including a large number of memory cells 710. In an exemplary implementation consistent with the invention, each memory cell 710 may be a SONOS-type memory cell, with a silicon control gate electrode 650 formed on an oxide-nitride-oxide (ONO) stack (i.e., layers 640, 630, and 620), with nitride layer 630 acting as a charge storage layer, and the ONO stack being formed on a silicon substrate 610. Source and drain regions 720 and 730 may then be formed in substrate 610, as illustrated in FIG. 7. For example, n-type or p-type impurities may be implanted in substrate 610 to form source and drain regions 720 and 730, based on the particular end device requirements. The particular implantation dosages and energy used to form source and drain regions 720 and 730 may be selected based on the particular end device requirements. One of ordinary skill in the art would be able to optimize the source/drain implantation process based on the particular circuit requirements. It should also be understood that source region 720 and drain region 730 may alternatively be formed at other points in the fabrication process of semiconductor device 600. For example, sidewall spacers may be formed prior to the source/drain ion implantation to control the location of the source/drain junctions based on the particular circuit requirements. Photoresist masks 660 may be removed using a conventional process. Spacers 810 may be formed adjacent the sidewalls of the memory cells 710, as illustrated in FIG. 8. For example, a dielectric material, such as a silicon oxide, a silicon nitride, a silicon oxynitride or another dielectric material, may be deposited and etched to form spacers 810 on each side of memory cells 710, as illustrated in FIG. 8. Spacers 810 may be used to electrically isolate adjacent memory cells 710 from each other.
Spacers 810 may also be used to facilitate the deposition of impurities in semiconductor device 600.An interlayer dielectric (ILD) layer 910 may be deposited over semiconductor device 600, as illustrated in FIG. 9. In an exemplary implementation, ILD layer 910 may include a phosphosilicate glass (PSG) material, a boro-phosphosilicate glass (BPSG) material, an oxide, or some other dielectric material. The thickness of ILD 910 may range from about 1,000 Ȧ to about 10,000 Ȧ.ILD 910 may optionally be planarized using a conventional process, such as a chemical-mechanical polishing (CMP) process, as illustrated in FIG. 10. Referring to FIG. 10, the CMP process may planarize the top surface of ILD 910 to facilitate formation of subsequent structures, such as interconnect lines. ILD 910, consistent with the invention, may represent an ILD located closest to substrate 610. In alternative implementations, ILD 910 may represent an interlayer dielectric formed a number of layers above the surface of substrate 610. In each case, ILD 910 may function to isolate various conductive structures, such as various interconnect lines described below, or to isolate source region 720 or drain region 730 from other conductive structures.A hard mask layer 1110 may be formed over semiconductor device 600, as illustrated in FIG. 11. In an exemplary implementation, hard mask layer 1110 may comprise amorphous carbon, SiLK, SiO2, SiOC, and/or some other hard mask material. The thickness of hard mask layer 1110 may range from about 100 Ȧ to about 2,000 Ȧ.A photoresist material may be patterned and etched to form masks 1120 on the top surface of layer 1110. Masks 1120 may be used to facilitate etching layer 1110, as described in more detail below. The width of each mask 1120 may range from about 800 Ȧ to about 2,000 Ȧ. In addition, the pitch (i.e., the center-to-center distance between masks 1120) may range from about 1,200 Ȧ to about 2,000 Ȧ.An in-situ polymer spacer deposition and etch may be performed. For example, a polymer material 1210 may be deposited on semiconductor device 600, as illustrated in FIG. 12. Polymer material 1210 may be deposited to a thickness ranging from about 100 Ȧ to about 1,000 Ȧ. As set forth above, polymer material 1210 may be deposited in an etch chamber (e.g., a high density plasma reactor) using a known chemistry, such as CH2F2/HBr, CH3F/HBr, combinations of, for example, C4F6, CH3F, and C4F8, and/or any other polymerizing chemistry at a temperature ranging from about 20[deg.] C. to about 70[deg.] C. Similar flow rates as that described above with respect to FIG. 3 may also be used.Polymer material 1210 may be etched to form polymer spacers 1310 adjacent side surfaces of masks 1120, as illustrated in FIG. 13. In one implementation, polymer spacers 1310 may be formed to a height ranging from about 100 Ȧ to about 2,000 Ȧ and a width ranging from about 50 Ȧ to about 1,000 Ȧ. A distance between a spacer 1310 associated with a mask 1120 and a spacer associated with an adjacent mask 1120 may range from about 200 Ȧ to about 800 Ȧ.As set forth above, the etching of polymer material 1210 to form spacers 1310 may be performed in the same etch chamber as the polymer deposition process by changing the chemistry that is used. The etch chemistry may include, for example, organic etch chemistries, such as HBr/O2, CF4/O2, CO/O2, etc. The etch chamber may be maintained at a temperature ranging from about 20[deg.] C. to about 70[deg.] C. during the etching of polymer material 1210. 
In addition, flow rates similar to those described above with respect to FIG. 4 may be used. Hard mask layer 1110 may then be etched to form a space in layer 1110 that is smaller than lithographic limits, as illustrated in FIG. 14. The space between structures 1410 may be formed to a width ranging from about 200 Ȧ to about 800 Ȧ. The etching causes structures 1410 to be formed on either side of the space. Structures 1410 may be used as hard mask structures for subsequent etching of semiconductor device 600. In this way, patterning may be performed with dimensions that are not possible with conventional lithographic techniques. Masks 1120 and spacers 1310 may be removed in a conventional manner, as illustrated in FIG. 15. Hard mask structures 1410 may be used to form a trench 1610 in ILD 910, as illustrated in FIG. 16, using a well-known etching technique, with the etching terminating at substrate 610. The width of trench 1610 may range from about 200 Ȧ to about 800 Ȧ. Trench 1610 may be used to form a contact to source region 720 or drain region 730. Next, hard mask structures 1410 may be removed and a metal layer 1710, such as copper or aluminum, may be deposited to fill trench 1610, as illustrated in FIG. 17. Metal layer 1710 may represent a contact to, for example, source region 720 and/or drain region 730. While the above example focuses on forming a space for a contact that is below lithographic limits, it will be appreciated that the above techniques can be performed for forming sub-lithographic spaces for other reasons in one or more underlying material layers. For example, the techniques described above can also be used for forming the gate stack. Thus, in implementations consistent with the principles of the invention, polymer spacers are formed on photoresist masks to form spaces that are smaller than lithographic techniques permit.
CONCLUSION
The foregoing description of exemplary embodiments of the invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, in the above descriptions, numerous specific details are set forth, such as specific materials, structures, chemicals, processes, etc., in order to provide a thorough understanding of the present invention. However, implementations consistent with the invention can be practiced without resorting to the details specifically set forth herein. In other instances, well known processing structures have not been described in detail, in order not to unnecessarily obscure the thrust of the present invention. In practicing the present invention, conventional deposition, photolithographic and etching techniques may be employed, and hence, the details of such techniques have not been set forth herein in detail. While a series of acts has been described with regard to FIG. 1, the order of the acts may be varied in other implementations consistent with the invention. Moreover, non-dependent acts may be implemented in parallel. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items. Where only one item is intended, the term "one" or similar language is used.
Further, the phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise. |
One or more examples relate to a detector. The detector is configured such that the sensed signal is a differential value. Such a difference value may indicate a difference in self-capacitance indications exhibited at the first internal capacitor and the second internal capacitor. Such a differential value may be proportional to a relationship between a first material and a second material present at a device under test coupled to an electrode of the detector. Such a differential value may be proportional to a vertical height of a surface of a material present at a device under test coupled to an electrode of the detector. The difference in coupling capacitance may be obtained by performing a complementary acquisition process with a symmetric capacitance sensor. When the acquisition processes are performed substantially simultaneously, a coupling error indication that may be present in the self-capacitance indication is not present in the differential value. |
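As a rough illustration of why a differential acquisition can cancel a common coupling error, the following sketch models each acquisition as simple charge sharing between an internal capacitor and the electrode's external capacitance. The circuit topology, component values, and variable names are assumptions made only for this sketch; they are not the disclosed sensor design.

```python
# Illustrative charge-sharing model (not the circuit of the disclosure) of the
# complementary acquisition described above. Sensor 1 charges its internal
# capacitor to Vref and shares charge with a discharged external (electrode)
# capacitance; sensor 2 discharges its internal capacitor and shares charge with
# an external capacitance charged to Vref. A disturbance that injects the same
# charge onto both electrodes shifts both measurements in the same direction and,
# in this symmetric example, cancels exactly in the differential value.

def shared_voltage(q_internal, q_external, c_internal, c_external):
    """Voltage after connecting the two capacitors (total charge / total capacitance)."""
    return (q_internal + q_external) / (c_internal + c_external)

def differential_acquisition(cx1, cx2, ci=10e-12, vref=1.0, q_noise=0.0):
    v1 = shared_voltage(ci * vref, q_noise, ci, cx1)          # internal charged, external discharged
    v2 = shared_voltage(0.0, cx2 * vref + q_noise, ci, cx2)   # internal discharged, external charged
    return v1 - v2

clean = differential_acquisition(cx1=2.0e-12, cx2=2.0e-12)
noisy = differential_acquisition(cx1=2.0e-12, cx2=2.0e-12, q_noise=0.5e-12)
print(f"differential value without disturbance:        {clean:.4f} V")
print(f"differential value with common injected charge: {noisy:.4f} V")
```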
1. A device comprising: a detector, wherein a signal that the detector is configured to sense is a differential value indicative of a difference in self-capacitance indications exhibited at a first internal capacitor and a second internal capacitor, wherein the differential value is proportional to a vertical height of a surface of a material present at a device under test coupled to electrodes of the detector.2. The device of claim 1, comprising: a first sensor configured to generate a first voltage indicative of a first self-capacitance of a corresponding electrode; and a second sensor configured to generate a second voltage indicative of a second self-capacitance of a corresponding electrode.3. The device of claim 2, wherein: the first sensor is configured to change a voltage level of the first voltage at least in part in response to a change in the first self-capacitance; and the second sensor is configured to change a voltage level of the second voltage at least in part in response to a change in the second self-capacitance.4. The device of claim 2, wherein the first sensor and the second sensor are configured to provide a substantially symmetrical response to capacitive coupling with the material present at the device under test.5. The device of claim 2, wherein the first sensor and the second sensor are configured to provide a substantially symmetrical response to changes in coupling between the material present at the device under test and the first sensor and the second sensor.6. The device of claim 5, wherein the substantially symmetrical response that the first sensor and the second sensor are configured to provide includes a response to a change in coupling between the material present at the device under test and the first sensor and the second sensor that results at least in part from a change in a dielectric property of the material.7. The device of claim 2, wherein: the first sensor includes: a first acquisition circuit configured to generate the first voltage at the first internal capacitor; and a first measurement circuit configured to generate a first value indicative of a voltage level exhibited by the first voltage, and the second sensor includes: a second acquisition circuit configured to generate the second voltage at the second internal capacitor; and a second measurement circuit configured to generate a second value indicative of a voltage level exhibited by the second voltage.8. The device of claim 7, wherein: the first measurement circuit comprises a first analog-to-digital converter arranged to measure the first voltage generated at the first internal capacitor; and the second measurement circuit comprises a second analog-to-digital converter arranged to measure the second voltage generated at the second internal capacitor.9. The device of claim 2, comprising: a sample-and-hold circuit; an analog-to-digital converter; and a processor configured to control the sample-and-hold circuit and the analog-to-digital converter to alternately measure the first voltage generated at the first internal capacitor and the second voltage generated at the second internal capacitor.10. 
10. The device of claim 2, comprising: a first conductive line disposed around the corresponding electrode of the first sensor; a second conductive line disposed around the corresponding electrode of the second sensor; a first driven guard electrode electrically coupled to the first conductive line and configured to receive a first guard voltage; and a second driven guard electrode electrically coupled to the second conductive line and configured to receive a second guard voltage.
11. A method, the method comprising: obtaining a differential value indicative of a difference in self-capacitance indications of electrodes exhibited at a first internal capacitor and a second internal capacitor; and inferring, at least in part in response to the differential value, a vertical height of a surface of material present at a device under test coupled to the electrodes.
12. The method of claim 11, comprising: performing a first acquisition process to obtain a first self-capacitance indication of a first one of the electrodes; and performing a second acquisition process to obtain a second self-capacitance indication of a second one of the electrodes.
13. The method of claim 12, wherein performing the first acquisition process comprises: charging the first internal capacitor to a reference voltage; discharging a first external capacitor, the first external capacitor associated with the first one of the electrodes; coupling the first internal capacitor and the first external capacitor; and measuring a voltage exhibited by the first internal capacitor.
14. The method of claim 12, wherein performing the second acquisition process comprises: discharging the second internal capacitor; charging a second external capacitor to a reference voltage, the second external capacitor associated with the second one of the electrodes; coupling the second internal capacitor and the second external capacitor; and measuring a voltage exhibited by the second internal capacitor.
15. The method of claim 12, wherein performing the first acquisition process and the second acquisition process comprises: performing the first acquisition process with a first sensor and performing the second acquisition process with a second sensor; and performing the second acquisition process with the first sensor and performing the first acquisition process with the second sensor.
16. The method of claim 12, wherein the first acquisition process and the second acquisition process are performed substantially simultaneously.
17. The method of claim 12, wherein the first self-capacitance indication and the second self-capacitance indication are generated substantially simultaneously.
18. A device comprising: a detector, wherein a signal that the detector is configured to sense is a differential value that is an indication of a coupling capacitance exhibited by two corresponding electrodes, wherein the indication of the coupling capacitance is proportional to a relationship between a first material and a second material present at a device under test coupled to the two corresponding electrodes of the detector.
19. A device comprising: a detector, wherein a signal that the detector is configured to sense is a change in self-capacitance of an electrode that includes only a change in self-capacitance due to a change in capacitive coupling between the electrode and a material of interest.
20. A device comprising: a detector, wherein a signal that the detector is configured to sense is a change in self-capacitance of an electrode that includes only a change in self-capacitance due to a change in a dielectric property of a material of interest. |
Capacitive Sensing Using Differential Value Indication

Cross-Reference to Related Applications
This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application Serial No. 62/704,895, filed June 2, 2020, the disclosure of which is hereby incorporated by reference in its entirety.

Technical Field
One or more examples relate generally to capacitive sensing. One or more examples relate generally to capacitive distance sensing and level sensing. One or more examples generally involve acquiring a differential value that is indicative of a self-capacitance component.

Background
Capacitive sensors are used in various operating environments, such as, but not limited to, capacitive proximity sensing and distance sensing.

Brief Description of the Drawings
To easily identify a discussion of any particular element or act, the most significant digit or digits in a reference designation refer to the figure number that first introduces that element.
FIG. 1A is a block diagram depicting a capacitive sensing system according to one or more examples.
FIG. 1B is a block diagram depicting a capacitive sensing system according to one or more examples.
FIG. 1C is a block diagram depicting a capacitive sensing system according to one or more examples.
FIG. 2 is a block diagram depicting a detector according to one or more examples.
FIG. 3 is a flowchart depicting a process for generating differential values according to one or more examples.
FIG. 4 is a flowchart depicting an acquisition process according to one or more examples.
FIG. 5 is a block diagram depicting a first drive signal applied to an electrode of a first sensor and a second drive signal applied to an electrode of a second sensor, the timing of the second drive signal being 180 degrees out of phase (θ-180) with the timing of the first drive signal, according to one or more examples.
FIG. 6 is a graph depicting example voltage levels of a first voltage and a second voltage during an example performance of a process performed by the disclosed detector to generate a differential value, according to one or more examples.
FIG. 7 is a graph depicting example voltage levels during an example performance of a process performed by a disclosed detector to generate a differential value, according to one or more examples.
FIG. 8 is a block diagram depicting a portion of a detector configured to apply a guard voltage that increases immunity to leakage current at a corresponding electrode of a sensor, according to one or more examples.
FIG. 9 depicts a diagrammatic view of exemplary voltage levels during a first acquisition phase and a second acquisition phase, according to one or more examples.
FIG. 10 is a block diagram depicting a measurement circuit that may be used to implement a first measurement circuit and a second measurement circuit with a single ADC, as a non-limiting example, according to one or more examples.
FIG. 11 is a graph depicting self-capacitance indications and coupling capacitance indications in exemplary operation according to one or more examples.

Detailed Description
In the following Detailed Description, reference is made to the accompanying drawings, which form a part hereof, and in which are shown, by way of illustration, specific examples in which the disclosure may be practiced. These examples are described in sufficient detail to enable those of ordinary skill in the art to practice the disclosure.
However, other examples enabled herein may be utilized, and structural, material, and process changes may be made without departing from the scope of the present disclosure.
The illustrations presented herein are not intended to be actual views of any particular method, system, device, or structure, but are merely idealized representations used to describe examples of the present disclosure. In some cases, similar structures or components in the various drawings may retain the same or similar numbering for the convenience of the reader; however, the similarity in numbering does not necessarily mean that the structures or components are identical in size, composition, configuration, or any other property.
The following description may include examples to assist those of ordinary skill in the art in practicing the disclosed examples. The use of the terms "exemplary," "by example," and "for example" means that the related description is illustrative, and while the scope of the present disclosure is intended to encompass the examples and their legal equivalents, the use of such terms is not intended to limit the scope of an example or of the present disclosure to the specified components, steps, features, or functions.
It should be readily understood that the components of the examples, as generally described herein and illustrated in the drawings, could be arranged and designed in many different configurations. Accordingly, the following description of various examples is not intended to limit the scope of the present disclosure, but merely represents various examples. While various aspects of these examples may be presented in the drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
Furthermore, the specific implementations shown and described are only examples and should not be construed as the only way to implement the present disclosure unless otherwise indicated herein. Components, circuits, and functions may be shown in block diagram form in order not to obscure the present disclosure in unnecessary detail.
Additionally, block definitions and the partitioning of logic between various blocks are examples of a specific implementation. It will be readily apparent to one of ordinary skill in the art that the present disclosure may be practiced with many other partitioning solutions. For the most part, details concerning timing considerations and the like have been omitted where such details are not necessary to obtain a complete understanding of the present disclosure and are within the abilities of persons of ordinary skill in the relevant art.
Those of ordinary skill in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. Some drawings may illustrate signals as a single signal for clarity of presentation and description.
Those of ordinary skill in the art will appreciate that a signal may represent a bus of signals, wherein the bus may have a variety of bit widths, and that the present disclosure may be implemented on any number of data signals, including a single data signal.
The various illustrative logical blocks, modules, and circuits described in connection with the examples disclosed herein may be implemented or performed with a general-purpose processor, a special-purpose processor, a digital signal processor (DSP), an integrated circuit (IC), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein; the term "processor," as used herein, covers all of the foregoing. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A general-purpose computer including a processor is considered a special-purpose computer when the general-purpose computer is configured to execute computing instructions (e.g., but not limited to, software code) related to examples of the present disclosure.
Examples may be described in terms of a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe operational acts as a sequential process, many of these acts can be performed in another sequence, in parallel, or substantially concurrently. In addition, the order of the acts may be rearranged. A process herein may correspond to a method, a thread, a function, a procedure, a subroutine, a subprogram, other structure, or combinations thereof. Furthermore, the methods disclosed herein may be implemented in hardware, software, or both. If implemented in software, the functions may be stored on, or transmitted over as one or more instructions or code on, computer-readable media. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another.
Any reference to an element herein using a designation such as "first," "second," and so forth does not limit the quantity or order of those elements, unless such limitation is explicitly stated. Rather, these designations may be used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to a first element and a second element does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise, a set of elements may comprise one or more elements.
As used herein, the term "substantially" or "about" in reference to a given parameter, property, or condition means and includes, to a degree that one of ordinary skill in the art would understand, that the given parameter, property, or condition is met with a small degree of variance, such as within acceptable manufacturing or operating tolerances.
By way of example, depending on the particular parameter, property, or condition that is substantially met, the parameter, property, or condition may be at least 90% met, at least 95% met, or even at least 99% met.
As used herein, any relative term (such as, but not limited to, "above," "over," "below," "on," "under," "upper," or "lower") does not connote or depend on any particular preference, orientation, or order, except where the context clearly indicates otherwise.
In this description, the term "coupled" and derivatives thereof may be used to indicate that two elements co-operate or interact with each other. When an element is described as being "coupled" to another element, the elements may be in direct physical or electrical contact, or intervening elements or layers may be present. In contrast, when an element is described as being "directly coupled" to another element, there are no intervening elements or layers present. The term "connected" is used in this description interchangeably with the term "coupled," and has the same meaning as "coupled," unless expressly indicated otherwise or unless the context would indicate otherwise to one of ordinary skill in the art. It will be understood that when an element is referred to as being "connected" or "coupled" to a first element and a second element, the element is coupled to the first element and the element is coupled to the second element.
When an element is referred to herein as being "electrically coupled" to another element, one or more of a charge or a signal can be transferred between the element and the other element, directly or via intervening elements if present. It will be understood that when an element is referred to as being "electrically connected" or "electrically coupled" to a first element and a second element, one or more of a charge or a signal can be transferred, directly or via intervening elements if present, between the first element and the second element.
Self-capacitance is the capacitance of an electrode to a virtual ground, and this virtual ground may or may not be known. When measuring the self-capacitance of an electrode, the measurement is usually performed with respect to a signal referenced to local ground, and this signal is used as an indication of the self-capacitance or of the magnitude of a change therein. There may be variations in the coupling between the electrode and the virtual ground, which affect the self-capacitance of the electrode and the indication of self-capacitance, and such variations may or may not be known. As a non-limiting example, if an electrode is coupled to a material of interest, there may be galvanic coupling of the material of interest to the ground potential or local ground of the capacitive sensor due to environmental factors such as, but not limited to, an object touching the material or the device under test in which the material is present, and such coupling may affect the self-capacitance of the electrode and the indication of self-capacitance. Additionally, electrical noise can degrade the accuracy and resolution of the signal and of the values generated to represent it. As a non-limiting example, if a signal or its representative value is used to infer information about a material of interest, the presence of noise effects in the signal/value, as well as variations due to causes other than material variation, can degrade
the degree of correlation between the signal/value and the material of interest, and the inference of information about the material of interest from the signal/value. The existence of such effects is referred to herein as "coupling error."
In applications where self-capacitance indications are used to infer information about a material of interest, such as level sensing (e.g., but not limited to, sensing the level of a liquid, solution, or mixture in a container), coupling errors may have an undesired, even catastrophic, effect on a system or process that utilizes such information. Furthermore, the inventors of the present disclosure realized that it is often difficult to compensate for coupling errors at runtime without worsening the coupling errors. Furthermore, the magnitude of the coupling error depends on many factors, such as the power-supply topology of the capacitive sensor (e.g., but not limited to, isolated or non-isolated, switched, linear, or battery-powered), run-time coupling changes due to variations in other application loads, as well as run-time changes in the coupling of the material to ground (such as, but not limited to, coupling through a human hand), and it is difficult to predict and address all of these in advance.
In coupling-error-sensitive applications, attempts are sometimes made to increase the degree of controlled capacitive coupling of the material to the ground of the capacitive sensor circuit in order to reduce the effect of variations in coupling between the material of interest and ground. The inventors of the present disclosure recognize that fully realizing the benefits of this approach can be challenging in implementation due to mechanical, electrical, or cost constraints. Attempts are sometimes made to couple the material of interest to ground; however, the inventors of the present disclosure realized that this implies other constraints, such as electrical isolation, the introduction of foreign metallic bodies (conductors for ground paths) into potentially corrosive materials, and regulatory constraints such as health requirements in food and beverage applications.
The inventors of the present disclosure realized that, as a non-limiting example, a solution that inherently reduces coupling errors without some or all of the disadvantages discussed above would be desirable.
The inventors of the present disclosure realized that, in the case of changes in the self-capacitance of an electrode, such changes may be due to changes in the material of interest, may be due to changes in the coupling between the material of interest or the circuitry of the capacitive sensor and ground potential, or may include components attributable both to changes in the material of interest and to changes between the material of interest or the circuitry of the capacitive sensor and ground potential. The inventors of the present disclosure realized that it would be desirable to observe or capture the component of the self-capacitance change that is due to changes in the material of interest and that is independent of changes between the material of interest or the circuitry of the capacitive sensor and ground potential.
As used herein, "coupling capacitance" refers to the component of the self-capacitance of an electrode that corresponds to capacitive coupling with a local reference potential of the electrode.
FIG. 1A is a block diagram depicting a capacitive sensing system 100 according to one or more examples.
Capacitive sensing system 100 is generally configured to generate a differential value 106 that is proportional to a property 112 of a material 110 present at a device under test 114 (DUT 114). Property 112 of material 110 may vary, and differential value 106 may reflect the change in property 112. In one or more examples, property 112 of material 110 may be based at least in part on a relationship 116 between a first material 140a and a second material 140b in material 110. The relationship 116 between the first material 140a and the second material 140b in material 110 may vary, and property 112 of material 110 may reflect the variation in relationship 116. In this manner, property 112 is indicative of relationship 116 between first material 140a and second material 140b.
In one or more examples, the value of property 112, relationship 116, or a measure of the first material 140a, the second material 140b, or the DUT 114 proportional thereto may be inferred based at least in part on the corresponding coupling capacitance (CS). Detector 102 is generally configured to generate differential value 106, which is indicative of a corresponding coupling capacitance of electrodes 108, including when coupled to DUT 114. Detector 102 may generate signals at the first internal capacitor and the second internal capacitor 104, respectively, which signals are self-capacitance indications 124 of the electrodes 108 of detector 102. As discussed herein, detector 102 may generate differential value 106 by calculating the difference between self-capacitance indications 124.
Detector 102 may generate the signals and values of self-capacitance indications 124 by performing a particular process that is susceptible to coupling errors. Self-capacitance indication 124 includes a coupling capacitance indication. Self-capacitance indication 124 may additionally include a coupling error indication. As discussed above, the presence of a coupling error indication may affect how well the self-capacitance indication 124 corresponds to the coupling capacitance of electrode 108. By taking the difference between the respective self-capacitance indications 124 of electrodes 108, detector 102 generates a value indicative of the coupling capacitance of electrodes 108, i.e., differential value 106. More specifically, detector 102 is configured to generate self-capacitance indications 124 in such a way that coupling error indications present in the respective self-capacitance indications 124 substantially cancel out when the difference is taken. The coupling capacitance indication present in the respective self-capacitance indications 124 is retained in differential value 106.
FIG. 1B is a block diagram depicting capacitive sensing system 100. Electrodes 108 have coupling capacitances CS1 and CS2, respectively associated with respective ones of electrodes 118a and 118b of electrodes 108, as at least one component of their respective self-capacitances. Coupling capacitances CS1 and CS2 are substantially the same. Differential value 106 is indicative of coupling capacitances CS1 and CS2, and thus of the corresponding coupling capacitance of electrodes 118a and 118b of electrodes 108. Coupling capacitances CS1 and CS2 may also be characterized as components of the self-capacitance of respective ones of electrodes 118a and 118b associated with the coupling between respective ones of electrodes 108 and reference potential 122.
Furthermore, changes in the coupling between the respective electrodes 118a and 118b and reference potential 122 are reflected in changes in coupling capacitances CS1 and CS2 and in the magnitude of differential value 106.
In one or more examples, various inferences may be reliably made based at least in part on differential value 106. Non-limiting examples of such inferences include the corresponding amount of material 140a or 140b, or the fluid level or utilization of DUT 114 (e.g., but not limited to, capacity utilized or capacity remaining).
FIG. 1C is a diagram depicting capacitive sensing system 100. The first sensor 130 and the second sensor 132 are disposed adjacent to a volume 136 (e.g., defined by the DUT 114 of FIG. 1A). The respective lengths of the respective electrodes 118a and 118b of the first sensor 130 and the second sensor 132 extend continuously along a first vertical direction (e.g., but not limited to, along the Y direction as depicted in FIG. 1C). The first vertical direction may extend substantially parallel to a vertically oriented boundary plane of volume 136. Volume 136 represents the volume of the space occupied by material 110 (reference numeral "110" refers to the entire contents of volume 136). Material 110 includes a first material 140a (e.g., but not limited to, a liquid) and a second material 140b (e.g., but not limited to, air), the first material and the second material being different. In various examples, material 110 may have a heterogeneous state or a homogeneous state. As a non-limiting example of a heterogeneous state, non-zero amounts of both the first material 140a and the second material 140b may be present in material 110 (e.g., but not limited to, the DUT 114 has some liquid in it but is not full, and the remainder of the volume holds air). As a non-limiting example of a homogeneous state, only one or the other of the first material 140a or the second material 140b is present (e.g., but not limited to, the DUT 114 is "filled" with the first material 140a or with the second material 140b).
The coupling capacitance of electrodes 118a and 118b may vary proportionally to a change in property 112, which is a dielectric property of material 110. Non-limiting examples of dielectric properties include dielectric geometry (e.g., but not limited to, thickness or width of material 110) and dielectric material properties (i.e., the characteristic of energy absorption from an electric field). The dielectric properties of material 110 may be based at least in part on the respective dielectric properties of first material 140a and second material 140b, and thus at least in part on the ratio of the respective amounts of first material 140a and second material 140b present in volume 136. The ratio of the respective amounts of the first material 140a and the second material 140b may vary. The dielectric properties of material 110 may vary in response to changes in the ratio of the respective amounts of first material 140a and second material 140b present in volume 136. In one or more examples, the respective coupling capacitances of electrodes 118a and 118b may be indicative of relationship 116 (FIG. 1A), where relationship 116 is the ratio of the respective amounts of first material 140a and second material 140b. The ratio of the respective amounts of the first material 140a and the second material 140b may be an indication of the liquid level or vertical height of the first material 140a or the second material 140b.
The vertical height of the surface 120 of the first material 140a may vary (e.g., but not limited to, rise or fall). As a non-limiting example, various amounts of first material 140a may be added to or removed from volume 136, and similar amounts (by volume) of second material 140b may be removed or added. In one or more examples, surface 120 may be substantially flat or block-shaped, and substantially parallel or angled relative to the surface on which volume 136 is disposed.
The values of the coupling capacitances (Cs1, Cs2, where Cs1 = Cs2) of electrodes 118a and 118b may vary proportionally with the vertical height of surface 120 of the first material 140a. Specific non-limiting examples of varying vertical heights 138 of surface 120 (which may also be characterized as a "level" of first material 140a) are depicted, namely a first vertical height 126, a second vertical height 128, and a third vertical height 134. FIG. 1C depicts different values, namely CSvalue1, CSvalue2, and CSvalue3, indicating the coupling capacitances of electrodes 118a and 118b for the first vertical height 126, the second vertical height 128, and the third vertical height 134, respectively.
FIG. 2 is a block diagram depicting a detector 200 according to one or more examples. Detector 200 is generally configured to generate a differential value 220 that is indicative of the magnitude of the coupling capacitances Cs1 and Cs2 of respective electrodes 228 and 230 of a first sensor 208 and a second sensor 210, where Cs1 = Cs2. Detector 200 is a non-limiting example of detector 102 of FIG. 1A.
In one or more examples, first sensor 208 and second sensor 210 are symmetrical. As used herein, when capacitive sensors, such as first sensor 208 and second sensor 210, are described as being "symmetrical," this means that the capacitive sensors respond with substantially the same change to substantially the same change in the coupling capacitance of the corresponding electrodes 228 and 230. As a non-limiting example, symmetrical capacitive sensors exhibit substantially the same degree of capacitive coupling to a material and experience substantially the same change in capacitive coupling in response to a change in a dielectric property of the material, such as a change affecting the material's dielectric constant K. Non-limiting examples of changes in dielectric properties, for example due to pressure or temperature, include changes in dielectric geometry (e.g., but not limited to, changes in thickness or width) or changes in dielectric material properties (i.e., a change in the characteristic of energy absorption from an electric field). In one or more examples, in order to increase the degree of symmetry between the first sensor 208 and the second sensor 210, one or more of the areas A1 and A2 of the respective electrodes 228 and 230 (A1 = A2), the capacitances of the internal capacitors (Ci1 = Ci2), the reference voltage level, and the ground level are substantially the same.
The circuit of the first external capacitor 214 and the second external capacitor 216 depicted in FIG. 2 is an equivalent circuit representing the respective coupling capacitances CS1 and CS2 of electrodes 228 and 230, which serve as respective plates of the first external capacitor 214 and the second external capacitor 216.
It is worth noting that, in one or more examples, the actual values of the self-capacitances and the coupling capacitances CS1 and CS2 may or may not be known, because, as discussed herein, the first external capacitor 214 and the second external capacitor 216 are utilized as indications of the self-capacitance and coupling capacitance of the respective electrodes 228 and 230.
In the case of each of the first external capacitor 214 and the second external capacitor 216, a material or liquid (not depicted) serves as the dielectric of the respective first external capacitor 214 or second external capacitor 216, and the electrodes 228 and 230 are insulated electrodes disposed near a portion of a wall defining a holding area of the DUT (e.g., but not limited to, a basin, chamber, hollow, cavity, or space in which material is or will be located), serving as the respective plates of the first external capacitor 214 and the second external capacitor 216. Electrodes 228 or 230 are not required to be in physical contact with the walls of the DUT, with material present in the DUT, or with another material that is in physical contact with the DUT or the material present therein.
While the examples may appear to refer to rigid-walled or fixed-volume DUTs, this is not required, and rigid-walled, flexible, fixed-volume, and non-fixed-volume DUTs (e.g., but not limited to, bags, or chambers for dispensing liquids via pumps) are specifically contemplated and are within the scope of this disclosure.
In one or more examples, first sensor 208 is configured to generate a first voltage V1 indicative of a self-capacitance of electrode 228, and second sensor 210 is configured to generate a second voltage V2 indicative of a second self-capacitance of electrode 230. In one or more examples, voltages V1 and V2 are generated by performing different but complementary acquisition processes via the first acquisition circuit 222 of first sensor 208 and the second acquisition circuit 224 of second sensor 210, respectively. The first acquisition circuit 222 and the second acquisition circuit 224 each include a number of switches that are controlled to perform a first acquisition process or a second acquisition process, as the case may be, as discussed below with reference to FIG. 4.
Responsive, at least in part, to the first acquisition circuit 222 performing a first acquisition process and the second acquisition circuit 224 performing a second acquisition process, voltages V1 and V2 are generated at the first internal capacitor 212 of the first acquisition circuit 222 and the second internal capacitor 218 of the second acquisition circuit 224, respectively, exhibiting voltage levels indicative of the self-capacitances represented by the first external capacitor 214 and the second external capacitor 216, respectively. These self-capacitances may, but do not necessarily, include coupling errors.
In one or more examples, an increase or decrease in the voltage level of the first voltage V1 should reflect a corresponding proportional increase or decrease in the first self-capacitance indication, while a decrease or increase in the voltage level of the second voltage V2 should reflect a corresponding proportional increase or decrease in the second self-capacitance indication.
The first ADC 202 and the second ADC 204 are arranged to generate a first value R1 and a second value R2 corresponding to the respective voltage levels of voltages V1 and V2. Thus, the first value R1 and the second value R2 are indicative of self-capacitance, just as the first voltage V1 and the second voltage V2 are indicative of self-capacitance. In one or more examples, first internal capacitor 212 and second internal capacitor 218 may be arranged and sized, respectively, such that the respective voltage levels of voltages V1 and V2 are within the operating ranges of first ADC 202 and second ADC 204. The processor 206 is generally configured to calculate the difference between the first value and the second value (i.e., R1 - R2) and to generate differential value 220, which reflects the difference between the first value R1 and the second value R2 and is indicative of the coupling capacitances CS1 and CS2.
In a given execution of a measurement, either of the first sensor 208 and the second sensor 210 may operate according to a first acquisition process (e.g., but not limited to, first acquisition process 424 of FIG. 4) or a second acquisition process (e.g., but not limited to, second acquisition process 426 of FIG. 4). In one or more examples, the first sensor 208 and the second sensor 210 are operable using their respective switches S1, S2, S3, and S4. As used herein, when a switch or a connection via a switch is described as being "on," it means that current flows through the switch or connection, and when a switch is described as being "off," it means that current does not flow through the switch or connection.
FIG. 3 is a flowchart depicting a process 300 for generating a differential value that is indicative of coupling capacitance, according to one or more examples.
At operation 302, process 300 obtains a differential value indicative of a difference in the self-capacitance indications of the electrodes. The self-capacitance indications may be exhibited at the first internal capacitor and the second internal capacitor. In one or more examples, the self-capacitance indications can optionally be obtained by performing a first acquisition process and a second acquisition process, the first acquisition process and the second acquisition process being different.
At operation 304, process 300 optionally infers a vertical height of a surface of material present at the device under test coupled to the electrodes, at least in part in response to the differential value. In one or more examples, process 300 optionally infers a relationship between the first material and the second material present at the device under test, and optionally infers, at least in part in response to the differential value, the vertical height of the surface of one of the first material and the second material.
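By way of a non-limiting illustration of operation 304, the sketch below shows one way a host processor might map a differential value to an estimated surface height. The linear calibration between an "empty" reading and a "full" reading, the function name, and all numeric values are assumptions added here for illustration only; they are not prescribed by this disclosure.

```python
# Illustrative sketch only: one possible realization of operation 304, assuming a
# roughly linear relationship between the differential value and the fill level.
# Calibration points and names are hypothetical, not taken from the disclosure.

def infer_vertical_height(differential_value: float,
                          diff_at_empty: float,
                          diff_at_full: float,
                          height_at_full_mm: float) -> float:
    """Map a differential value to an estimated surface height by linear
    interpolation between two previously recorded calibration readings."""
    span = diff_at_full - diff_at_empty
    if span == 0:
        raise ValueError("calibration points must differ")
    fraction = (differential_value - diff_at_empty) / span
    fraction = min(max(fraction, 0.0), 1.0)  # clamp to the calibrated range
    return fraction * height_at_full_mm


# Example usage with made-up calibration numbers (ADC-derived differential values):
level_mm = infer_vertical_height(differential_value=512.0,
                                 diff_at_empty=100.0,
                                 diff_at_full=900.0,
                                 height_at_full_mm=120.0)
print(f"estimated level: {level_mm:.1f} mm")
```

In practice the calibration could of course be non-linear or table-based; the linear form is used here only to make the inference step concrete.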
FIG. 4 is a flowchart depicting a differential acquisition process 400 for acquiring differential values indicative of coupling capacitance, according to one or more examples. The differential acquisition process 400 includes a first acquisition process 424 and a second acquisition process 426. First acquisition process 424 and second acquisition process 426 are non-limiting examples of the first acquisition process and the second acquisition process of process 300.
In one or more examples, the detector 200 (or, more generally, the differential acquisition process 400) can be configured to operate the first sensor 208 and the second sensor 210 according to either of the first acquisition process 424 and the second acquisition process 426 for a given performance of the differential acquisition process 400. In the particular non-limiting example depicted in FIG. 4, first sensor 208 performs the first acquisition process 424 and second sensor 210 performs the second acquisition process 426.
In one or more examples, first acquisition process 424 and second acquisition process 426 are performed substantially simultaneously. The following operations are performed substantially simultaneously: operation 404/operation 412, operation 406/operation 414, operation 408/operation 416, and operation 410/operation 418. As discussed below, this ensures that the sensors performing the first acquisition process 424 and the second acquisition process 426 are affected in the same, but opposite, manner.
At operation 402, the differential acquisition process 400 initiates a measurement. In one or more examples, differential acquisition process 400 initializes a first sensor (e.g., but not limited to, first sensor 208) to perform measurements according to the first acquisition process 424 and initializes a second sensor (e.g., but not limited to, second sensor 210) to perform measurements according to the second acquisition process 426.
At operation 404, the first acquisition process 424 charges the first internal capacitor to the reference voltage Vref. In the case of the first sensor 208, the connections via switches S1, S3, and S4 are off, the connection between Vref and the first internal capacitor 212 via switch S2 is on, and the connections between the first internal capacitor 212 and both the first external capacitor 214 and ground via switch S2 are off.
At operation 406, the first acquisition process 424 discharges the first external capacitor (i.e., discharges the self-capacitance of the electrode represented by the first external capacitor). In the case of the first sensor 208, the connections via switches S1 and S3 are off, the connection between the first external capacitor 214 and ground via switch S4 is on, the connection between Vref and the first internal capacitor 212 via switch S2 is on, and the connections between the first internal capacitor 212 and both the first external capacitor 214 and ground via switch S2 are off. In the case of the first sensor 208, operation 404 and operation 406 may be performed concurrently.
At operation 408, the first acquisition process 424 couples the first internal capacitor and the first external capacitor. In the case of the first sensor 208, the connections via switches S1, S3, and S4 are off, the connection between the first internal capacitor 212 and the first external capacitor 214 via switch S2 is on, and the connections between the first internal capacitor 212 and both Vref and ground via switch S2 are off.
This arrangement enables charge sharing from the first internal capacitor 212 to the first external capacitor 214.
Notably, at least in part in response to performing operations 404, 406, and 408, the voltage level of the first voltage V1 is a function of the reference voltage Vref, the first coupling capacitance Cs1 of the first external capacitor 214, and the first internal capacitance Ci1 of the first internal capacitor 212.
At operation 410, the first acquisition process 424 measures the first voltage V1 present on the first internal capacitor to obtain a first value. The first value represents the voltage level of the first voltage V1. In the case of the first sensor 208, the connection between the first ADC 202 and the first internal capacitor 212 via switch S1 is on, the connection between the first internal capacitor 212 and the first external capacitor 214 via switch S2 is on, the connections between the first internal capacitor 212 and both Vref and ground via switch S2 are off, and the respective connections of switches S3 and S4 are off. The first voltage V1 is measured via the first ADC 202 to obtain a first value R1.
Turning to the second acquisition process 426, at operation 412, the second acquisition process 426 discharges the second internal capacitor. In the case of the second sensor 210, the connections via switches S1, S3, and S4 are off, the connection between ground and the second internal capacitor 218 via switch S2 is on, and the connections between the second internal capacitor 218 and both the second external capacitor 216 and Vref via switch S2 are off.
At operation 414, the second acquisition process 426 charges the second external capacitor to the reference voltage Vref (i.e., charges the self-capacitance of the electrode represented by the second external capacitor to the reference voltage Vref). In the case of the second sensor 210, the connections via switches S1 and S4 are off, the connection between the second external capacitor 216 and Vref via switch S3 is on, the connection between ground and the second internal capacitor 218 via switch S2 is on, and the connections between the second internal capacitor 218 and both the second external capacitor 216 and Vref via switch S2 are off.
At operation 416, the second acquisition process 426 couples the second internal capacitor and the second external capacitor. In the case of the second sensor 210, the connections via switches S1, S3, and S4 are off, the connection between the second internal capacitor 218 and the second external capacitor 216 via switch S2 is on, and the connections between the second internal capacitor 218 and both Vref and ground via switch S2 are off. This arrangement enables charge sharing from the second external capacitor 216 to the second internal capacitor 218.
Notably, at least in part in response to performing operations 412, 414, and 416, the voltage level of the second voltage V2 is a function of the reference voltage Vref, the second coupling capacitance CS2 of the second external capacitor 216, and the second internal capacitance Ci2 of the second internal capacitor 218.
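For illustration only, the sketch below writes out one idealized form these dependencies can take, assuming lossless charge sharing between the internal and external capacitors with no parasitics or leakage. The closed-form expressions, the function names, and the numeric component values are assumptions added here; the disclosure states only that V1 and V2 are functions of Vref, the coupling capacitances, and the internal capacitances.

```python
# Illustrative sketch only: idealized charge-sharing outcome suggested by
# operations 404-416 under the stated assumptions (lossless, no parasitics).

def v1_after_sharing(vref, ci1, cs1):
    # Ci1 starts at Vref, Cs1 starts discharged; total charge Vref*Ci1 redistributes.
    return vref * ci1 / (ci1 + cs1)

def v2_after_sharing(vref, ci2, cs2):
    # Ci2 starts discharged, Cs2 starts at Vref; total charge Vref*Cs2 redistributes.
    return vref * cs2 / (ci2 + cs2)

# Example: a larger coupling capacitance pulls V1 down and pushes V2 up, giving the
# opposite-polarity dependence described in the text (hypothetical values in farads).
for cs in (5e-12, 10e-12, 20e-12):
    print(cs, v1_after_sharing(3.3, 100e-12, cs), v2_after_sharing(3.3, 100e-12, cs))
```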
At operation 418, the second acquisition process 426 measures the second voltage V2 present on the second internal capacitor to obtain a second value. The second value represents the voltage level of the second voltage V2. In the case of the second sensor 210, the connection between the second ADC 204 and the second internal capacitor 218 via switch S1 is on, the connection between the second internal capacitor 218 and the second external capacitor 216 via switch S2 is on, the connections between the second internal capacitor 218 and both Vref and ground via switch S2 are off, and the respective connections of switches S3 and S4 are off. The second voltage V2 is measured via the second ADC 204 to obtain a second value R2.
At optional operation 420, the differential acquisition process 400 may optionally perform another execution of the second acquisition process 426 with the first sensor 208 and another execution of the first acquisition process 424 with the second sensor 210. The values obtained by the first acquisition process 424 and the second acquisition process 426 performed by the first sensor 208 may be combined to obtain the first value R1, and the values obtained by the first acquisition process 424 and the second acquisition process 426 performed by the second sensor 210 may be combined to obtain the second value R2. FIG. 6 (described further below) depicts voltages generated during the differential acquisition process 400 without optional operation 420, and FIG. 7 (described further below) depicts voltages generated during the differential acquisition process 400 with optional operation 420.
At operation 422, the differential acquisition process 400 calculates the difference between the first value R1 and the second value R2 to obtain a differential value indicative of coupling capacitance. The obtained difference is indicative of the difference between the first self-capacitance indication and the second self-capacitance indication, and of the coupling capacitances CS1 and CS2; the difference in self-capacitance indications may in turn be indicative of, as a non-limiting example, a distance from the material.
It is worth noting that, because the first acquisition process 424 and the second acquisition process 426 described above are performed substantially simultaneously, they are subject to the same noise and coupling errors. Because the first and second sensors (e.g., but not limited to, first sensor 208 and second sensor 210) are symmetrical, they are affected to the same extent by the same noise and environmental coupling errors, i.e., coupling errors of essentially the same magnitude are present. When the first acquisition process 424 and the second acquisition process 426 are performed, the net changes in voltage and the directions of charge flow experienced at the first and second sensors are of opposite polarity, so the signals are affected by the noise and environmental coupling errors in opposite ways, i.e., the respective coupling errors present have opposite signs. Complementary (e.g., substantially the same magnitude but substantially opposite sign) coupling errors are present in voltages V1 and V2 and are captured by values R1 and R2. When R2 is subtracted from R1, the coupling errors cancel out, yielding a value in which the coupling error is non-existent or insignificant, while the component indicative of the coupling capacitance (e.g., proportional to distance or level) is retained.
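As a purely numeric illustration of the cancellation just described, the toy model below treats the disturbance as a term that reaches both simultaneously sampled, symmetric channels equally, while the wanted dependence on the coupling capacitance enters the two channels with opposite polarity. The additive error model, the idealized voltage expressions, and all numbers are assumptions added for illustration and are not taken from the disclosure.

```python
# Illustrative sketch only: a simplified view of why the subtraction at operation 422
# suppresses a disturbance shared by both channels while keeping the Cs dependence.

VREF, CI = 3.3, 100e-12  # hypothetical reference voltage and internal capacitance

def readings(cs, shared_error):
    """Idealized simultaneous readings of the two symmetric sensors (Cs1 = Cs2 = cs)."""
    v1 = VREF * CI / (CI + cs)   # first acquisition process: V1 falls as Cs grows
    v2 = VREF * cs / (CI + cs)   # second acquisition process: V2 rises as Cs grows
    return v1 + shared_error, v2 + shared_error

for cs in (5e-12, 10e-12, 20e-12):          # hypothetical coupling capacitances
    r1, r2 = readings(cs, shared_error=0.05)
    clean1, clean2 = readings(cs, shared_error=0.0)
    print(cs, r1 - r2, clean1 - clean2)     # identical: the shared disturbance cancels
```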
FIG. 5 is a block diagram depicting a first drive signal 502 applied to the electrode 228 of the first sensor 208 and a second drive signal 504 applied to the electrode 230 of the second sensor 210, wherein the timing of the second drive signal 504 is 180 degrees out of phase (θ-180) with the timing of the first drive signal 502. First drive signal 502 is a non-limiting example of a drive signal applied to electrode 228 during performance of the first acquisition process 424, and second drive signal 504 is a non-limiting example of a drive signal applied to electrode 230 during performance of the second acquisition process 426. In one or more examples, the first drive signal 502 and the second drive signal 504 are applied synchronously.
In optional operation 420, where the first sensor 208 performs the second acquisition process 426 and the second sensor 210 performs the first acquisition process 424, the phases of the corresponding drive signals used to generate these measurement signals may remain the same or may vary, as long as the corresponding timings are kept 180 degrees out of phase.
FIG. 6 is a graph depicting exemplary voltage levels of the first voltage V1 and the second voltage V2, in a first graph 602 and a second graph 604, respectively, during an exemplary execution of the differential acquisition process 400 performed by the detector 200.
At time t0, the first internal capacitor 212 of the first sensor 208 is charged to Vdd (reference voltage Vref), as shown by the dashed line of the first graph 602, and the second internal capacitor 218 of the second sensor 210 is discharged to Vss (ground or system ground), as shown by the dashed line of the second graph 604. During the duration from t0 to t1, the first external capacitor 214 is discharged to Vss, as shown by the solid line of the first graph 602, and the second external capacitor 216 is charged to Vdd, as shown by the solid line of the second graph 604.
At time t1, the first internal capacitor 212 and the first external capacitor 214 are coupled, and from t1 to time t2 there is charge sharing from the first internal capacitor 212 to the first external capacitor 214. This charge sharing reduces the first voltage V1 to a level indicative of the first coupling capacitance CS1 at time t2, as shown by curve 606, where the solid line of the first graph 602 shows the voltage across the first external capacitor 214. Also at time t1, the second internal capacitor 218 and the second external capacitor 216 are coupled, and from t1 to time t2 there is charge sharing from the second external capacitor 216 to the second internal capacitor 218. This charge sharing increases the second voltage V2 to a level indicative of the second coupling capacitance CS2 at time t2, as shown by curve 608, where the solid line of the second graph 604 shows the voltage across the second external capacitor 216.
In the case of the first sensor 208, curve 606 thus represents the outflow of charge from the first internal capacitor 212 to the first external capacitor 214, and there is a net negative change in the voltage level of the first voltage V1 from time t1 to time t2. In the case of the second sensor 210, curve 608 represents the inflow of charge from the second external capacitor 216 to the second internal capacitor 218, and there is a net positive change in the voltage level of the second voltage V2 from time t1 to time t2.
FIG. 7 is a graph depicting exemplary voltage levels during an exemplary performance of the differential acquisition process 400 (including optional operation 420) performed by the detector 200. FIG. 7 depicts the voltage levels at the first sensor 208 and the second sensor 210 during two acquisition phases (acquisition phase 1 and acquisition phase 2).
During acquisition phase 1, the first sensor 208 performs the first acquisition process 424, as depicted in the upper graph, and the second sensor 210 performs the second acquisition process 426, as depicted in the lower graph. During acquisition phase 2, the first sensor 208 performs the second acquisition process 426, as depicted in the upper graph, and the second sensor 210 performs the first acquisition process 424, as depicted in the lower graph.
A first curve 702a and a first curve 702b represent voltage levels exhibited by voltage V1 (on the internal capacitor) during acquisition phase 1 and acquisition phase 2, respectively. A second curve 704a and a second curve 704b represent voltage levels exhibited by the voltage on the external capacitor during acquisition phase 1 and acquisition phase 2, respectively. A third curve 706a and a third curve 706b represent voltage levels exhibited by voltage V2 during acquisition phase 1 and acquisition phase 2, respectively. A fourth curve 708a and a fourth curve 708b represent voltage levels exhibited by the external capacitor during acquisition phase 1 and acquisition phase 2, respectively.
At time t21 during acquisition phase 1, a first value VA1(t21) depicted in FIG. 7 is obtained by the first sensor 208, and a second value VB2(t21) depicted in FIG. 7 is obtained by the second sensor 210. At time t22 during acquisition phase 2, a third value VB1(t22) depicted in FIG. 7 is obtained by the first sensor 208, and a fourth value VA2(t22) depicted in FIG. 7 is obtained by the second sensor 210.
The first value VA1(t21) is obtained at time t21, and from time t11 to time t21 the first voltage V1 exhibits a generally decreasing voltage level from VDD toward VSS. The third value VB1(t22) is obtained at time t22, and from time t12 to time t22 the first voltage V1 exhibits a generally increasing voltage level from VSS toward VDD. When VA1(t21) and VB1(t22) are combined, at least some noise effects on the first sensor 208 are canceled out.
Similarly, the second value VB2(t21) is obtained at time t21, and from time t11 to time t21 the second voltage V2 exhibits a generally increasing voltage level from VSS toward VDD. The fourth value VA2(t22) is obtained at time t22, and from time t12 to time t22 the second voltage V2 exhibits a generally decreasing voltage level from VDD toward VSS. When VB2(t21) and VA2(t22) are combined, at least some noise effects on the second sensor 210 are canceled out.
A differential value obtained at least in part in response to the two-phase measurements performed by each sensor may be expressed as: differential value = VA1(t21) - VB2(t21) + VA2(t22) - VB1(t22). Additionally or alternatively, in one or more examples, a differential value may be calculated for each individual phase measurement and the per-phase values then used to calculate the two-phase differential value, expressed as: value obtained from the first phase = VA1(t21) - VB2(t21), value obtained from the second phase = VA2(t22) - VB1(t22), and differential value = VA1(t21) - VB2(t21) + VA2(t22) - VB1(t22).
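As a non-limiting illustration, the combination just described can be written out as the short sketch below; the helper name and the example readings are placeholders added here, not values from the disclosure.

```python
# Illustrative sketch only: the two-phase combination of the readings VA1(t21),
# VB2(t21), VA2(t22), and VB1(t22) described above.

def two_phase_differential(va1_t21, vb2_t21, va2_t22, vb1_t22):
    """Combine the per-phase differences into the two-phase differential value."""
    phase1 = va1_t21 - vb2_t21   # value obtained from the first acquisition phase
    phase2 = va2_t22 - vb1_t22   # value obtained from the second acquisition phase
    return phase1 + phase2

# Example usage with made-up readings (volts):
print(two_phase_differential(2.10, 1.25, 2.08, 1.27))
```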
FIG. 8 is a block diagram depicting a detector portion 800 configured to apply guard voltages (i.e., of a driven shield or driven guard) that increase immunity to leakage current at corresponding electrodes of a first sensor and a second sensor (e.g., but not limited to, electrode 228 of the first sensor 208 and electrode 230 of the second sensor 210).
A first guard line 814 is disposed on a first support structure portion 806 (e.g., but not limited to, a portion of a printed circuit board (PCB)) on a path around an electrode 810 disposed on the first support structure portion 806. Similarly, a second guard line 824 is disposed on a second support structure portion 808 (which may be the same support structure as, or a different support structure than, the first support structure portion 806), on a path around the electrode 812. A first driven guard electrode 820 coupled to the first guard line 814 is provided to receive a first guard voltage 816 generated by a first guard circuit 802, and a second driven guard electrode 822 coupled to the second guard line 824 is provided to receive a second guard voltage 818 generated by a second guard circuit 804.
In one or more examples, the respective waveforms of the first guard voltage 816 and the second guard voltage 818 may track the waveforms of the voltages at the first external capacitor 214 and the second external capacitor 216. However, a 1-to-1 correspondence of the waveforms is not required, as long as, over the duration of interest, the total net change in voltage level exhibited by the first guard voltage 816 and the second guard voltage 818 is the same as the total net change in voltage level exhibited by the voltages across the first external capacitor 214 and the second external capacitor 216. In one or more examples, the first guard circuit 802 and the second guard circuit 804 may be configured with information about the net voltage change and timing to generate the first guard voltage 816 and the second guard voltage 818, respectively, which exhibit voltage levels suitable for guarding.
FIG. 9 depicts a diagrammatic view of exemplary voltage levels during the first and second acquisition phases of the exemplary differential acquisition process 400, including optional operation 420, together with the guard voltages of FIG. 8.
A first diagrammatic view 910 depicts the voltage levels exhibited by the electrode 810 and the first guard voltage 816 during the first and second acquisition phases of the exemplary differential acquisition process 400. A second diagrammatic view 912 depicts the voltage levels exhibited by the second guard voltage 818 and the electrode 812 during the first and second acquisition phases of the exemplary differential acquisition process 400.
The first acquisition phase is from time t01 to t21, and the second acquisition phase is from time t02 to t22. The labels depicted in FIGS. 6 and 7 for marking the times t11 and t12, at which charge sharing between the internal and external capacitors begins, are not depicted in FIG. 9 to avoid unnecessarily obscuring the drawings and their description.
In the first diagrammatic view 910, a first curve 902a represents the voltage level exhibited by the first guard voltage 816 during the first acquisition phase, and a first curve 902b represents the voltage level exhibited by the first guard voltage 816 during the second acquisition phase.
FIG. 9 depicts a diagrammatic view of exemplary voltage levels during the first and second acquisition phases of the exemplary differential acquisition process 400, including optional operation 420, and of the guard voltages of FIG. 8. A first diagrammatic view 910 depicts the voltage levels exhibited at the electrode 810 and by the first guard voltage 816 during the first and second acquisition phases of the exemplary differential acquisition process 400. A second diagrammatic view 912 depicts the voltage levels exhibited by the second guard voltage 818 and at the electrode 812 during the first and second acquisition phases of the exemplary differential acquisition process 400. The first acquisition phase is from time t01 to time t21, and the second acquisition phase is from time t02 to time t22. The labels depicted in FIGS. 6 and 7 for marking the times t11 and t12, at which charge sharing between the internal and external capacitors begins, are not depicted in FIG. 9 to avoid unnecessarily obscuring the drawings and their description. In the first diagrammatic view 910, the first curve 902a represents the voltage level exhibited by the first guard voltage 816 during the first acquisition phase, and the first curve 902b represents the voltage level exhibited by the first guard voltage 816 during the second acquisition phase. The second curve 904a represents the voltage level exhibited by the electrode 810 during the first acquisition phase, and the second curve 904b represents the voltage level exhibited by the electrode 810 during the second acquisition phase. In the second diagrammatic view 912, the third curve 906a represents the voltage level exhibited by the second guard voltage 818 during the first acquisition phase, and the third curve 906b represents the voltage level exhibited by the second guard voltage 818 during the second acquisition phase. The fourth curve 908a represents the voltage level exhibited by the electrode 812 during the first acquisition phase, and the fourth curve 908b represents the voltage level exhibited by the electrode 812 during the second acquisition phase. From time t01 to time t21, the net change in voltage level exhibited by the electrode 810 and by the first guard voltage 816, depicted by the second curve 904a and the first curve 902a, and the polarity of that change are the same, i.e., a net change from VSS to VDD. From time t02 to immediately after time t22, the net change in voltage level exhibited by the electrode 810 and by the first guard voltage 816, depicted by the second curve 904b and the first curve 902b, and the polarity of that change are the same, i.e., a net change from VDD to VSS. From time t01 to time t21, the net change in voltage level exhibited by the electrode 812 and by the second guard voltage 818, depicted by the fourth curve 908a and the third curve 906a, and the polarity of that change are the same, i.e., a net change from VDD to VSS. From time t02 to immediately after time t22, the net change in voltage level exhibited by the electrode 812 and by the second guard voltage 818, depicted by the fourth curve 908b and the third curve 906b, and the polarity of that change are the same, i.e., a net change from VSS to VDD. Notably, immediately after time t22, the second curve 904b drops to VSS and the fourth curve 908b rises to VDD. This may reflect the discharge of the first external capacitor or the charging of the second external capacitor, optionally as part of a newly performed operation 412 of the second acquisition process 426 or operation 404 of the first acquisition process 424, respectively. This may optionally reflect the driving of the electrode 810 to VSS or of the electrode 812 to VDD, so that the net change in voltage level is the same as that of the guard voltages 816 and 818. The presence of the first guard voltage 816 and the second guard voltage 818, exhibiting the same net change in voltage level, and the same polarity of such change, as the electrodes 810 and 812, renders the electrode 810 and the electrode 812 substantially immune to leakage currents between them. FIG. 10 is a block diagram depicting a measurement circuit 1000 that may be used to implement the first measurement circuit 226 and the second measurement circuit with a single ADC 1004, as a non-limiting example. The measurement circuit 1000 includes a first sample-and-hold circuit 1006 and a second sample-and-hold circuit 1008, which are respectively configured to sample the voltage of a continuously varying analog signal and to maintain its value at a constant level for a period of time. In one or more examples, respective inputs of the first sample-and-hold circuit 1006 and the second sample-and-hold circuit 1008 may be coupled, via a first terminal 1010 and a second terminal 1012, to sense lines of different sensors, such as the first sensing line 236 of the first sensor 208 and the second sensing line 238 of the second sensor 210, without limitation.
In one or more examples, the first sample-and-hold circuit 1006 and the second sample-and-hold circuit 1008 are configured to perform respective voltage sampling in response to a timing signal 1016. In one or more examples, the first sample-and-hold circuit 1006 and the second sample-and-hold circuit 1008 utilize the same timing signal 1016 (or two well-synchronized timing signals), such that the first sample-and-hold circuit and the second sample-and-hold circuit can be controlled to perform their respective voltage samplings substantially simultaneously. The measurement circuit 1000 includes a multiplexer 1014 arranged to selectively provide one of the outputs of the first sample-and-hold circuit 1006 and the second sample-and-hold circuit 1008 to an input of the ADC 1004. The ADC 1004 is configured to digitize the corresponding voltage levels provided by the multiplexer 1014 and store these values in a buffer 1002. The digitized values are stored in the buffer 1002 for retrieval and combination by a processor (e.g., the processor 206, without limitation) to obtain a differential value indicative of coupling capacitance. Although the ADC 1004 measures the voltages provided by the respective inputs of the multiplexer 1014 at different times, those voltages exhibit the voltage levels of voltages that were sampled substantially simultaneously, and thus include complementary coupling errors, as discussed herein.
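The time-multiplexed measurement described above with reference to FIG. 10 can be illustrated with the following Python sketch (illustrative only; the sample-and-hold, multiplexer, and ADC here are simple software stand-ins rather than the disclosed circuit, and the voltages and resolution are assumed): both channels are captured on the same timing edge and then digitized one after the other through a single converter.

```python
# Illustrative sketch: two sample-and-hold stages capture their inputs at the
# same instant; a multiplexer then feeds them sequentially to a single ADC.

def sample_both(v_sense_1, v_sense_2):
    """Capture both sense-line voltages on the same timing signal edge."""
    return {"held_1": v_sense_1, "held_2": v_sense_2}

def adc_convert(voltage, vref=3.3, bits=12):
    """Idealized ADC model: convert a held voltage to an output code."""
    code = int(round((voltage / vref) * (2 ** bits - 1)))
    return max(0, min(code, 2 ** bits - 1))

held = sample_both(v_sense_1=1.21, v_sense_2=0.84)   # simultaneous sampling
codes = [adc_convert(held["held_1"]),                 # converted sequentially,
         adc_convert(held["held_2"])]                 # via a shared converter
differential_code = codes[0] - codes[1]
print(codes, differential_code)
```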
FIG. 11 is a diagram including a first diagrammatic view 1102 and a second diagrammatic view 1104 depicting, during several events, value indications of vertical height obtained utilizing a self-capacitance indication (e.g., the self-capacitance indication 124 of FIG. 1A, without limitation) and utilizing a differential value (e.g., the differential value 106 of FIG. 1A or the differential value 220 of FIG. 2, without limitation), respectively. By comparing the corresponding values depicted in views 1102 and 1104 during the events (one finger, two fingers, or a palm touching the DUT wall; a finger touching water present at the DUT; and a ground wire inserted into the water), it can be seen that in the first diagrammatic view 1102 the value indications of vertical height exhibit considerable changes in response to the events, including a large change, compared to the value indicated with no event, when the ground wire is inserted into the water. In the second diagrammatic view 1104, the value indications of vertical height exhibit no substantial change for most events, and when the ground wire is inserted into the water, only a modest change is observed compared to the first diagrammatic view 1102. Thus, the first diagrammatic view 1102 depicts self-capacitance indications affected, respectively, by one finger, two fingers, or a palm touching the DUT wall, by a finger touching the water present at the DUT, and by the ground wire being inserted into the water, which results in coupling errors in the values. The presence of coupling errors leads to large fluctuations in the values. As can be seen in the second diagrammatic view 1104, utilizing the differential value reduces the impact of such coupling errors on the self-capacitance indication. Any characterization in this description of something as "typical," "conventional," "known," or the like does not necessarily mean that the thing is disclosed in the prior art or is known in the art with respect to the discussion. Nor does such characterization necessarily mean that it is well known, well understood, or commonly used in the relevant art. It means only that it is known or understood by the inventors of the present disclosure. As used in this disclosure, the term "combination" referring to a plurality of elements may include a combination of all the elements or any of various sub-combinations of some of the elements. For example, the phrase "A, B, C, D, or combinations thereof" may refer to any one of A, B, C, or D; the combination of each of A, B, C, and D; and any sub-combination of A, B, C, or D, such as A, B, and C; A, B, and D; A, C, and D; B, C, and D; A and B; A and C; A and D; B and C; B and D; or C and D. Terms used in this disclosure, and particularly in the appended claims (e.g., bodies of the appended claims), are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including, but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes, but is not limited to," etc.). As used herein, the term "each" means some or all, and the term "each and every" means all. Additionally, if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite article "a" or "an" limits any particular claim containing such introduced claim recitation to examples containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should be interpreted to mean "at least one" or "one or more," without limitation); the same holds true for the use of definite articles used to introduce claim recitations. Additionally, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, means at least two recitations, or two or more recitations, without limitation). Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." or "one or more of A, B, and C, etc." is used, such a construction is generally intended to include A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B, and C together, without limitation. Furthermore, any disjunctive word or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms.
For example, the phrase "A or B" should be understood to include the possibilities of "A" or "B" or "A and B." Non-limiting examples of the present disclosure include: Embodiment 1: An apparatus comprising: a detector, wherein the signal that the detector is configured to sense is a differential value indicative of a difference in self-capacitance indications exhibited at a first internal capacitor and a second internal capacitor, wherein the differential value is proportional to a vertical height of a surface of a material present at a device under test coupled to electrodes of the detector. Embodiment 2: The apparatus of Embodiment 1, comprising: a first sensor configured to generate a first voltage indicative of a first self-capacitance of a corresponding electrode; and a second sensor configured to generate a second voltage indicative of a second self-capacitance of a corresponding electrode. Embodiment 3: The apparatus of any one of Embodiments 1 and 2, wherein the first sensor is configured to change the voltage level of the first voltage at least partially in response to a change in the first self-capacitance, and the second sensor is configured to change the voltage level of the second voltage at least partially in response to a change in the second self-capacitance. Embodiment 4: The apparatus of any one of Embodiments 1 to 3, wherein the first sensor and the second sensor are configured to provide substantially symmetric responses to capacitive coupling with a material present at the device under test. Embodiment 5: The apparatus of any one of Embodiments 1 to 4, wherein the first sensor and the second sensor are configured to provide a substantially symmetric response to changes in coupling between the material present at the device under test and the first sensor and the second sensor, respectively. Embodiment 6: The apparatus of any one of Embodiments 1 to 5, wherein the substantially symmetric responses that the first sensor and the second sensor are configured to provide include: a response to a change in coupling between the material and the first sensor and the second sensor, the change in coupling at least partially responsive to a change in a dielectric property of the material. Embodiment 7: The apparatus of any one of Embodiments 1 to 6, wherein the first sensor comprises: a first acquisition circuit configured to generate the first voltage at the first internal capacitor; and a first measurement circuit configured to generate a first value indicative of a voltage level exhibited by the first voltage, and wherein the second sensor comprises: a second acquisition circuit configured to generate the second voltage at the second internal capacitor; and a second measurement circuit configured to generate a second value indicative of a voltage level exhibited by the second voltage. Embodiment 8: The apparatus of any one of Embodiments 1 to 7, wherein the first measurement circuit comprises a first analog-to-digital converter arranged to measure the first voltage generated at the first internal capacitor; and the second measurement circuit comprises a second analog-to-digital converter arranged to measure the second voltage generated at the second internal capacitor. Embodiment 9: The apparatus of any one of Embodiments 1 to 8, comprising: a sample-and-hold circuit; an analog-to-digital converter; and a processor configured to control the
sample-and-hold circuit and the analog-to-digital converter to alternately measure the first voltage generated at the first internal capacitor and the second voltage generated at the second internal capacitor. Embodiment 10: The apparatus of any one of Embodiments 1 to 9, comprising: a first wire arranged around a corresponding electrode of the first sensor; a second wire arranged around a corresponding electrode of the second sensor; a first driven guard electrode electrically coupled to the first wire and configured to receive a first guard voltage; and a second driven guard electrode electrically coupled to the second wire and configured to receive a second guard voltage. Embodiment 11: A method comprising: obtaining a differential value indicative of a difference in self-capacitance indications of electrodes exhibited at a first internal capacitor and a second internal capacitor; and at least partially in response to the differential value, inferring a vertical height of a surface of a material present at a device under test coupled to the electrodes. Embodiment 12: The method of Embodiment 11, comprising: performing a first acquisition process to obtain a first self-capacitance indication of a first one of the electrodes; and performing a second acquisition process to obtain a second self-capacitance indication of a second one of the electrodes. Embodiment 13: The method of any one of Embodiments 11 and 12, wherein performing the first acquisition process comprises: charging a first internal capacitor to a reference voltage; discharging a first external capacitor, the first external capacitor associated with the first one of the electrodes; coupling the first internal capacitor and the first external capacitor; and measuring a voltage exhibited by the first internal capacitor. Embodiment 14: The method of any one of Embodiments 11 to 13, wherein performing the second acquisition process comprises: discharging a second internal capacitor; charging a second external capacitor to a reference voltage, the second external capacitor associated with the second one of the electrodes; coupling the second internal capacitor and the second external capacitor; and measuring a voltage exhibited by the second internal capacitor. Embodiment 15: The method of any one of Embodiments 11 to 14, wherein performing the first acquisition process and the second acquisition process comprises: performing the first acquisition process with a first sensor and performing the second acquisition process with a second sensor; and performing the second acquisition process with the first sensor and performing the first acquisition process with the second sensor. Embodiment 16: The method of any one of Embodiments 11 to 15, wherein the first acquisition process and the second acquisition process are performed substantially simultaneously. Embodiment 17: The method of any one of Embodiments 11 to 16, wherein the first self-capacitance indication and the second self-capacitance indication are generated substantially simultaneously. Embodiment 18: An apparatus comprising: a detector, wherein the signal that the detector is configured to sense is a differential value indicative of a coupling capacitance exhibited by two corresponding electrodes, wherein the indication of the coupling capacitance
is proportional to a relationship between a first material and a second material present at a device under test coupled to the two corresponding electrodes of the detector. Embodiment 19: An apparatus comprising: a detector, wherein the signal that the detector is configured to sense is a change in self-capacitance of an electrode that includes only changes in self-capacitance caused by changes in capacitive coupling between the electrode and a material of interest. Embodiment 20: An apparatus comprising: a detector, wherein the signal that the detector is configured to sense is a change in self-capacitance of an electrode that includes only changes in self-capacitance caused by changes in a dielectric property of a material of interest. The features of the various examples described herein are not mutually exclusive and may exist in various combinations and permutations, even if such combinations or permutations are not explicitly described herein, without departing from the scope of the present disclosure. Indeed, those of ordinary skill in the art will recognize variations, modifications, and other implementations of what is described herein without departing from the scope of the present disclosure. Accordingly, the invention should not be limited only by the foregoing exemplary description, but only by the appended claims and their legal equivalents. |
Systems and methods are provided for an automatic exposure detection feature for adding to an x-ray panel readout device, including an imaging panel having a low power mode, a sense circuit for receiving an input signal in the low power mode and restricting an input signal voltage to a first voltage, and a sensor for sensing a change in the input signal voltage, wherein the change in input signal voltage indicates exposure to an x-ray signal. |
CLAIMS:1. A method for automatic exposure detection in imaging applications comprises:entering a low power state at an imaging panel;restricting an input signal voltage to a first voltage at a sense circuit in the imaging panel;receiving an input signal at the sense circuit in the imaging panel;sensing a change in the input signal voltage, wherein the change in the input signal voltage indicates exposure to an x-ray signal; andexiting the low power state based on the change in the input signal voltage.2. The method of claim 1, wherein restricting the input signal voltage comprises clamping the input signal voltage at a diode.3. The method of claim 1, wherein clamping the input signal voltage comprises clamping the input signal voltage at a charge amplifier.4. The method of claim 1, wherein the imaging panel includes a readout integrated circuit (ROIC) having a signal chain, and wherein entering the low power state includes reusing at least a part of the signal chain for the sense circuit.5. The method of claim 4, wherein the ROIC includes an integrator, and wherein entering the low power state comprises powering down the integrator.6. The method of claim 4, wherein reusing at least a part of the signal chain includes reusing a clamp element of the signal chain, and wherein entering the low power state includes powering down other elements of the signal chain.7. The method of claim 1, wherein sensing a change in the input signal voltage includes sensing a change using a converter.8. A system for automatic exposure detection in imaging applications comprises:an imaging panel including a low power mode;a sense circuit for restricting an input signal voltage to a first voltage and receiving an input signal in the low power mode; anda sensor for sensing a change in the input signal voltage, wherein the change in input signal voltage indicates exposure to an x-ray signal.9. The system of claim 8, wherein the sense circuit includes a diode for clamping the input signal voltage.10. The system of claim 8, wherein the sense circuit includes a charge amplifier for clamping the input signal voltage.11. The system of claim 8, further comprising a readout integrated circuit (ROIC) having a signal path, wherein in the low power mode, the sense circuit uses at least a part of the signal path.12. The system of claim 11, wherein the ROIC includes an integrator, and wherein the integrator is powered down in the low power mode.13. The system of claim 11, wherein the part of the signal path the sense circuit uses includes a clamp.14. The system of claim 8, wherein the sensor is a converter.15. The system of claim 8, wherein the sense circuit is an electrostatic discharge circuit.16. A system for automatic exposure detection in imaging applications comprises:an imaging panel including a low power mode; a sense circuit for restricting an input signal voltage to a first voltage and for receiving an input signal in the low power mode; andmeans for sensing a change in the input signal voltage, wherein the change in input signal voltage indicates exposure to an x-ray signal.17. The system of claim 16, wherein the means for sensing includes a converter.18. The system of claim 16, wherein the sense circuit is an electrostatic discharge circuit.19. The system of claim 16, wherein the sense circuit includes a diode for clamping the input signal voltage.20. The system of claim 16, wherein the sense circuit includes a charge amplifier for clamping the input signal voltage. |
Automatic Exposure Detection Circuit for Imaging ApplicationsCROSS REFERENCE TO RELATED APPLICATIONS[0001] This Application claims priority to U.S. Patent Application Serial No. 62/512,796 filed May 31, 2017 and U.S. Patent Application Serial No. 15/993,987 filed May 31, 2018, both Applications are considered incorporated by reference into the disclosure of this Application.TECHNICAL FIELD OF THE DISCLOSURE[0002] The present invention relates to the field of imaging and automatic exposure detection.BACKGROUND[0003] Current x-ray systems generally use digital radiography in which digital x-ray sensors are used instead of traditional photographic film. Some advantages of digital imaging include efficiency as well as the digital storage of images and digital image enhancement techniques. Other advantages include the immediacy of image availability, the elimination of image processing steps, and a wider dynamic range. Additionally, digital x-ray systems generally use less radiation than conventional photographic film x-ray systems. Instead of film, digital imaging uses a digital image capture device.[0004] Digital radiography systems generally include an imaging panel for detecting an image, such as a flat panel detector. There are many different types of detectors, including indirect flat panel detectors, direct flat panel detectors, CMOS detectors, charge coupled device (CCD) detectors, and phosphor plate radiography detectors.[0005] X-ray systems often include a circuit for sensing the arrival of an x-ray signal at a wireless digital x-ray panel. To optimize battery life, some systems create a separate sensing path, independent of the panel readout path, but this requires additional area and components while also increasing the weight of the panel. Other systems use the main readout electronics, which substantially increases the power usage of the system. SUMMARY OF THE DISCLOSURE[0006] Systems and methods for sensing the arrival of an x-ray signal in a wireless digital x-ray panel are disclosed. In particular, according to some implementations, a method for automatic exposure detection in imaging applications includes entering a low power state at an imaging panel, restricting an input signal voltage to a first voltage at a sense circuit in the imaging panel, receiving an input signal at the sense circuit in the imaging panel, sensing a change in the input signal voltage, wherein the change in the input signal voltage indicates exposure to an x-ray signal, and exiting the low power state based on the change in the input signal voltage.[0007] In some examples, restricting the input signal voltage comprises clamping the input signal voltage. In some examples, restricting the input signal voltage comprises clamping the input signal voltage at a diode. In some examples, clamping the input signal voltage comprises clamping the input signal voltage at a charge amplifier. In one example, the charge amplifier is optimized for x-ray sensing.[0008] In some implementations, a change in the input signal voltage is sensed using a converter. In some implementations, the imaging panel includes a readout integrated circuit (ROIC) having a signal chain, and entering the low power state includes reusing at least a part of the signal chain for the sense circuit. In some examples, the ROIC includes an integrator, and entering the low power state comprises powering down the integrator. 
In some implementations, reusing at least a part of the signal chain includes reusing a clamp element of the signal chain, and entering the low power state includes powering down other elements of the signal chain. In some implementations, sensing a change in the input signal voltage includes sensing a change using a converter.[0009] According to some implementations, a system for automatic exposure detection in imaging applications includes an imaging panel including a low power mode, a sense circuit for receiving an input signal in the low power mode and restricting an input signal voltage to a first voltage, and a converter for sensing a change in the input signal voltage, wherein the change in input signal voltage indicates exposure to an x-ray signal. In some examples, the sense circuit includes a diode for clamping the input signal voltage. In some examples, the sense circuit includes a charge amplifier for clamping the input signal voltage. In some examples, the sensor is a converter. In some implementations, the sense circuit is an electrostatic discharge circuit.[0010] In some implementations, the system for automatic exposure detection further includes a readout integrated circuit (ROIC) having a signal path, and in the low power mode, the sense circuit uses at least a part of the signal path. In some examples, the part of the signal path the sense circuit uses includes a clamp. In some implementations, the ROIC includes an integrator, and the integrator is powered down in the low power mode.[0011] According to some implementations, a system for automatic exposure detection in imaging applications comprises an imaging panel including a low power mode, a sense circuit for restricting an input signal voltage to a first voltage and for receiving an input signal in the low power mode, and means for sensing a change in the input signal voltage. The change in input signal voltage indicates exposure to an x-ray signal.[0012] In some implementations, the means for sensing includes a converter. In some implementations, the sense circuit is an electrostatic discharge circuit. In some implementations, the sense circuit includes a diode for clamping the input signal voltage. 
In some examples, the sense circuit includes a charge amplifier for clamping the input signal voltage.BRIEF DESCRIPTION OF THE DRAWINGS[0013] To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:[0014] FIGURE 1 is a diagram illustrating the x-ray sensing signal path, according to some embodiments of the disclosure;[0015] FIGURE 2 is a graph illustrating a read out integrated circuit output switching from imaging mode to x-ray sensing mode, according to some embodiments of the disclosure;[0016] FIGURE 3 is a graph illustrating total x-ray sensing power versus line time, according to some embodiments of the disclosure;[0017] FIGURE 4 is a graph illustrating an x-ray signal path; [0018] FIGURE 5 is a block diagram illustrating an x-ray sensing signal path, according to some embodiments of the disclosure;[0019] FIGURE 6 is a detailed diagram illustrating an x-ray sensing signal path, according to some embodiments of the disclosure;[0020] FIGURE 7 is an enlarged view of a sense circuit, according to some embodiments of the disclosure.[0021] FIGURE 8 is a graph illustrating results of a simulation of an x-ray sensing signal path, according to some embodiments of the disclosure; and[0022] FIGURE 9 is a flow chart illustrating a method for automatic exposure detection in imaging applications, according to some embodiments of the disclosure.DESCRIPTION OF EXAMPLE EMBODIMENTS OF THE DISCLOSURE[0023] X-ray systems include a circuit for sensing the arrival of an x-ray signal at a wireless digital x-ray panel. However, current methods for sensing x-ray signal arrival use additional area and components, or substantially increase the power usage of the system. For example, some systems create a separate independent channel optimized for x-ray detection, for the sensing path, which senses x-rays while the entire readout signal path is powered down. This type of system is power efficient but requires additional components in the design which increases the cost and weight of the panel. Other systems reuse the main readout signal path, either in a low power state or using a subset of channels while powering down others. This simplifies board design and does not increase weight, but significantly increases the power usage of the system, directly affecting battery life. Systems and methods are disclosed for sensing the arrival of an x-ray signal in a wireless digital x-ray panel without adding cost, area, weight, or increasing power usage.[0024] Imaging readout integrated circuits (ROIC's) are used to convert the charge from an imaging sensor into a high precision digital value. In some examples, the charge is a current. The performance of the conversion from analog to digital determines the quality of the image. Quality can be improved by optimizing the noise, linearity, speed, and power used for the conversion function. In some implementations, the signal chain for the ROIC includes a low noise charge amplifier circuit, a correlated double sampler, and a high-resolution ADC. In many x-ray applications, the ROIC panel is wireless and independent of the x-ray source. 
In order to save battery life, the ROIC panel stays in a lower power state until an x-ray is sensed.[0025] In some systems, for a low power x-ray sensing mode, readout integrated circuits (ROIC's) are software reconfigured to a different low power state. In the low-power state, the ROIC does not operate at full performance, but some functionality is still maintained in order to detect the arrival of an x-ray signal. In some systems, for a low power mode, many ROIC's are powered down to conserve energy, while others remain powered on to detect x-ray signals. Powering down some ROIC's while leaving other ROIC's on reduces overall power consumption. However, the problem with these systems is that the system design is constrained by the features of the ROIC and the time required to stabilize the performance of the system on wake-up. Additionally, the ROIC's that remain on still consume a lot of power, and thus the system consumes more power than necessary for x-ray detection.[0026] Systems and methods are provided for an automatic exposure detection feature for adding to an x-ray panel readout device. Readout IC's (ROIC's) are used to convert the charge on pixels within an x-ray panel to digital values in order to create a visible image. The accuracy and precision of these circuits correlates with the quality of the image. Many applications use an Automatic Exposure Detection (AED) circuit which can remain in a low power state until the x-ray signal is detected. When the x-ray signal is detected, the AED circuit powers up the imaging panel and captures the image. After imaging is completed, the AED circuit returns to a low power state.[0027] Systems and methods are provided for re-using one or more of the high performance ROIC components in an x-ray sensing function. The x-ray sensing function is optimized for x-ray detection and uses about 95-98% less power than the ROIC function. The systems and methods for re-using the high performance ROIC components have no significant impact on the circuit design or on circuit area. Additionally, the systems and methods for reusing the high performance ROIC components do not affect performance during the main imaging function. As described in greater detail below, in some implementations, the clamp element of a ROIC can be used for x-ray sensing while other elements of the ROIC are powered down. In one example, the clamp element is a diode. [0028] According to various implementations, the high performance imaging signal chain is re-used for a much lower power, lower resolution signal chain which is re-optimized for x-ray detection. Re-using the imaging signal chain for a lower power, lower resolution signal chain allows for a low power x-ray sensing mode without impacting the chip area or performance during normal (imaging) operation.[0029] According to one implementation, the high performance blocks of the ROIC signal chain are replaced with lower performance blocks optimized for x-ray detection. According to other implementations, selected elements of the ROIC signal chain are used for x-ray detection while the remaining elements enter a low-power state or are powered down.[0030] An x-ray sensing signal path is designed to be added to an x-ray panel, allowing the panel to easily enter into and exit from a low power standby mode. In the low-power standby mode, the x-ray signal can still be detected.
When an x-ray signal is detected in low-power stand-by mode, the system switches to imaging mode and the high-performance ROIC imaging system is powered on. The x-ray panel including the x-ray sensing signal path includes circuits and techniques which can dramatically lower the power of the x-ray sensing function by about 95-98% as compared to previous approaches.[0031] FIGURE 1 is a diagram illustrating an x-ray sensing signal path 100, which can be used to detect x-ray signals in a low power mode, according to some embodiments of the disclosure. The x-ray sensing signal path 100 includes input lines 102, sensors 104, a multiplexor 106, and an analog-to-digital converter 108. The input lines 102 are analog input lines. According to one implementation, x-ray signals directed to an x-ray panel are received at the input lines 102. According to various examples, the x-ray sensing signal path 100 includes multiple sensors 104, and may include a sensor 104 for each input line 102. The output from the sensors 104 is input to the multiplexor 106, and the output from the multiplexor 106 is input to the analog-to-digital converter 108. In one example, 256 analog inputs are received at the input lines 102 and input to 256 sensors 104, and the output from the 256 sensors 104 is input to the multiplexor 106. In some examples, the x-ray sensing signal path 100 is driven to a voltage controlled by the VT input, a selected input voltage. The x-ray sensing signal path 100 power scales with line time. [0032] The readout requirements for x-ray detection can be very different from imaging since readout time, noise, and dynamic range are less important, while power savings and recovery time are more important. The x-ray sensing signal path 100 takes advantage of these differences in order to optimize performance in both readout and detection modes, while not sacrificing the performance requirements of the readout process. According to various implementations, ROIC design changes are made to optimize Automatic Exposure Detection (AED) performance. In one example, an ROIC design change for low power x-ray detection mode includes fully powering down the charge integrators, thereby greatly reducing system power. In one example, an ROIC design change for low power x-ray detection mode includes power-scaling the ADC and amplifiers to dynamically reduce the power dissipation. In another example, an ROIC design change for low power x-ray detection mode includes minimizing the power used for system reference biasing, thus reducing turn-on time once the x-ray is detected. A digital output interface can be used for maximum power savings. In one example, the digital output interface is a CMOS Input/Output (I/O).[0033] In x-ray sensing mode, the x-ray sensing signal path 100 uses a very low power sense circuit which does not bias the panel to a thin film transistor reference voltage (REF_TFT). In contrast, in imaging mode, the panel is biased to REF_TFT. Instead, in x-ray sensing mode, the panel is driven to a voltage controlled by the VT input (or an optional VDD/2 voltage). The VT input voltage clamps the input to VT+/-VBE, where VBE is a base-emitter voltage. As a result, in x-ray sensing mode, the final panel voltage is a function of the leakage current on each channel. In x-ray sensing mode, the input bias current is on the order of several hundred picoamperes (pA). Thus, with no additional input load, the input bias current causes the input to be driven to VT-VBE. Since the input leakage can vary greatly, each channel in x-ray sensing mode can settle to a different output code. In one example, REF_TFT is one volt.
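As a rough numerical illustration of the clamping behavior described in paragraph [0033], the following Python sketch shows the window the clamp permits and the quiescent level the input settles toward under a small pull-down leakage (the specific VT and VBE values are assumed for illustration, chosen to match the VT=1.5v, VT-0.5v example discussed below; they are not a specification from the disclosure):

```python
# Illustrative sketch (assumed numbers): in x-ray sensing mode the input is not
# biased to REF_TFT; the clamp holds it within VT +/- VBE, and with only a small
# input bias/leakage current and no additional load the channel settles toward
# approximately VT - VBE.

VT = 1.5   # example clamp bias voltage, in volts (assumed)
VBE = 0.5  # example base-emitter drop of the clamp device, in volts (assumed)

def clamp_window(vt, vbe):
    """Range the clamp permits at the input node."""
    return (vt - vbe, vt + vbe)

def settled_level(vt, vbe):
    """Quiescent level the input drifts toward under a small pull-down leakage."""
    return vt - vbe

low, high = clamp_window(VT, VBE)
print(low, high)               # clamp window: 1.0 V to 2.0 V
print(settled_level(VT, VBE))  # ~1.0 V with no additional input load
```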
[0034] FIGURE 2 is a graph 200 illustrating various channels of the ROIC output switching from imaging mode to x-ray sensing mode at different input leakages over time (over a number of views shown on the x-axis), according to some embodiments of the disclosure. In particular, in x-ray sensing mode, each channel settles to a different output code depending on the input leakage. As shown in FIGURE 2, with no additional input load, the input bias current causes the input to be driven to VT-VBE, and when the ROIC output is switched from imaging mode to x-ray sensing mode, the channel with no input load settles at code 10000. Similarly, at a load of 20 pF (picofarads), when the ROIC output is switched from imaging mode to x-ray sensing mode, the channel with a 20 pF input load settles around code 9700. At a load of 68 pF, when the ROIC output is switched from imaging mode to x-ray sensing mode, the channel with a 68 pF load settles around code 9600. At a load of 150 pF, when the ROIC output is switched from imaging mode to x-ray sensing mode, the channel with a 150 pF load settles around code 9400. In selecting the voltage for VT, the voltage is within the input range of the ADC [0.5v to 4.5v]. Additionally, the voltage VT is equal to or less than the absolute maximum values specified in the datasheet.[0035] According to some implementations, there are two steps to enter the x-ray sensing AED mode. First, the desired mode is selected in the configuration register. This is programmed once on power-up since it is a configuration. The second step to enter x-ray sensing mode is a small digital pattern which is used to begin and end the x-ray sensing mode. Once the proper digital pattern is provided to enter x-ray sensing, the part reconfigures itself and operates with standard or modified system timing. In some examples, the part that reconfigures itself is the ROIC. In some implementations, the selected readout timing is reused for x-ray sensing, but at a much lower frequency.[0036] Due to the wide variety of panel characteristics and the sensitivity of x-ray sensing, the implementation of the system can be tuned for each application. As discussed above, FIGURE 2 shows the offset of the x-ray panel entering x-ray sensing mode. In this example, REF_TFT is 1 V, which, in imaging mode, results in an output offset of ~8000 lsbs for normal readout conditions. At view 100 (on the x-axis), the device is switched from imaging mode into x-ray sensing mode and, due to internal leakages (~100 pA), the inputs begin to decay towards VT-0.5v. According to one example, VT=1.5v. As shown in FIGURE 2, switching into x-ray sensing mode stabilizes after 10-20 views, depending on the input loading. In one example, the x-ray is strong enough to bring the signal above 0.5V and into the range of the ADC.[0037] The various channels of the x-ray panel can be monitored for the x-ray detection. In some examples, as few as one channel is monitored within one ROIC, and in other examples, monitoring occurs across all channels within the panel. In one implementation, one x-ray panel is used for x-ray sensing, and the remaining devices are fully powered down for further power savings.
Using fewer than 256 channels within one x-ray panel will not yield significantly increased power savings for the x-ray panel, but using fewer than 256 channels within one x-ray panel saves computational power in the system processing. In one implementation, the results of all of the channels being monitored are averaged to reduce noise.[0038] In some implementations, x-ray sensing systems use a voltage VT=2.5v. In one example, the x-ray source is emulated using a current pulse, and the current pulses are stepped from near 0, increasing in ~200nA steps in one pixel. In one example, to maximize sensitivity to the x-ray pulse, the gate drivers remain on during the entire x-ray sensing operation. Binning many pixels together can improve sensitivity. According to various implementations, different pixels can be selected for binning together for optimal detection. Any number of pixels can be binned; for example, 10's of pixels can be binned, 100's of pixels can be binned, or 1000's of pixels can be binned. In some examples, columns of pixels are binned. In other examples, rows of pixels can be binned. Sensitivity increases with more pixels binned, at a slight settling-time penalty when switching modes due to the increased input load capacitance (see FIGURE 2). Increased noise due to the input loading is not significant for x-ray sensing mode due to the fairly low resolution requirement for x-ray detection. The inputs do not need to settle completely when entering x-ray sensing mode; as long as the x-ray signal is larger than the noise and settling error, the x-ray signal is easily detected.[0039] According to some implementations, switching modes from x-ray sensing mode to imaging mode results in selected output characteristics. Depending on application specific details, there can be additional thermal settling when switching to imaging mode due to the increase in power dissipation and self-heating of the x-ray panel. Thus, when switching from x-ray sensing mode to imaging mode, it may take a few views before the output is normalized.[0040] One advantage of the x-ray sensing systems and methods discussed herein is the power savings. The system architecture is designed to minimize power dissipation while maintaining features such as proper panel bias, resolution, and fast wake-up time during AED. In some implementations, the system uses an LVDS (Low Voltage Differential Signaling) interface. In other implementations, the system uses a CMOS I/O interface. In addition to the static power, there is a dynamic power dissipation which is a function of line time, and mostly independent of the type of I/O used. The slower the line time, the less total power dissipation. In some examples, the line rate in x-ray sensing mode is much slower than typical readout rates, and is around 1 kHz or less.[0041] FIGURE 3 is a graph 300 illustrating total x-ray sensing power versus line time, according to some embodiments of the disclosure. In particular, the graph 300 of FIGURE 3 shows the dynamic effect of the total power dissipation.
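The binning and averaging discussed in paragraphs [0037] and [0038] can be illustrated with the short Python sketch below (illustrative only; the noise level, signal size, and pixel counts are assumed): averaging many binned pixels raises a small x-ray step above the per-pixel noise floor, at the cost of a larger effective input load.

```python
# Illustrative sketch (assumed numbers): averaging the monitored channels, or
# binning many pixels together, improves sensitivity to a small x-ray step
# because uncorrelated per-pixel noise averages down roughly as 1/sqrt(N).

import random

def binned_average(n_pixels, signal_per_pixel, noise_rms):
    """Average of n_pixels readings, each = signal + independent Gaussian noise."""
    readings = [signal_per_pixel + random.gauss(0.0, noise_rms) for _ in range(n_pixels)]
    return sum(readings) / n_pixels

random.seed(0)
SIGNAL = 0.002   # hypothetical per-pixel x-ray step, in volts
NOISE = 0.010    # hypothetical per-pixel rms noise, in volts

for n in (1, 10, 100, 1000):
    estimate = binned_average(n, SIGNAL, NOISE)
    print(n, round(estimate, 4))  # estimate approaches the true step as N grows
```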
[0042] FIGURE 4 is a diagram illustrating an x-ray signal chain 400, including a first circuit module 402, a second circuit module 404, control logic 406, a low-voltage differential signaling module (LVDS) 408, random access memory (RAM) 410, read only memory (ROM) 412, registers 414, and a sequencer 416. The first circuit module 402 includes a clamp 420, an integrator 422, comparators 424a, 424b, and CDS elements 426a, 426b. The first circuit module 402 receives an input signal at the clamp 420. In some examples, the clamp 420 is a clamp diode. The input signal is an analog input signal received at the clamp. In some examples, the input signal is an x-ray signal. As shown in FIGURE 4, the input signal is AN0. The clamp 420 outputs a signal to the integrator 422. In some examples, the integrator 422 also receives a reference input REF_TFT. The integrator 422 integrates the input signal(s) over time to yield an output signal. The output from the integrator 422 is input to the comparators 424a, 424b. The output from the integrator 422 is also input to the CDS elements 426a, 426b. In some examples, the integrator 422 is an amplifier/integrator. In one implementation, the CDS elements 426a, 426b reduce the noise of the signal. The second circuit module 404 includes multiplexors 430a, 430b, sample-and-hold amplifiers 432a, 432b, and an ADC 434.[0043] According to one implementation, an x-ray signal chain 400 includes multiple first circuit modules 402 and multiple second circuit modules 404. In some implementations, multiple first circuit modules 402 are input to one second circuit module 404. In particular, outputs from multiple first circuit modules 402 are input into the multiplexors 430a, 430b in the second circuit module 404. In one example, an x-ray signal chain 400 includes 256 first circuit modules 402 and eight second circuit modules 404, and 32 first circuit modules 402 are input to each second circuit module 404 at the multiplexors 430a, 430b.[0044] According to various examples, and as measured in various x-ray panels, the integrator consumes a large amount of the power in conventional x-ray panels. In conventional systems having a low power x-ray sensing mode, the integrator typically remains powered on in low power x-ray sensing mode to keep the voltage on the panel constant. The panel voltage is used for sensing x-rays in the x-ray systems. According to some embodiments of the disclosure, in low power x-ray sensing mode, the integrator is powered down.[0045] In particular, the x-ray panel with the x-ray sensing system described herein powers down the low-noise high-accuracy integrator 422. In some implementations, the x-ray sensing system functions without an integrator. In other implementations, the x-ray sensing system includes a low-power integrator for use in x-ray sensing mode.[0046] In some implementations, in x-ray sensing mode, the clamp 420, which protects the input against electrostatic discharge (ESD) events, is used to drive the input. The input voltage is maintained at a fairly constant level, and a converter is used to sense a change in the voltage indicating the presence of an x-ray. This enables the rest of the chip, including the integrator 422, the comparators 424a, 424b, the CDS elements 426a, 426b, the multiplexors 430a, 430b, the sample-and-hold amplifiers 432a, 432b, and the ADC 434, to enter a low-power state, increasing efficiency of the x-ray panel.
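Paragraph [0046] describes holding the input at a roughly constant clamped level and sensing the voltage change produced when an x-ray arrives. The Python sketch below gives a rough feel for the size of that change (illustrative only; the clamp level, diode drop, line capacitance, and deposited charge are all assumed values, not parameters from the disclosure):

```python
# Illustrative sketch (assumed numbers): charge deposited on the clamped line
# produces a voltage change delta_v = q / c_line, which is what the converter
# observes relative to the quiescent clamped level.

V_CLAMP = 2.5     # example clamp bias, in volts (assumed)
VBE = 0.6         # example diode drop, in volts (assumed)
C_LINE = 100e-12  # example total line capacitance, in farads (assumed)

def voltage_change(charge_coulombs, c_line=C_LINE):
    """Voltage excursion on the clamped line for a given deposited charge."""
    return charge_coulombs / c_line

def within_clamp_window(v_quiescent, delta_v, v_clamp=V_CLAMP, vbe=VBE):
    """True if the excursion stays inside the clamp window (not yet limited)."""
    v_new = v_quiescent + delta_v
    return (v_clamp - vbe) <= v_new <= (v_clamp + vbe)

dv = voltage_change(20e-12)          # e.g., 20 pC of deposited charge
print(dv)                            # 0.2 V excursion for the assumed values
print(within_clamp_window(2.0, dv))  # True: observable before the clamp limits it
```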
[0047] FIGURE 5 is a block diagram illustrating an x-ray sensing signal path 500, according to some embodiments of the disclosure. The x-ray sensing signal path includes a panel 502, a sense circuit 504, an integrator 506, an LPF (low pass filter) 508, a CDS (correlated double sampler) 510, and a sample and hold amplifier (SHA) 512. In some implementations, the panel 502 is an x-ray panel and has about 1,000 pixels enabled per line. Input received at the panel 502 is output to the sense circuit 504. In some examples, the sense circuit is an ESD (electrostatic discharge) circuit. The output from the sense circuit 504 is input to the integrator 506. In some implementations, the integrator 506 includes a reset switch. Output from the integrator 506 is input to the LPF 508. The low pass filtered signal output from the LPF 508 is input to the CDS 510. The CDS 510 reduces noise in the signal and outputs an output signal to the SHA 512. In some implementations, in low power (x-ray sensing) mode, the panel 502 and sense circuit 504 are powered on, and the integrator 506, LPF 508, CDS 510, and SHA 512 are powered off.[0048] FIGURE 6 is a detailed diagram illustrating the circuit elements in the x-ray sensing signal path 500, according to some embodiments of the disclosure. The system illustrated in FIGURE 6 simplifies the Automatic Exposure Detection (AED) function. In particular, no additional components beyond what is used in the imaging mode are added for AED, or x-ray sensing, mode. Not adding additional components saves space and lowers system costs. The system illustrated in FIGURE 6 includes an x-ray sensing panel 602. In some examples, the x-ray sensing panel 602 has about 1,000 pixels enabled per line. The system illustrated in FIGURE 6 further includes an ESD circuit 604, an integrator and reset switch 606, a low pass filter 608, a CDS 610, and a sample and hold circuit 612. In some examples, the x-ray sensing panel 602 includes multiple gate drivers enabled in parallel. Using multiple gate drivers generates a larger signal than using a single gate driver. Additionally, enabling the whole x-ray sensing panel 602 averages the signal across the x-ray sensing panel. In one example, the x-ray sensing panel 602 operates near 2.5 Volts in x-ray sensing mode. The system shown in FIGURE 6 lowers ROIC power in x-ray sensing mode by about 98% as compared to previous systems, by using the ESD circuit 604 for x-ray sensing while powering down the higher power components, including the integrator and reset switch 606, the low pass filter 608, the CDS 610, and the sample and hold circuit 612.[0049] FIGURE 7 is an enlarged view of the ESD circuit 604, according to some embodiments of the disclosure. The ESD circuit 604 receives first 702 and second 704 inputs from the x-ray sensing panel 602 shown in FIGURE 6. Additionally, the ESD circuit 604 has a first VDD input 706, a test input 708, a ground 710, and a second VDD input 714. The first VDD input 706, the test input 708, the ground 710, and the second VDD input 714 are input to a summer 712. The ESD circuit 604 also includes a capacitor 718 connected to the first 702 and second 704 inputs from the panel 602. The ESD circuit 604 also includes a resistor 716, connected to the first 702 and second 704 inputs and the capacitor 718. The ESD circuit 604 includes first 720, second 722, third 724, and fourth 726 PN diodes connected in series. The first 720, second 722, third 724, and fourth 726 diodes are also connected to the inputs as shown in FIGURE 7. The capacitor 718 and the resistor 716 are connected between the first 720 and second 722 diodes. The output from the summer 712 is connected between the third 724 and fourth 726 diodes. The ESD circuit 604 includes input node 730 and output node 732.[0050] Since the integrators 606 (shown in FIGURE 6) are powered down in x-ray sensing mode, the drive for the panel 602 comes from the ESD circuit 604.
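Paragraphs [0047] and [0048] identify which blocks stay powered in the low-power x-ray sensing mode (the panel and the sense/ESD circuit) and which are powered down (the integrator, low pass filter, CDS, and sample-and-hold). The small Python sketch below captures that configuration as a mode table (illustrative only; the block names mirror the figure labels, and the table itself is an assumed representation, not a register map from the disclosure):

```python
# Illustrative sketch: per-mode power configuration of the signal path blocks
# described for FIGURES 5 and 6 (True = powered on, False = powered down).

POWER_CONFIG = {
    "imaging": {
        "panel": True, "sense_circuit": True, "integrator": True,
        "low_pass_filter": True, "cds": True, "sample_and_hold": True,
    },
    "x_ray_sensing": {
        "panel": True, "sense_circuit": True, "integrator": False,
        "low_pass_filter": False, "cds": False, "sample_and_hold": False,
    },
}

def apply_mode(mode):
    """Return the list of blocks that would be enabled in the given mode."""
    config = POWER_CONFIG[mode]
    return [block for block, enabled in config.items() if enabled]

print(apply_mode("x_ray_sensing"))  # only the panel and sense circuit stay on
print(apply_mode("imaging"))        # full signal chain powered for readout
```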
[0051] The ESD circuit 604 is used for sensing x-ray signals. The test input is used to bias the first 720, second 722, third 724, and fourth 726 diodes to 2.5V. Biasing the first 720, second 722, third 724, and fourth 726 diodes to 2.5V clamps the line voltage to 2.5V +/-VBE, depending on leakages. In some implementations, the 2.5V is generated with an internal resistor divider. In other implementations, the VT can be used for the input. In various implementations, the voltage 2.5V +/-VBE is 100-200 mV inside the ADC span. When a charge is put in the ESD circuit, the voltage across the diode changes. A converter senses the change in the voltage.[0052] According to various examples, the systems and methods for x-ray sensing are highly configurable and can be optimized for various applications. In some implementations, the x-ray sensing system can be used for full panel sensing. In other implementations, the x-ray sensing system is applied to a limited area of an x-ray panel, with other portions of the x-ray panel powered down, resulting in even greater power savings. The systems and methods described herein provide fast sensing at any x-ray dose level, providing maximum time to minimize startup settling and image artifacts.[0053] According to various implementations, the panel shown in FIGURES 5 and 6 includes multiple gate drivers enabled in parallel. Enabling the entire panel results in averaging of the signal across the panel. In some implementations, multiple gate drivers may be needed to generate a large enough signal for detection. In some examples, the number of gate drivers needed for signal detection depends on the strength of the x-ray. In one implementation, the panel operates near 2.5V for x-ray sensing mode.[0054] According to various implementations, the integrators as shown in FIGURES 5 and 6 are powered down during the x-ray sensing mode, and the integrator reset switch is closed. Thus, the input directly drives the CDS capacitor. In some implementations, one CDS capacitor is used to sample the line voltage. In various examples, the full CDS differentiation is not used. According to some implementations, a single ended ADC conversion is forced with logic changes.[0055] According to various implementations, the SHA and ADC operate typically, and are configured for the power scaling mode. The reference buffers for the ADC and REF_DAC remain powered-on in a keep-alive state to minimize power, yet keep the heavily filtered nodes properly biased. In some implementations, a low power ADC reference buffer is used during conversions.[0056] In some implementations, a CMOS I/O is used in the x-ray sensing system for optimal power savings. In one example, one x-ray panel is used for the AED and the other x-ray panels are powered down.[0057] FIGURE 8 is a graph illustrating results of a simulation of an x-ray sensing signal path, according to some embodiments of the disclosure. As shown in the top graph, the input will drift toward 1 Vbe as a function of leakage and charge injection from the CDS capacitor. As shown in the bottom graph, line voltage may be unstable over time. An algorithm can be used to monitor the output data and determine a valid x-ray signal.
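Paragraph [0057] notes that an algorithm can monitor the output data to determine a valid x-ray signal. One possible form of such an algorithm is sketched in Python below (illustrative only; the baseline tracking, threshold, and example converter codes are assumed, and are not the algorithm of the disclosure): a running baseline follows slow drift from leakage, and an exposure is declared when a reading departs from that baseline by more than a chosen threshold.

```python
# Illustrative sketch: with the input held near a constant clamped level, an
# x-ray is declared when the converter reading departs from a slowly tracked
# baseline by more than a chosen threshold.

def detect_exposure(readings, threshold_codes, baseline_window=8):
    """Return the index at which a reading deviates from the baseline by more
    than threshold_codes, or None if no exposure is detected."""
    baseline = None
    for i, code in enumerate(readings):
        if i < baseline_window:
            baseline = code if baseline is None else (baseline + code) / 2.0
            continue
        if abs(code - baseline) > threshold_codes:
            return i  # change in input voltage -> exposure detected
        baseline = 0.95 * baseline + 0.05 * code  # slow tracking of drift/leakage
    return None

# Hypothetical converter output codes: quiet channel, then an x-ray arrives.
codes = [10000, 9998, 10001, 9999, 10000, 10002, 9997, 10001,
         10000, 9999, 9650, 9640, 9630]
print(detect_exposure(codes, threshold_codes=100))  # index 10: exposure detected
```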
[0058] FIGURE 9 is a flow chart illustrating a method 900 for automatic exposure detection in imaging applications, according to some embodiments of the disclosure. The method 900 includes, at step 902, entering a low power state at an imaging panel. Once the imaging panel is in a low power state, at step 904, an input signal voltage is restricted to a first voltage at a sense circuit in the imaging panel. In some examples, the input signal voltage is restricted by clamping the voltage. In some examples, restricting the input signal voltage comprises clamping the input signal voltage. In some examples, restricting the input signal voltage comprises clamping the input signal voltage at a diode. In other examples, clamping the input signal voltage comprises clamping the input signal voltage at a charge amplifier. In one example, the charge amplifier is optimized for x-ray sensing.[0059] At step 906, an input signal is received at the sense circuit in the imaging panel. In various examples, the input signal is an x-ray signal. At step 908, a change in the input signal voltage is sensed, wherein the change in the input signal voltage indicates exposure to an x-ray signal. In some examples, the change in the input signal voltage is sensed using a converter. At step 910, the imaging panel exits the low power state based on the change in the input signal voltage. In some examples, when the imaging panel exits the low power state, it enters an imaging mode.[0060] In some implementations, the imaging panel includes a readout integrated circuit (ROIC) having a signal chain, and entering the low power state includes reusing at least a part of the signal chain for the sense circuit. In some examples, the ROIC includes an integrator, and entering the low power state comprises powering down the integrator.[0061] According to various implementations, the systems and methods discussed herein can be used for imaging applications, such as x-rays and CT scans.[0062] In various implementations, the automatic exposure detection systems and methods discussed herein can be integrated into a digital x-ray analog front end. In one example, a digital x-ray analog front end has 256 channels and 16-bit resolution, and integrates the charge-to-digital conversion signal chain on a single chip. A digital x-ray analog front end enables a wide range of digital X-ray modalities, including portable radiology and mammography as well as high speed fluoroscopy and cardiac imaging. A digital x-ray analog front end can be delivered on a high density system-on-flex (SOF) package that can be directly mounted on a digital X-ray panel. In some examples, converted channel results are output on a single LVDS self-clocked serial interface, significantly reducing external hardware. A Serial Peripheral Interface (SPI)-compatible serial interface allows configuration of the digital x-ray analog front end, using serial digital interface input. Serial data output allows several digital x-ray analog front ends to be daisy-chained on a single 3-wire bus. In some examples, an integrated digital x-ray analog front end timing sequencer controls the sampling activity of the digital x-ray analog front end. The sequencer is programmed via the SPI port and is timed by a single clock.
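Tying together steps 902 through 910 of the method 900 of FIGURE 9, a minimal end-to-end control loop might be sketched in Python as follows (illustrative only; the panel object, its helper methods, the threshold, and the polling period are hypothetical placeholders, not an API from the disclosure):

```python
# Illustrative sketch of the method 900 flow: enter the low power state, hold
# the input at the clamped level, watch for a voltage change, then exit the low
# power state to imaging mode once exposure is detected. The panel object and
# its methods are hypothetical stand-ins for the actual hardware access.

import time

def automatic_exposure_detection(panel, threshold_volts, poll_period_s=0.001):
    panel.enter_low_power_state()            # step 902
    panel.clamp_input_to(panel.vt_voltage)   # step 904: restrict input to a first voltage
    baseline = panel.read_input_voltage()    # step 906: sample the clamped input
    while True:
        v = panel.read_input_voltage()
        if abs(v - baseline) > threshold_volts:   # step 908: change in input voltage
            panel.exit_low_power_state()          # step 910
            panel.enter_imaging_mode()
            return v
        time.sleep(poll_period_s)                 # slow line rate while sensing
```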
[0064] In one example embodiment, any number of electrical circuits of the FIGURES may be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processors (inclusive of digital signal processors, microprocessors, supporting chipsets, etc.), computer-readable non-transitory memory elements, etc. can be suitably coupled to the board based on particular configuration needs, processing demands, computer designs, etc. Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices may be attached to the board as plug-in cards, via cables, or integrated into the board itself. In various embodiments, the functionalities described herein may be implemented in emulation form as software or firmware running within one or more configurable (e.g., programmable) elements arranged in a structure that supports these functions. The software or firmware providing the emulation may be provided on a non-transitory computer-readable storage medium comprising instructions to allow a processor to carry out those functionalities.
[0065] In another example embodiment, the electrical circuits of the FIGURES may be implemented as stand-alone modules (e.g., a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application specific hardware of electronic devices. Note that particular embodiments of the present disclosure may be readily included in a system on chip (SOC) package, either in part, or in whole. An SOC represents an IC that integrates components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often radio frequency functions: all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of separate ICs located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the clocking and filtering functionalities may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.
[0066] It is also imperative to note that all of the specifications, dimensions, and relationships outlined herein (e.g., the number of processors, logic operations, etc.) have been offered for purposes of example and teaching only. Such information may be varied considerably without departing from the spirit of the present disclosure, or the scope of the appended claims. The specifications apply only to one non-limiting example and, accordingly, they should be construed as such. In the foregoing description, example embodiments have been described with reference to particular processor and/or component arrangements. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. 
The description and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.
[0067] Note that the activities discussed above with reference to the FIGURES are applicable to any integrated circuits that involve signal processing, particularly those that use sampled analog, some of which may be associated with processing real-time data. Certain embodiments can relate to multi-DSP signal processing, floating point processing, signal/control processing, fixed-function processing, microcontroller applications, etc.
[0068] In certain contexts, the features discussed herein can be applicable to medical systems, scientific instrumentation, wireless and wired communications, radar, industrial process control, audio and video equipment, current sensing, instrumentation (which can be highly precise), and other digital-processing-based systems.
[0069] Moreover, certain embodiments discussed above can be provisioned in digital signal processing technologies for medical imaging, patient monitoring, medical instrumentation, and home healthcare. This could include pulmonary monitors, accelerometers, heart rate monitors, pacemakers, etc. Other applications can involve automotive technologies for safety systems (e.g., stability control systems, driver assistance systems, braking systems, infotainment and interior applications of any kind). Furthermore, powertrain systems (for example, in hybrid and electric vehicles) can use high-precision data conversion products in battery monitoring, control systems, reporting controls, maintenance activities, etc.
[0070] In yet other example scenarios, the teachings of the present disclosure can be applicable in the industrial markets that include process control systems that help drive productivity, energy efficiency, and reliability. In consumer applications, the teachings of the signal processing circuits discussed above can be used for image processing, auto focus, and image stabilization (e.g., for digital still cameras, camcorders, etc.). Other consumer applications can include audio and video processors for home theater systems, DVD recorders, and high-definition televisions. Yet other consumer applications can involve advanced touch screen controllers (e.g., for any type of portable media device). Hence, such technologies could readily be part of smartphones, tablets, security systems, PCs, gaming technologies, virtual reality, simulation training, etc.
[0071] Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more electrical components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, and elements of the FIGURES may be combined in various possible configurations, all of which are clearly within the broad scope of this Specification. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of electrical elements. It should be appreciated that the electrical circuits of the FIGURES and their teachings are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. 
Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the electrical circuits as potentially applied to a myriad of other architectures.
[0072] Note that in this Specification, references to various features (e.g., elements, structures, modules, components, steps, operations, characteristics, etc.) included in "one embodiment", "example embodiment", "an embodiment", "another embodiment", "some embodiments", "various embodiments", "other embodiments", "alternative embodiment", and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments.
[0073] It is also important to note that some of the operations may be deleted or removed where appropriate, or these operations may be modified or changed considerably without departing from the scope of the present disclosure. In addition, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by embodiments described herein in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure.
[0074] Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words "means for" or "step for" are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.
OTHER NOTES, EXAMPLES, AND IMPLEMENTATIONS
[0075] Note that all optional features of the apparatus described above may also be implemented with respect to the method or process described herein and specifics in the examples may be used anywhere in one or more embodiments.
[0076] In a first example, a system is provided (that can include any suitable circuitry, dividers, capacitors, resistors, inductors, ADCs, DFFs, logic gates, software, hardware, links, etc.) that can be part of any type of computer, which can further include a circuit board coupled to a plurality of electronic components. 
The system can include means for clocking data from the digital core onto a first data output of a macro using a first clock, the first clock being a macro clock; means for clocking the data from the first data output of the macro into the physical interface using a second clock, the second clock being a physical interface clock; means for clocking a first reset signal from the digital core onto a reset output of the macro using the macro clock, the first reset signal output used as a second reset signal; means for sampling the second reset signal using a third clock, which provides a clock rate greater than the rate of the second clock, to generate a sampled reset signal; and means for resetting the second clock to a predetermined state in the physical interface in response to a transition of the sampled reset signal.[0077] The 'means for' in these instances (above) can include (but is not limited to) using any suitable component discussed herein, along with any suitable software, circuitry, hub, computer code, logic, algorithms, hardware, controller, interface, link, bus, communication pathway, etc. In a second example, the system includes memory that further comprises machine-readable instructions that when executed cause the system to perform any of the activities discussed above. |
In some embodiments, a system includes a circuit board, a first chip, and a second chip stacked on the first chip. The first chip is coupled between the circuit board and the second chip, and the first chip includes circuitry to repeat commands the first chip receives to the second chip. Other embodiments are described. |
1. A system comprising: a circuit board; a first chip; and a second chip stacked on the first chip, wherein the first chip is coupled between the circuit board and the second chip, and wherein the first chip includes circuitry to forward commands received by the first chip to the second chip.2. The system of claim 1, wherein the second chip typically operates at a power significantly higher than the first chip.3. The system of claim 1, further comprising a third chip stacked on the second chip, and a fourth chip stacked on the third chip, wherein the fourth chip typically operates at a power higher than the third chip.4. The system of claim 3, wherein the second and third chips do not forward commands to other chips.5. The system of claim 3, wherein the first and fourth chips typically operate at a power significantly higher than the second and third chips.6. The system of claim 1, wherein the first chip forwards address, write data, and clock signals to the second chip.7. The system of claim 9, wherein said circuit board is part of a memory module card and said memory module comprises an additional memory chip, wherein said additional memory chip is not part of said first and second chip stack.8. The system of claim 1, wherein the circuit board is a motherboard.9. The system of claim 1, further comprising a chip comprising a processor and a memory controller, and wherein the memory controller provides commands to the first chip.10. The system of claim 12, further comprising wireless transmitting and receiving circuitry coupled to the chip, wherein the chip includes the processor and a memory controller.11. The system of claim 1, further comprising a third chip stacked on the second chip, wherein the first and third chips typically operate at a higher power than the second chip, and the third chip typically operates at a higher power than the first chip.12. A system comprising: a circuit board; and stacked first, second, third, and fourth chips; wherein the first chip is coupled between the circuit board and the second chip, the second chip is coupled between the first chip and the third chip, and the third chip is coupled between the second chip and the fourth chip; and wherein the first chip and the fourth chip typically operate at a power significantly higher than the second chip and the third chip.13. The system of claim 12, further comprising a chip comprising a processor and a memory controller on a different side of the circuit board from the first, second, third, and fourth chips, wherein the memory controller provides commands to the first chip, and wherein the first, second, third, and fourth chips are memory chips.14. The system of claim 13, wherein the first chip forwards commands from the processor to the second and fourth chips.15. The system of claim 13, wherein the first chip provides read data to the second chip, the fourth chip provides read data to the third chip, and the second and third chips provide read data to the processor.16. A system comprising: a memory module circuit board; a first memory chip and a second memory chip, wherein the first memory chip is stacked between the circuit board and the second memory chip, and wherein the first memory chip forwards at least some commands to the second memory chip; and a third memory chip and a fourth memory chip, wherein the third memory chip is stacked between the second memory chip and the fourth memory chip.17. The system of claim 16, further comprising a chip, said 
chip including a memory controller for providing command, address, and write data signals to said first chip and for receiving read data signals from said second and third chips.18. The system of claim 16, further comprising a chip, the chip including a processor and a memory controller, wherein the memory controller provides commands to the first chip and receives read data signals from the second and third chips.19. The system of claim 16, wherein the first chip forwards commands from the processor to the second and fourth chips.20. The system of claim 16, further comprising: fifth, sixth, seventh, and eighth stacked memory chips; wherein the fifth memory chip is coupled between the memory module circuit board and the sixth memory chip, and the seventh memory chip is coupled between the sixth and eighth memory chips. |
Chip stack with a higher power chip on the outside of the stack
Technical field
A chip stack is described in which a higher power chip is placed in a position with better heat dissipation performance.
Background
Various settings for memory chips in a memory system have been proposed. For example, in a conventional synchronous dynamic random access memory (DRAM) system, a memory chip communicates data over a multi-drop bidirectional data bus and receives commands and addresses over a command and address bus. Recently, two-way or one-way point-to-point interconnections have been proposed.
In some systems, a chip (also known as a die) is stacked on top of another chip. These chips can all be of the same type or some chips may differ from other chips. For example, a set of memory chips (e.g., flash memory or DRAM) can be supported by a module substrate. A stack may include a chip having a memory controller. The stack may include a processor chip (with or without a memory controller) and a voltage regulator (VR) chip and perhaps other chips. The chip stack may be on one side of a printed circuit board (PCB) substrate and another chip or set of chips may be on the other side of the substrate. For example, the processor may be on one side of the substrate and the VR chip may be on the other side of the substrate. The VR chip and/or the processor chip can be part of a stack. For example, a heat sink can be included over the processor chip. One or more other heat sinks can also be used.
Various packaging techniques have been used to stack one chip onto another. For example, the stack and the substrate may in turn comprise the following components: a package substrate, a die attach material layer, a chip, a die attach material layer, a chip, a die attach material layer, a chip, etc., with wire bond conductors between the chips and the package substrate. The wire bond wires can be within the die attach material. A solder ball can be between the package substrate and another substrate. As another example, the solder balls can be positioned between package substrate layers and/or redistribution layers, wherein the chips are supported by the package substrate layers and/or the redistribution layers. Wire bonding can also be used in this example. Flip-chip technology can be used. Through-silicon vias can be used. The package mold can enclose multiple chips or each chip can have its own package. Various other packaging techniques have been used. Various heat dissipation technologies have been developed (e.g., fans, heat sinks, liquid cooling, etc.).
Systems have been proposed in which chips (such as memory chips) relay the signals they receive to other chips.
Many chips operate at higher performance over a specific temperature range. If the temperature becomes too high, the chip may malfunction. Throttling techniques have been developed to reduce the voltage and frequency of the chip, thereby reducing the temperature. However, at lower frequencies and voltages, the performance of the chip will also decrease. Accordingly, once the temperature of the chip is sufficiently low, the voltage and frequency may increase. Ideally, the temperature of the chip is always low enough to eliminate the need to reduce voltage and frequency.
The memory module includes a substrate on which a memory chip is placed. The memory chips can be placed only on one side of the substrate or on both sides of the substrate. In some systems, a buffer is also placed on the substrate. 
For at least some of the signals, the buffer is connected between a memory controller (or other buffer) and a memory chip on the module. In such a buffered system, the signaling used between the memory controller and the buffer (e.g., frequency and voltage values, and point-to-point versus multi-drop settings) may be different from the signaling used between the buffer and the memory chips.
A dual in-line memory module (DIMM) is an example of a memory module. Multiple modules may be in series and/or in parallel. In some memory systems, a memory chip receives a signal and forwards it to a next memory chip located in a series of two or more memory chips.
Memory controllers have been used in chipset hubs and in chips that include processor cores. Many computer systems include transmit and receive circuitry to allow the system to wirelessly connect to a network.
Drawings
The present invention will be more fully understood from the following detailed description of the embodiments of the invention.
Each of Figures 1-9 is a schematic block diagram illustrating stacked chips and a supporting substrate in accordance with some embodiments of the present invention;
Each of Figures 10-12 is a schematic block diagram illustrating stacked memory chips in accordance with some embodiments of the present invention;
Figure 13 is a thermal model of a stacked chip device similar to that of Figures 1 and 7;
Figure 14 is a schematic block diagram illustrating a system including a processor and a memory module in accordance with some embodiments of the present invention; and
Each of Figures 15-19 is a block diagram illustrating a system including a memory controller in accordance with some embodiments.
Detailed description
FIG. 1 illustrates a schematic diagram of a system including a substrate 10 for supporting a plurality of chips 12, 14, 16, and 18. For the sake of clarity, spacing is shown between the chips and between the chip 12 and the substrate 10, but in an actual implementation there will be some structure between them or they will be adjacent to each other. Chips 12-18 can be packaged. Substrate 10 can be, for example, a printed circuit board (PCB), but that is not required. In some embodiments, substrate 10 is a motherboard that supports a variety of other components. In other embodiments, substrate 10 is a card substrate (such as a memory module substrate or a graphics card substrate) that is supported in turn by a motherboard. Arrows 20 and 22 show the main directions of the heat flow (but of course not the only directions of heat flow). As can be seen, in the example of FIG. 1, chips 16 and 18 have heat dissipation primarily in the direction of arrow 20. Chip 14 has heat dissipation in both the directions of arrows 20 and 22, and chip 12 has heat dissipation primarily in the direction of arrow 22. Arrows 20 and 22 do not need to be aligned with the direction of gravity. The temperatures Tj12, Tj14, Tj16, and Tj18 represent the temperatures in the chips 12, 14, 16, and 18, respectively. Arrows 20 and 22 are just examples. Heat flows from a higher temperature to a lower temperature. In practice, the details of the heat flow may be different from what is shown and may vary as the temperatures of the chips change. The heat flow can also change when cooling is performed. Chips 12 and 18 are higher power chips, while chips 14 and 16 are lower power chips, which indicates that chips 12 and 18 typically operate at significantly higher power than chips 14 and 16. 
However, because chips 12 and 18 are placed on the outside of the stack, they can dissipate heat more effectively, and the temperatures Tj12 and Tj18 will be much lower than they would be if chips 12 and 18 were on the inside of the stack (like chips 14 and 16). In the system of Figure 1, chips 12 and 18 can be operated at a higher frequency and/or voltage than when they are placed inside the stack. Moreover, since chips 14 and 16 typically operate at lower power, they do not require the same heat dissipation as the higher power chips. In some embodiments, chips 14 and 16 typically operate at the same frequency and/or voltage as chips 12 and 18, although this is not required.
In some embodiments, Tj12, Tj14, Tj16, and Tj18 are about the same temperature, but in other embodiments, Tj12, Tj14, Tj16, and Tj18 are substantially different temperatures. Tj12 can be above or below Tj14 and Tj16. Tj18 can be above or below Tj14 and Tj16. Tj12 can be above or below Tj18. Tj14 can be above or below Tj16. The power at which chip 18 typically operates may be higher or lower than the power at which chip 12 typically operates. The power at which chip 16 typically operates may be higher or lower than the power at which chip 14 typically operates.
As used herein, significantly higher power means at least 20% greater. However, in some embodiments, the difference in power can be well over 20% and can even be more than a few hundred percent. Examples of power differences include between 20% and 50%, between 50% and 100%, between 100% and 200%, and greater than 200%.
Various heat dissipation technologies have been developed (e.g., fans, heat sinks, liquid cooling, etc.). The invention is not limited to any particular one of these techniques. In some embodiments, the frequency, voltage, and other characteristics of the chip can be throttled if the temperature or power consumption exceeds a threshold.
FIG. 2 shows a system in which substrate 24 supports chips 12, 14, 16, and 18 on one side of the substrate and chip 26 on the other side of substrate 24. Chip 26 is shown as higher power, but that is not required. Chip 26 can operate at a higher power than any of chips 12-18. Heat sinks 28 and 30 are shown attached to chips 26 and 18, respectively. Heat sinks can be used in conjunction with the chips in the other figures of the disclosure. A heat sink does not have to be on the top or bottom of the stack, but can also be on the side. The chips in Figure 2 can be packaged.
FIG. 3 illustrates a system in which substrate 30 supports lower power chip 32 and higher power chip 34. Arrows 20 and 22 show an exemplary heat flow.
FIG. 4 shows a system in which substrate 40 supports lower power chip 42, lower power chip 46, and higher power chip 48. Chip 42 can operate at a power above, below, or the same as chip 46. Chip 42 can be a "higher power" chip. Additional chips may be included between chips 42 and 46. The additional chips can be lower power chips.
FIG. 5 illustrates a system in which substrate 50 supports a higher power chip 52, a lower power chip 54, and a highest power chip 56, wherein chip 56 typically operates at a higher power than chip 52 operates.
FIG. 6 shows a system in which substrate 210 supports chip 212 (highest power), chip 214 (higher power), chip 216 (lower power), chip 218 (lowest power), chip 220 (lower power), chip 222 (higher power), and chip 224 (highest power). 
This illustrates a stack in which higher power chips are toward the outside of the stack, lower power chips are toward the inside of the stack, and the highest power chips are on the outside. Depending on the system, either the chip furthest from substrate 210 or the chip next to substrate 210 may obtain the best heat dissipation. As an alternative to the FIG. 6 system, chip 212 may be a higher power chip and chips 214-220 may be lower power chips. Additional chips can be included in the stack. There are many different possibilities, only a few of which are described in this disclosure. Different kinds of chips may be included in the stack, including one or more of the following: a processor chip, a memory chip, a VR chip, a memory buffer chip (see FIG. 16), a communication chip, and others. The processor chip can be in the same stack as the VR chip, the buffer chip, and the memory chip, or in a different stack, or not in a stack. There are many possibilities.
FIG. 7 illustrates a system in which substrate 10 supports a stack of chips 12, 14, 16, and 18. As an example, chips 12, 14, 16, and 18 can be memory chips (e.g., flash memory or DRAM) and substrate 10 can be a memory module substrate, but in other embodiments, chips 12, 14, 16, and 18 are not memory chips. The chips 12, 14, 16, and 18 are supported by package substrates 62, 64, 66, and 68, wherein the packages can extend completely around the chips 12, 14, 16, and 18 (see Figure 8). Solder balls 70 connect substrates 10 and 62, substrates 62 and 64, substrates 64 and 66, and substrates 66 and 68. In the example of Figure 7, wire bonds 72 are used, of which only a few are visible.
FIG. 8 illustrates a stack having three chips 82, 84, and 86 instead of the four chips of Figure 7. Figure 8 also shows packages 92, 94, and 96 that completely enclose the chips 82, 84, and 86. Solder balls 88 provide electrical connections. The stack of Figure 8 can include more or fewer chips.
FIG. 9 illustrates a substrate 100 supporting a stack of chips 102, 104, 106, and 108 that are not packaged. Solder balls 110 provide electrical connections. The stack of Figure 9 can include two, three, or more than four chips.
The invention is not limited to any particular type of packaging and signaling technology. For example, packaging techniques and signaling can include wire bonding, flip chip, package mold, package substrate, redistribution layer, through-silicon vias, and various other components and techniques. Although solder balls are shown, different materials can be used for electrical connections.
The systems of Figures 3-9 can include a chip on the other side of the substrate from the chips shown. The systems of Figures 1-9 can include additional stacks on either side of the substrate as well as additional chips in the stacks shown in the figures. There can be two higher power chips adjacent to each other. The substrates of Figures 1-9 can be, but do not have to be, printed circuit boards. They can be motherboards or some other substrate, such as a card.
Examples of chips in a stack are shown in Figures 10-12. The chips of Figures 10-12 may be memory chips that include a memory core for storing data. The substrates are not shown, but they can be as in Figures 1-9. The invention is not limited to the specific examples shown in Figures 10-12. The chips can include different details and relationships.
FIG. 10 shows a stack of chips 112 and 114. 
Chip 112 receives commands, addresses, and write data signals (CAW) and clock signals (Clk) that are transmitted (Tx) from another chip (e.g., a memory controller). In the example of Fig. 10, there are six channels of CAW and one channel of Clk, so the transmitted signals (Tx) are represented as 6.1. A channel can be a single conductor carrying a single-ended signal or two conductors carrying a differential signal. Chip 112 performs the operations of the commands passed to it and also forwards the CAW and clock signals to chip 114. Chip 114 performs the operations specified by the commands passed to it. Chip 112 provides a four channel read data signal and a single channel read clock signal (Rx 4.1) on conductor 122. Chip 114 provides a four channel read data signal and a single channel read clock signal (Rx 4.1) on conductor 124. Since it relays the CAW and clock signals, chip 112 can be referred to as a repeater chip. As shown below, in some embodiments, read data from one chip can be transferred to another chip that forwards the read data. Since the forwarding chip typically operates at a higher power, chip 112 can be placed on the outside of the stack, similar to chip 34 of FIG. 3. Chips 112 and 114 may be in the same rank, but this is not required.
Figure 11 shows a stack of chips 132, 134, 136, and 138. In some embodiments, chip 132 is closest to the substrate and chip 138 is furthest from the substrate. In other embodiments, chip 132 is the farthest. Chip 132 receives a six channel CAW signal and a one channel clock signal. Chip 132 executes the commands transmitted to it and also forwards the CAW and clock signals to chips 134 and 138. Chip 138 in turn forwards the CAW and clock signals to chip 136. The read data signal from the core of chip 132 is provided to chip 134. The read data signal from the core of chip 138 is provided to chip 136. Chip 134 provides read data from its own core and read data from chip 132 to conductor 142 along with the read clock signal. Chip 136 provides read data from its own core and read data from chip 138 to conductor 144 along with the read clock signal. In the example of FIG. 11, chips 132 and 138 are referred to as forwarding chips, and chips 134 and 136 are referred to as non-forwarding chips. Chips 134, 136, and 138 operate in accordance with the commands passed to them. Since the forwarding chips typically operate at a higher power, chips 132 and 138 can be placed on the outside of the stack. Chip 132 can be farthest from the PCB substrate, like chip 18. In the example of Figure 11, chips 134 and 138 are part of a first arrangement (chips that are accessed in common), and chips 132 and 134 are part of a second arrangement, but this is not required.
FIG. 12 shows a stack of memory chips 152, 154, 156, and 158. In some embodiments, chip 152 is closest to the substrate and chip 158 is furthest from the substrate. In other embodiments, chip 152 is the farthest. Chip 152 receives a six channel CAW signal and a one channel clock signal. Chip 152 executes the commands transmitted to it and also forwards the CAW and clock signals to chips 154, 156, and 158. Chips 154, 156, and 158 execute the commands transmitted to them. The read data signal from the core of chip 152 is provided to chip 154. The read data signal from the core of chip 154 is provided to chip 156. The read data signal from the core of chip 156 is provided to chip 158. 
In addition, chip 154 forwards the read data signal it receives from chip 152 to chip 156, and chip 156 forwards the read data signal it receives from chip 154 to chip 158. Chip 158 provides a four channel read data signal and a one channel read clock signal on conductor 164. (In other embodiments, conductor 164 can carry eight channels of read data and one or two channels of clock signals.) Chip 152 typically operates at a higher power than chips 154, 156, and 158 and, like chip 18, can be farthest from the PCB substrate. Chip 158 can typically operate at a higher power than chips 154 and 156 or at substantially the same power. Chip 154 can typically operate above or below the power of chip 156 or at the same power. Chips 152, 154, 156, and 158 may each be in a different arrangement, but this is not required.
Figure 13 illustrates a heat flow diagram in which Tj12, Tj14, Tj16, and Tj18 represent the temperatures of chips 12, 14, 16, and 18 in the stack of Figures 1 and 7, respectively. Tamb is the ambient temperature and Tb is the temperature of the substrate 10. Symbols q12, q14, q16, and q18 represent the power consumed by chips 12, 14, 16, and 18. The symbol qt represents the power flowing from the hottest chip in the direction away from the substrate 10, and qb represents the power flowing from the hottest chip in the direction toward the substrate 10. In the example of Figure 13, the hottest chip is shown as chip 14, but any other chip may be the hottest depending on the conditions. The symbol ψca represents the thermal resistance between the case of the chip package and the ambient air. The package case is optional. The symbol ψ18-c represents the thermal resistance between chip 18 and the case; ψ16-18 denotes the thermal resistance between chips 16 and 18; ψ14-16 denotes the thermal resistance between chips 14 and 16; ψ12-14 denotes the thermal resistance between chips 12 and 14; ψb-12 represents the thermal resistance between substrate 10 and chip 12; and ψba is the thermal resistance between substrate 10 and the ambient air. For example only, ψ16-18, ψ14-16, and ψ12-14 may be approximately 10 C/W, where C is Celsius and W is watts, but they may have other values as well.
Table 1 shows the results of an example thermal simulation of the model of Fig. 13. However, the invention is not limited to the details of Table 1, and other simulations may produce different results. Table 1 and the details mentioned are merely examples based on current understanding and may include errors. Moreover, the invention can be used with a wide variety of chips and systems, which is another reason why the simulation has limited effectiveness.
Table 1: Examples of thermal simulation results for the stacks of Figures 1 and 7.
In Table 1, "W" is watts and "C" is Celsius. "Conventional" refers to a stacked system in which higher and lower power chips are interleaved in the following order: substrate, higher power chip, lower power chip, higher power chip, lower power chip. In Table 1, "% non-uniformity" refers to the difference in power consumption between the higher and lower power chips. 
For example, in the two columns below "12.5% non-uniformity", the difference in power between the higher and lower power chips is 12.5%.
It is believed that, based on available packaging technology, the chip-to-chip thermal resistances ψ16-18, ψ14-16, and ψ12-14 (collectively ψo) can vary from ~1 C/W to ~10 C/W depending on the stacking technology, although the invention is not limited to these details. Depending on the non-uniformity of chip-to-chip power, the benefit that can be seen with the stacking techniques of Figures 1 and 7 can be ~1 to 3 C. In addition, since the rise in temperature can be linearly proportional to the increase in power, this benefit can increase as the DRAM power increases. This means a greater benefit for high power, high speed DRAM technologies. As an example, when the average chip power in Table 1 is doubled [0.49 W to 0.98 W], the stacking technique proposed in Figures 1 and 7 can produce a benefit over the conventional stacking method of ~2 x (111.0 - 108.5) C = 5.0 C for 50% power non-uniformity. In addition, for the case of ψo ~1 C/W (estimated for a typical chip stacking technique), the benefit of the stacking technique of Figures 1 and 7 may be a reduction in Tjmax of ~1.0-1.3 C for power non-uniformity up to ~50%.
In summary, based on the preliminary simulation, the proposed stacking method can lower Tjmax by ~1.0 C at one end of the range (ψo ~1 C/W, a chip-level stack) and by up to ~5 C at the other end (ψo ~10 C/W, a package-level stack) for different DRAM stack structures, where Tjmax is the maximum of all chip temperatures, and ψo is the thermal resistance between two adjacent chips in the stack. It is also possible to use the same method for stacks of two chips and eight chips, and the quantified benefits have yet to be determined. In general, the benefits for eight-DRAM stacks are expected to be greater than for four-DRAM stacks. Other conditions will produce different results.
In some embodiments, a stack in accordance with the present invention has the potential to provide higher performance/watt for high BW (bandwidth) applications, such as those required by multi-core and many-core CPUs running RMS (recognition, mining, synthesis) workloads. This can be an optimized thermal structure for multi-chip DRAM stacks that effectively provides higher performance/watt.
In some embodiments, the repeater DRAMs can consume ~13 to 50% additional power over the average chip power in the stack. Placing a higher power chip inside the stack rather than outside the stack may make the hottest chip in the stack hotter and more susceptible to performance throttling or to always operating below the desired frequency. Placing a higher power chip on the outside of the stack (as in Figure 7) can result in higher bandwidth/watt. For some embodiments, the difference between the higher and lower power chips may be much higher than 50%. For example, in a system including a processor chip and a memory chip, the processor chip may operate at several times the power of the memory chip.
In some embodiments, the chip includes circuitry that measures temperature and/or circuitry that estimates temperature based on activity per unit of time.
FIG. 14 shows a system having a memory module 180 that includes a module substrate 182 that supports a first stack, the first stack including a memory chip 184 having a memory core 186. Another stack includes a memory chip 188 having a memory core 190. The module 180 is inserted into slot 194, which is coupled to the motherboard 196. 
The motherboard also supports the processor chip 198. The CAW and clock signals of Figures 10-12 may be provided directly or indirectly by a memory controller located internal or external to processor chip 198. The read data and read clock signals of Figures 10-12 can be provided to the memory controller directly or indirectly.
Memory controllers and memory chips as described herein can be included in a variety of systems. For example, referring to FIG. 15, chip 404 includes a memory controller 406. Conductors 408-1...408-M each represent one of a plurality of unidirectional or bidirectional interconnects. A memory chip can forward signals to a next memory chip. For example, the memory chips of stacks 410-1...410-M forward certain signals to the memory chips of stacks 420-1...420-M through interconnects 416-1...416-M. Signals can also be forwarded to other chips in the same stack. The signals can include commands, addresses, and write data. The signals can also include read data. The read data can be sent directly from the chips of stacks 410-1...410-M to the memory controller 406 through interconnects 408-1...408-M. However, if the read data is forwarded from the chips of stacks 410-1...410-M to the chips of stacks 420-1...420-M, then in some embodiments it is not necessary for the read data to also be sent directly from the chips of stacks 410-1...410-M to the memory controller 406. Read data from the chips of stacks 420-1...420-M can be sent to memory controller 406 through interconnects 418-1...418-M. Interconnects 418-1...418-M are not included in some embodiments. Still referring to FIG. 15, the memory chips of stacks 410-1...410-M may be on one or both sides of the substrate 414 of the memory module 412. The chips of stacks 420-1...420-M may be on one or both sides of the substrate 424 of the memory module 422. Alternatively, the chips of stacks 410-1...410-M may be on the motherboard supporting the chip 404 and the module 424. In this case, substrate 414 represents a portion of the motherboard.
Figure 16 shows a system in which the chips of stacks 510-1...510-M are on one or both sides of the memory module substrate 514, and the chips of stacks 520-1...520-M are on one or both sides of the memory module substrate 524. In some embodiments, the memory controller 500 and the chips of stacks 510-1...510-M communicate with each other through the buffer 512, and the memory controller 500 and the chips of stacks 520-1...520-M communicate through buffers 512 and 522. In such a buffered system, the signaling used between the memory controller and the buffers can be different from the signaling used between the buffers and the memory chips. Some embodiments may include additional conductors that are not shown in FIG. 16. The buffer may be part of a stack that includes a memory chip.
FIG. 17 shows first and second channels 536 and 538 coupled to chip 532, which includes memory controller 534. Channels 536 and 538 are coupled to memory modules 542 and 544, respectively, which include chips as described herein.
In FIG. 18, a memory controller 552 (which represents any of the previously mentioned memory controllers) is included in chip 550, which also includes one or more processor cores 554. Input/output controller chip 556 is coupled to chip 550 and is also coupled to wireless transmit and receive circuitry 558. In FIG. 19, memory controller 552 is included in chip 574, which may be a hub chip. 
Chip 574 is coupled between chip 570 (which includes one or more processor cores 572) and input/output controller chip 578, which may be a hub chip. Input/output controller chip 578 is coupled to wireless transmit and receive circuitry 558.
Additional information and examples
The invention is not limited to any particular signaling technique or protocol. In an actual implementation of the systems in the figures, there will be additional circuitry, control circuitry, and perhaps interconnections other than those shown. When the figures show two modules connected by conductors, there may be intermediate circuits that are not shown. The shapes and relative sizes of the modules are not intended to represent actual shapes and relative sizes.
An embodiment is an implementation or an example of the invention. References in the specification to "an embodiment", "one embodiment", "some embodiments", or "other embodiments" mean that the particular features, structures, or characteristics described in connection with the embodiments are included in at least some embodiments of the invention, but not necessarily in all embodiments. The various appearances of "an embodiment", "one embodiment", or "some embodiments" are not necessarily all referring to the same embodiments.
When it is said that element "A" is coupled to element "B", element A may be directly coupled to element B or indirectly coupled to element B through, for example, element C.
When the specification or claims state that a component, feature, structure, process, or characteristic A "causes" a component, feature, structure, process, or characteristic B, it means that "A" is at least a partial cause of "B", but there may also be at least one other component, feature, structure, process, or characteristic that assists in causing "B".
If the specification states that a component, feature, structure, process, or characteristic "may", "might", or "could" be included, that particular component, feature, structure, process, or characteristic is not required to be included. If the specification or claims refer to "a" or "an" element, that does not mean there is only one of the element.
The invention is not limited to the specific details described herein. In fact, many other variations of the foregoing description and drawings may be made within the scope of the invention. Accordingly, it is the following claims, including any amendments thereto, that define the scope of the invention. |
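As an illustration of the one-dimensional thermal model of Figure 13 and the kind of comparison summarized in Table 1, the following minimal sketch in C solves the series thermal network exactly for a four-chip stack and compares the proposed ordering (higher power chips on the outside) with the interleaved "conventional" ordering. The resistance values, the chip powers, and the lumping of the board-side and case-side paths into single resistances are illustrative assumptions only and do not reproduce the actual simulation behind Table 1.

#include <stdio.h>

#define NCHIPS 4

/* Exact solution of a 1-D series thermal network like that of Figure 13:
 *   ambient - rbot - chip0 - rmid - chip1 - rmid - chip2 - rmid - chip3 - rtop - ambient
 * with power q[i] dissipated at each chip node. rbot lumps the board-side path
 * (psi_b-12 plus psi_ba) and rtop lumps the case-side path (psi_18-c plus psi_ca).
 * d is the heat flowing toward the board; it follows from requiring the temperature
 * drops around the chain between the two ambient boundaries to cancel. */
static double stack_tjmax(const double q[NCHIPS], double rbot, double rmid, double rtop)
{
    double cum[NCHIPS], qtot = 0.0;
    for (int i = 0; i < NCHIPS; i++) { qtot += q[i]; cum[i] = qtot; }

    double num = rtop * qtot, den = rbot + rtop;
    for (int k = 0; k < NCHIPS - 1; k++) { num += rmid * cum[k]; den += rmid; }
    double d = num / den;                        /* heat flowing down into the board path */

    double tj = rbot * d, tjmax = tj;            /* temperature rise of chip 0 over ambient */
    for (int k = 0; k < NCHIPS - 1; k++) {
        tj -= rmid * (cum[k] - d);               /* walk up the stack chip by chip */
        if (tj > tjmax) tjmax = tj;
    }
    return tjmax;
}

int main(void)
{
    const double hi = 0.6, lo = 0.4;                     /* W: assumed repeater / non-repeater powers */
    const double rmid = 10.0, rbot = 15.0, rtop = 15.0;  /* C/W: assumed thermal resistances */
    const double outside[NCHIPS]     = { hi, lo, lo, hi };  /* higher power chips on the outside */
    const double interleaved[NCHIPS] = { hi, lo, hi, lo };  /* "conventional" interleaved order  */

    printf("Tjmax rise, higher power outside : %.2f C\n", stack_tjmax(outside, rbot, rmid, rtop));
    printf("Tjmax rise, interleaved          : %.2f C\n", stack_tjmax(interleaved, rbot, rmid, rtop));
    return 0;
}

With these example numbers the outside placement lowers the maximum junction temperature rise by roughly 1 C; the exact benefit depends on the resistances and on the power non-uniformity, as discussed above in connection with Table 1.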
The present invention relates to methods and systems for updating a buffer. In one aspect, the present invention provides a method for updating a buffer, which includes strategically writing to the buffer to enable concurrent read and write to the buffer. The method eliminates the need for double buffering, thereby resulting in implementation cost and space savings compared to conventional buffering approaches. The method also prevents image tearing when used to update a frame buffer associated with a display, but is not limited to such applications. In another aspect, the present invention provides efficient mechanisms to enable buffer update across a communication link. In one example, the present invention provides a method for relaying timing information across a communication link. |
WHAT IS CLAIMED IS: 1. A method for updating a buffer having a plurality of lines, comprising: (a) determining a read line position in the buffer, said read line position indicating a line currently being read from the buffer; (b) partitioning the buffer into at least a first section that is safe to update and a second section that must not be updated based on the read line position; and (c) writing data at a line of the first section to update the buffer, wherein the line follows the second section based on the read line position. 2. The method of claim 1, wherein the read line position is determined by determining a read pointer value. 3. The method of claim 1, wherein the first section of the buffer comprises at least one of: (i) lines of the buffer that have been read in a last reading cycle of the buffer; and (ii) lines of the buffer that can be updated based on the read line position. 4. The method of claim 3, wherein (ii) further comprises lines of the buffer that can be updated prior to the read line position reaching said lines based on a buffer read speed and a buffer write speed. 5. The method of claim 1, wherein the second section of the buffer comprises lines of the buffer that cannot be updated prior to the read line position reaching said lines based on a buffer read speed and a buffer write speed. 6. The method of claim 5, wherein the second section of the buffer further comprises lines that must have been updated during a last reading cycle of the buffer. 7. The method of claim 1, wherein the buffer is written to by a first processor and is read by a second processor. 8. The method of claim 7, wherein the first and second processors communicate remotely through a communication link. 9. The method of claim 8, wherein the first processor updates the buffer based on a first event at the first processor that is triggered by a second event at the second processor. 10. The method of claim 9, further comprising: (d) scheduling the first event by writing to a register to enable the triggering of an interrupt that causes the first event based on the second event; and (e) triggering the second event at the second processor based on the read line position of the buffer. 11. The method of claim 10, wherein the first event represents a link wakeup event when the communication link is in hibernation mode. 12. The method of claim 8, wherein the first and second processors represent host and client controllers of a Mobile Display Digital Interface (MDDI) link. 13. The method of claim 9, wherein the first processor represents a Mobile Station Modem (MSM) baseband processor, and wherein the second processor represents an LCD controller. 14. The method of claim 13, wherein the buffer represents a frame buffer used for refreshing an LCD display. 15. The method of claim 14, wherein image tearing in the display is substantially avoided. 16. A method for conveying timing information across a communication link between a first processor and a second processor, wherein the communication link is in hibernation mode, comprising: scheduling a time event at the first processor to convey the timing information to the second processor; initiating a link wakeup by the first processor at the occurrence of the time event; and detecting the link wakeup at the second processor, and using the detected link wakeup timing to synchronize the first and second processors with respect to the conveyed timing information. 17. 
The method of claim 16, wherein the communication link represents a Mobile Display Digital Interface (MDDI) link. 18. The method of claim 17, wherein the first and second processors represent MDDI client and MDDI host, respectively. 19. The method of claim 18, wherein the timing information represents a buffer refresh time associated with a display being controlled across the MDDI link. |
METHODS AND SYSTEMS FOR UPDATING A BUFFER
BACKGROUND
Field of the Invention
[0001] The present invention relates generally to methods and systems for updating a buffer. More particularly, the invention relates to methods and systems for updating a buffer across a communication link.
Background of the Invention
[0002] In the field of interconnect technologies, demand for ever increasing data rates, especially as related to video presentations, continues to grow.
[0003] The Mobile Display Digital Interface (MDDI) is a cost-effective, low power consumption, transfer mechanism that enables very-high-speed data transfer over a short-range communication link between a host and a client. MDDI requires a minimum of just four wires plus power for bi-directional data transfer that delivers a maximum bandwidth of up to 3.2 Gbits per second.
[0004] In one application, MDDI increases reliability and decreases power consumption in clamshell phones by significantly reducing the number of wires that run across a handset's hinge to interconnect the digital baseband controller with an LCD display and/or a camera. This reduction of wires also allows handset manufacturers to lower development costs by simplifying clamshell or sliding handset designs.
[0005] In controlling an LCD display across an MDDI link, one problem that arises relates to image flickering when the display is refreshed. Typically, what is needed is either a long persistence conversion or a refresh rate that is higher than what the human eye can perceive. Long persistence conversion results in image smearing when images appear to move. Therefore, it is desirable for the display to have a high refresh rate. A typical problem that occurs, however, is image tearing. The problem is that while the display is being refreshed at a high rate, the frame buffer associated with the display is being filled at a slower rate. As a result, the display image may reflect both updated and old image information within the same frame of the display.
[0006] In one solution, multiple buffers are used and image information is cycled through the multiple buffers to avoid the image tearing problem described above. This includes commonly known "double buffering" approaches. The drawback of such a solution, however, is clearly the increased cost and chip space requirements of implementation.
[0007] What is needed therefore are methods and systems to enable buffer update solutions that solve the above described problems while satisfying the cost and space requirements of MDDI applications.
SUMMARY
[0008] The present invention relates to methods and systems for updating a buffer.
[0009] In one aspect, the present invention provides a method for updating a buffer, which includes strategically writing to the buffer to enable concurrent read and write to the buffer. The method eliminates the need for double buffering, thereby resulting in implementation cost and space savings compared to conventional buffering approaches. Among other advantages, the method prevents image tearing when used to update a frame buffer associated with a display, but is not limited to such applications.
[0010] In another aspect, the present invention provides efficient mechanisms to enable buffer update across a communication link. In one example, the present invention provides a method for relaying timing information across a communication link. 
The method, however, is not limited to relaying timing information, and may be used in more general contexts as can be understood by persons skilled in the art(s) based on the teachings herein.
[0011] Further embodiments, features, and advantages of the present invention, as well as the structure and operation of the various embodiments of the present invention, are described in detail below with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the pertinent art to make and use the invention.
[0013] FIG. 1 is a block diagram that illustrates an example environment using a Mobile Display Digital Interface (MDDI) interface.
[0014] FIG. 1A is a diagram of a digital data device interface coupled to a digital device and a peripheral device.
[0015] FIG. 2 is a block diagram that illustrates an MDDI link interconnection according to an embodiment of the example of FIG. 1.
[0016] FIG. 3 is an example that illustrates the image tearing problem.
[0017] FIG. 4 is a process flowchart that illustrates a method for updating a buffer according to the present invention.
[0018] FIG. 5 illustrates examples of the method of FIG. 4.
[0019] FIGs. 6A and 6B illustrate buffer read/write strategies.
[0020] FIG. 7 is a process flowchart that illustrates a method for conveying timing information across a communication link according to the present invention.
[0021] FIG. 8 illustrates an example signal timing diagram for initiating MDDI link wakeup to convey timing information.
[0022] The present invention will be described with reference to the accompanying drawings. The drawing in which an element first appears is typically indicated by the leftmost digit(s) in the corresponding reference number.
DETAILED DESCRIPTION
[0023] This specification discloses one or more embodiments that incorporate the features of this invention. The disclosed embodiment(s) merely exemplify the invention. The scope of the invention is not limited to the disclosed embodiment(s). The invention is defined by the claims appended hereto.
[0024] The embodiment(s) described, and references in the specification to "one embodiment", "an embodiment", "an example embodiment", etc., indicate that the embodiment(s) described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
[0025] Embodiments of the invention may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the invention may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). 
For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others. Further, firmware, software, routines, instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc.Mobile Display Digital Interface (MDDI) [0026] The Mobile Display Digital Interface (MDDI) is a cost-effective, low power consumption, transfer mechanism that enables very-high-speed serial data transfer over a short-range communication link between a host and a client.[0027] Li the following, examples of MDDI will be presented with respect to a camera module contained in an upper clamshell of a mobile phone. However, it would be apparent to persons skilled in the relevant art(s) that any module having functionally equivalent features to the camera module could be readily substituted and used in embodiments of this invention.[0028] Further, according to embodiments of the invention, an MDDI host may comprise one of several types of devices that can benefit from using the present invention. For example, the host could be a portable computer in the form of a handheld, laptop, or similar mobile computing device. It could also be a Personal Data Assistant (PDA), a paging device, or one of many wireless telephones or modems. Alternatively, the host could be a portable entertainment or presentation device such as a portable DVD or CD player, or a game playing device. Furthermore, the host can reside as a host device or control element in a variety of other widely used or planned commercial products for which a high speed communication link is desired with a client. For example, a host could be used to transfer data at high rates from a video recording device to a storage based client for improved response, or to a high resolution larger screen for presentations. An appliance such as a refrigerator that incorporates an onboard inventory or computing system and/or Bluetooth connections to other household devices, can have improved display capabilities when operating in an internet or Bluetooth connected mode, or have reduced wiring needs for in-the-door displays (a client) and keypads or scanners (client) while the electronic computer or control systems (host) reside elsewhere in the cabinet. In general, those skilled in the art will appreciate the wide variety of modern electronic devices and appliances that may benefit from the use of this interface, as well as the ability to retrofit older devices with higher data rate transport of information utilizing limited numbers of conductors available in either newly added or existing connectors or cables. At the same time, an MDDI client may comprise a variety of devices useful for presenting information to an end user, or presenting information from a user to the host. For example, a micro-display incorporated in goggles or glasses, a projection device built into a hat or helmet, a small screen or even holographic element built into a vehicle, such as in a window or windshield, or various speaker, headphone, or sound systems for presenting high quality sound or music. 
Other presentation devices include projectors or projection devices used to present information for meetings, or for movies and television images. Another example would be the use of touch pads or sensitive devices, voice recognition input devices, security scanners, and so forth that may be called upon to transfer a significant amount of information from a device or system user with little actual "input" other than touch or sound from the user. In addition, docking stations for computers and car kits or desk-top kits and holders for wireless telephones may act as interface devices to end users or to other devices and equipment, and employ either clients (output or input devices such as mice) or hosts to assist in the transfer of data, especially where high speed networks are involved. However, those skilled in the art will readily recognize that the present invention is not limited to these devices, there being many other devices on the market, and proposed for use, that are intended to provide end users with high quality images and sound, either in terms of storage and transport or in terms of presentation at playback. The present invention is useful in increasing the data throughput between various elements or devices to accommodate the high data rates needed for realizing the desired user experience.[0029] FIG. IA is a diagram of a digital data device interface 100 coupled to a digital device 150 and a peripheral device 180. Digital device 150 can include, but is not limited to, a cellular telephone, a personal data assistant, a smart phone or a personal computer. In general digital device 150 can include any type of digital device that serves as a processing unit for digital instructions and the processing of digital presentation data. Digital device 150 includes a system controller 160 and a link controller 170.[0030] Peripheral device 180 can include, but is not limited to, a camera, a bar code reader, an image scanner, an audio device, and a sensor, hi general peripheral 180 can include any type of audio, video or image capture and display device in which digital presentation data is exchanged between a peripheral and a processing unit. Peripheral 180 includes control blocks 190. When peripheral 180 is a camera, for example, control blocks 190 can include, but are not limited to lens control, flash or white LED control and shutter control. Digital presentation data can include digital data representing audio, image and multimedia data. [0031] Digital data interface device IOO transfers digital presentation data at a high rate over a communication link 105. In one example, an MDDI communication link can be used which supports bi-directional data transfer with a maximum bandwidth of 3.2 Gbits per second. Other high rates of data transfer that are higher or lower than this example rate can be supported depending on the communications link. Digital data interface device 100 includes a message interpreter module 110, a content module 120, a control module 130 and a link controller 140.[0032] Link controller 140, which is located within digital data interface 100, and link controller 170, which is located within digital device 150 establish communication link 105. 
Link controller 140 and link controller 170 may be MDDI link controllers.[0033] The Video Electronics Standards Association ("VESA") MDDI Standard, which is incorporated herein by reference in its entirety, describes the requirements of a highspeed digital packet interface that lets portable devices transport digital images from small portable devices to larger external displays. MDDI applies a miniature connector system and thin flexible cable ideal for linking portable computing, communications and entertainment devices to emerging products such as wearable micro displays. It also includes information on how to simplify connections between host processors and a display device, in order to reduce the cost and increase the reliability of these connections. Link controllers 140 and 170 establish communication path 105 based on the VESA MDDI Standard.[0034] U.S. Patent No. 6,760,772, entitled Generating and Implementing aCommunication Protocol and Interface for High Data Rate Signal Transfer, issued to Zou et al. on July 6, 2004 ('772 Patent") describes a data interface for transferring digital data between a host and a client over a communication path using packet structures linked together to form a communication protocol for presentation data. Embodiments of the invention taught in the '772 Patent are directed to an MDDI interface. The signal protocol is used by link controllers, such as link controllers 140 and 170, configured to generate, transmit, and receive packets forming the communications protocol, and to form digital data into one or more types of data packets, with at least one residing in the host device and being coupled to the client through a communications path, such as communications path 105.[0035] The interface provides a cost-effective, low power, bi-directional, high-speed data transfer mechanism over a short-range "serial" type data link, which lends itself to implementation with miniature connectors and thin flexible cables. An embodiment of link controllers 140 and 170 establishes communication path 105 based on the teachings of the '772 Patent. The '772 Patent is herein incorporated by reference in its entirety.[0036] In other embodiments, link controllers 140 and 170 can both be a USB link controller or they both can include a combination of controllers, such as for example, an MDDI link controller and another type of link controller, such as, for example, a USB link controller. Alternatively, link controllers 140 and 170 can include a combination of controllers, such as an MDDI link controller and a single link for exchanging acknowledgement messages between digital data interface device 100 and digital device 150. Link controllers 140 and 170 additionally can support other types of interfaces, such as an Ethernet or RS-232 serial port interface. 
Additional interfaces can be supported as will be known by individuals skilled in the relevant arts based on the teachings herein.[0037] Within digital data interface device 100, message interpreter module 110 receives commands from and generates response messages through communication link 105 to system controller 160, interprets the command messages, and routes the information content of the commands to an appropriate module within digital data interface device 100.[0038] Content module 120 receives data from peripheral device 180, stores the data and transfers the data to system controller 160 through communication link 105.[0039] Control module 130 receives information from message interpreter 130, and routes information to control blocks 190 of peripheral device 180. Control module 130 can also receive information from control blocks 190 and routes the information to the message interpreter module 110.[0040] FIG. 1 is a block diagram that illustrates an example environment using anMDDI interface. In the example of FIG. 1, MDDI is used to interconnect modules across the hinge of a clamshell phone 100.[0041] Referring to FIG. 1, a lower clamshell section 102 of clamshell phone 100 includes a Mobile Station Modem (MSM) baseband chip 104. MSM 104 is a digital baseband controller. An upper clamshell section 114 of clamshell phone 100 includes a Liquid Crystal Display (LCD) module 116 and a camera module 118.[0042] Still referring to FIG. 1, an MDDI link 110 connects camera module 118 toMSM 104. Typically, an MDDI link controller is integrated into each of camera module 118 and MSM 104. In the example of FIG. 1, an MDDI Host 122 is integrated into camera module 112, while an MDDI Client 106 resides on the MSM side of the MDDI link 110. Typically, the MDDI host is the master controller of the MDDI link. In the example of FIG. 1, pixel data from camera module 118 are received and formatted into MDDI packets by MDDI Host 122 before being transmitted onto MDDI link 110. MDDI client 106 receives the MDDI packets and re-converts them into pixel data of the same format as generated by camera module 118. The pixel data are then sent to an appropriate block in MSM 104 for processing.[0043] Still referring to FIG. 1, an MDDI link 112 connects LCD module 116 to MSM104. hi the example of FIG. 1, MDDI link 112 interconnects an MDDI Host 108, integrated into MSM 104, and an MDDI Client 120 integrated into LCD module 116. In the example of FIG. 1, image data generated by a graphics controller of MSM 104 are received and formatted into MDDI packets by MDDI Host 108 before being transmitted onto MDDI link 112. MDDI client 120 receives the MDDI packets and reconverts them into image data for use by LCD module 116. Typically, image data is buffered using a frame buffer before being used to refresh the LCD display.[0044] FIG. 2 is a block diagram that illustrates MDDI link interconnection 112 according to the example of FIG. 1. As described above, one of the functions of MDDI link 112 is to transfer image data from MSM 104 to LCD Module 116. A frame interface (not shown in FIG. 2) connects MDDI link controller 120 to modules of LCD Module 116. Similarly, another frame interface (not shown in FIG. 2) connects MDDI link controller 108 to appropriate modules of MSM 104. Typically, MDDI link controller 108 represents the host controller of the MDDI link, while MDDI link controller 120 represents the client controller of the MDDI. 
Other implementations, however, may reverse the roles of the two controllers.[0045] MDDI link 112 includes a minimum of four wires, comprising two wires for data signals 202 and 204 and two wires for probe signals 206 and 208, in addition to two wires for power signals 210 and 211. Data signals 202 and 204 are bi-directional. Accordingly, data can be transmitted in either direction (from host to client and vice versa) using data signals 202 and 204. Strobe signals 206 and 208 are unidirectional, and may only be driven by the host controller of the link. Accordingly, in the example of FIG. 2, only host controller 108 may drive strobe signals 206 and 208. Method and Systems for Updating a Buffer[0046] As described above, MDDI can be used to connect a baseband processor (MSM104 in FIG. 2, for example) and a graphics controller (LCD module 116 in FIG. 2, for example). The baseband processor channels image information, typically received from a camera sensor, to the graphics controller, which uses the image information to create a display image. Typically, the graphics controller employs one or more frame buffers to store the image information received from the baseband processor before using it to generate the display image. As described above, image tearing is one problem that occurs. This happens when the image information is being read out of the frame buffer at a rate slower or faster than the rate at which it is being written to the frame buffer. Methods and systems for updating a buffer, which, among other advantages, solve the image tearing problem, will be described herein. It should be noted, however, that methods and systems according to the present invention are not limited to the specific exemplary embodiments in which they will described or to being used in an MDDI environment. Further, methods and systems of the present invention can be employed in various other applications that utilize buffering, and that may benefit from the advantages of the present invention.Image Tearing[0047] FIG. 3 illustrates two examples of image tearing that can occur while reading from and/or writing to a buffer. The diagram of FIG. 3 shows plots of read and write pointers as functions of buffer position and time. The read pointer represents the position in the buffer that is being read. The write pointer indicates the position in the buffer that is being written to. In the example of FIG. 3, the buffer position is defined in terms of pixel position in the buffer.[0048] In the first example in FIG. 3, the buffer is being read at a slower rate than it is written to. This is illustrated by the relative slopes of read and write pointer lines 302 and 304. Note that read and write pointer lines 302 and 304 intersect at time t0. Before time to, pixels in the buffer are being read prior to being updated. After time to, pixels are being updated prior to be read. Accordingly, within the same frame (from time 0 to time ti), pixels in positions 0 to p0 (which corresponds to the pixel position read at time to) are read with older image information relative to pixels from position po to the last pixel in the buffer, which are read with updated image information. The result is image tearing with a lower portion of the image reflecting newer image information relative to an upper portion of the image.[0049] In the second example in FIG. 3, the buffer is being read at a faster rate than it is written to. This is illustrated by the relative slopes of read and write pointer lines 302 and 306. 
Read and write pointer lines 302 and 306 intersect at time t2. Before time t2, pixels in the buffer are being updated prior to being read. After time t2, pixels are being read prior to being updated. Accordingly, within the same frame (from time t\ to time t3), pixels in positions 0 to p2 (which corresponds to the pixel position read at time t2) are read with newer image information relative to pixels from position p2 to the last pixel in the buffer, which are read with old image information. The result is image tearing with an upper portion of the image reflecting newer image information relative to a lower portion of the image.Method for Updating a Buffer[0050] A method to strategically update a buffer will now be provided. The method prevents image tearing when used to update a frame buffer associated with a display. The method may also be used in other buffering applications based on its apparent advantages as will be described herein.[0051] FIG. 4 is a process flowchart 400 that illustrates a method for updating a buffer according to the present invention. Process flowchart 400 begins in step 410, which includes determining a read line position in the buffer. The read line position indicates a line currently being read from the buffer. Typically, step 410 is achieved by determining the value of a read pointer that points to the read line position in the buffer.[0052] Step 420 includes partitioning the buffer into at least a first section that is safe to update and a second section that must not be updated based on the read line position. It is noted here that partitioning the buffer does not refer here to a physical but to a logical partitioning of the buffer. Further, a logical partition of the buffer is not fixed and may change as will be understood from the teachings herein. The first section of the buffer includes lines of the buffer that have been read within the current buffer reading cycle based on the read line position. The first section also includes lines of the buffer that can be updated based on the read line position. Li other words, the first section includes lines whose content has just been read or lines that can be updated prior to the read line position reaching them based on the buffer read speed and the buffer write speed. Lines that cannot be updated prior to the read line position reaching them based on the buffer read speed and the buffer write speed belong to the second section of the buffer. In other words, lines of the second section of the buffer are those for which there is not sufficient time to update before they have to be read. Accordingly, lines of the second section of the buffer must have been updated during the last reading cycle of the buffer. [0053] Step 430 includes updating the buffer by writing data at a line of the first section which follows the second section based on the read line position. Typically, the buffer is updated at a position which is both safe to update as described above and which has already been read during the last reading cycle of the buffer. In one embodiment, step 430 includes writing data at a line of the first section which immediately follows the last line of the second section. Other variations of step 430 may also be possible as will be apparent to a person skilled in the art based on the teachings disclosed herein.Example Illustration[0054] FIG. 5 provides examples that illustrate the method described above in FIG. 4.FIG. 5 shows three examples A, B, and C of reading a buffer 500. 
For purposes of illustration only, buffer 500 is shown to include 352 lines of data. A read pointer 510 indicates the read line position in the buffer. Sections labeled with the roman numeral "I" represent lines that belong to the first section of the buffer as described above. Sections labeled with the roman numeral "II" represent lines that belong to the second section of the buffer as described above.[0055] hi example A, shaded area "I" represents lines of the first section of the buffer which have already been read during the current reading cycle of the buffer, hi the example, this area includes lines 1 through m-1. Read pointer 510 indicates that line m is currently being read. Accordingly, area "II" in example A represents lines of buffer 500 that cannot be updated based on the current position of read pointer 510. In other words, there is no sufficient time to update lines in area "II" based on the current position of read pointer 510 and the read and write speeds to the buffer. Note that the first section of the buffer also includes an unshaded area "I" below area "II". This area "I" belongs to the first section as it is safe to update, but should not be updated given that it has not been read during the current reading cycle of the buffer. Updating unshaded area "I" prior to reading it would result in image tearing, as described in FIG. 3, where the upper portion of the image reflects older image information relative to the lower portion of the image.[0056] hi example B, the shaded area represents lines of the buffer which have already been read during the current reading cycle of the buffer. In the example, this area includes lines 1 through 351. Read pointer 510 indicates that line 352 is currently being read. Accordingly, area "II" in example B represents lines that must have been updated given the current read line position. Lines in area "II" cannot be updated based on the current read line position and the read and write speeds to the buffer, and belong to the second section of the buffer based on the description above. Lines in area "I" belong to the first section of the buffer, and are safe to update. To update the buffer, writing can begin in area "I". Data can be written at a line in area "I" that immediately follows area "II". This corresponds to line m in example B.[0057] Example C illustrates a scenario subsequent to the one shown in B. In exampleC, read pointer 510 has wrapped around and is reading line m of the buffer. Accordingly, lines preceding the read pointer in the buffer belong to the first section of the buffer, and may be updated. Lines in area "II" must have been updated during the last write cycle to the buffer given the current read line position. Lines in area "II" cannot be updated, and belong to the second section of the buffer as described above. In other words, lines in area "II" must contain updated information given the read line position, as there is not sufficient time to update them before they have to be read. Shaded area "I" represents lines of the first section of the buffer that are safe to update, but should not be updated given that they have not been read during the last reading cycle of the buffer.Buffer Read/Write Strategies[0058] Buffer read/write strategies to avoid image tearing or equivalent problems related to buffer update are described herein. Buffer update strategies according to the present invention further eliminate the need for the commonly adopted "double buffering" technique. 
Instead, a single buffer is used, which results in both implementation cost and space savings. The present invention is not limited to the exemplary strategies described herein, and variations which are apparent to persons skilled in the art(s) are also considered to be within the scope of the present invention.[0059] FIGs. 6A and 6B illustrate exemplary buffer read/write strategies according to the present invention. The diagrams of FIGs. 6 A and 6B show plots of read pointer 612 and write pointers 614 and 616 as functions of buffer position and time. In the examples of FIGs. 6A and 6B, the buffer position is defined in terms of pixel position in the buffer, which may be equivalently replaced with any other measure of buffer position, such as line number, for example.[0060] Referring to FIG. 6A, an exemplary buffer read/write strategy is depicted over two reading cycles of the buffer. In the first reading cycle, from time 0 to time U, the first half of the buffer is updated, while the entire buffer content is read. In the second reading cycle of the buffer, from time t! to time t2, the second half of the buffer is updated, while the entire buffer content is read. Note that the first half of the buffer, during the second reading cycle, contains updated information that were written to the buffer during the first reading cycle. The second half of the buffer, during the second cycle, is updated prior to being read as shown by write pointer 614 preceding read pointer 612 in time over the second reading cycle. Accordingly, over both reading cycles, data read from the buffer belongs to the same update cycle of the buffer, and no image tearing occurs.[0061] FIG. 6B illustrates another exemplary buffer read/write strategy over two reading cycles of the buffer. During the first reading cycle, the first half of the buffer is updated from time to to time t\. During the second reading cycle, the second half of the buffer is updated from time ti to time t2. Note that writing to the buffer starts at a time to during the first cycle such that, during the first cycle, the entire buffer is read with an initial information content and not an updated content due to the writing process. On the other hand, writing to the buffer ends at a time t2 during the second cycle such that, during the second cycle, the entire buffer contains updated information content when it is read. This is shown by write pointer 616 preceding read pointer 612 in time over the second reading cycle. Accordingly, image tearing will not occur over both reading cycles in the example of FIG. 6B.Buffer Update Through a Communication Link [0062] Methods and systems for updating a buffer according to the present invention may be used in a variety of applications. In one application, as described above, the buffer update approach may be used to update a frame buffer associated with a display. In another application, the buffer is updated remotely, wherein it is written to by a first processor and is read by a second processor, and wherein the first and second processors communicate through a communication link. For example, the first and second processors represent an MSM baseband processor and an LCD module, respectively, that communicate through an MDDI link, as illustrated in FIG. 2. In certain applications, synchronization between the first and second processors will be required.[0063] Methods and systems related to synchronization to enable buffer update across a communication link will now be provided. 
As will be understood by a person skilled in the art(s) based on the teachings herein, certain aspects of the methods and systems that will be presented may be applicable to synchronization problems in general, and are not limited to synchronization for enabling remote buffer update.[0064] In one aspect, synchronization between the first and second processors includes scheduling a first event at the first processor that is triggered by a second event at the second processor. This is typically done by writing to a register to enable the triggering of an interrupt that causes the first event at the first processor whenever the second event occurs at the second processor. For example, in a remote buffer update application, where the buffer is updated by the first processor and read by the second processor, the first event may represent the need to start writing to the buffer, while the second event may represent that the read pointer has finished a complete reading cycle of the buffer. The second event may then be triggered at the second processor based on the read line position in the buffer.[0065] In another aspect, methods to convey synchronization information across the communication link are provided. The methods may be employed to relay synchronization information related to buffer update, as described above, for example. FIG. 7 is a process flowchart 700 that illustrates a method for conveying timing information across a communication link between a first processor and a second processor, when the communication link is in hibernation mode. Process flowchart 700 begins in step 710, which includes scheduling a time event at the first processor to convey timing information to the second processor. The time event may be a periodic event as required by the specific application. For example, in the case of a buffer update application, the time event may be related to the read line position in the buffer.[0066] Step 720 includes initiating a link wakeup by the first processor at the occurrence of the time event. For example, in the case of a buffer update across an MDDI link, where an MDDI client is located at the LCD module side of the interconnection, the MDDI client may initiate a link wakeup by driving the data signal to a logic one to notify the MDDI host that the buffer should be updated.[0067] Subsequently, step 730 includes detecting the link wakeup at the second processor (for example, an MDDI host on the MSM side of the MDDI interconnection), and using the detected link wakeup timing to synchronize the first and second processors with respect to the timing information that is being conveyed. For example, in the case of a buffer update across an MDDI link, when the MDDI host detects the link wakeup by the MDDI client, it can synchronize itself with the MDDI client with respect to the buffer update start time.[0068] It can be appreciated by a person skilled in the art based on the teachings herein that the method described in FIG. 7 may be extended to convey any kind of timing information across a communication link, and is not limited to buffer update synchronization purposes. The advantages of such method are through saving the link and conveying information by simply waking the link up.[0069] FIG. 8 illustrates an example timing diagram 800 for initiating link wakeup to convey timing information across an MDDI interconnection. For example, the MDDI interconnection may be such as the one described above with reference to FIG. 
2 with an MDDI host located at the MSM and an MDDI client located at the LCD module. The MDDI client, accordingly, would initiate a link wakeup to convey buffer update information to the MDDI host, which, in turn, would start refreshing the buffer located in the LCD module. In the example of FIG. 8, vsync_wake signal 802 represents a value written to a register at the MDDI host to enable a wakeup at the host based on vsync signal 806. Wakeup at the host occurs whenever the value of vsync_wake 802 is high. Vsync signal 806 represents a value of a signal "vertical sync", which occurs at the client and is related to buffer update time. For example, vsync 806 goes high whenever the read pointer has wrapped and is reading from the beginning of the buffer. Link_active signal 804 represents whether or not the data signal of the MDDI interconnection is active or in hibernation. Mddi_client_wakeup signal 808 represents a signal at the client, which responds to vsync 806 to wake up the client.[0070] hi the example of FIG. 8, vsync_wake 802 is set at the host at time A. At timeB, the MDDI link goes into hibernation mode. At time C, vsync 806 goes high indicating that the buffer needs to be refreshed by the host. As a result, mddi_client_wakeup 808 also goes high to wake the client up to initiate the link wakeup. The client initiates the link wakeup by driving the data signal of the interconnection, and the link goes active at time D. Subsequently, vsync_wake 802 and mddi_client_wakeup return to zero, and the host detects the link wakeup and begins to refresh the buffer at the client.Conclusion[0071] While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. |
The invention set forth herein describes a mechanism for predicated execution of instructions within a parallel processor executing multiple threads or data lanes. Each thread or data lane executing within the parallel processor is associated with a predicate register that stores a set of 1-bit predicates. Each of these predicates can be set using different types of predicate-setting instructions, where each predicate setting instruction specifies one or more source operands, at least one operation to be performed on the source operands, and one or more destination predicates for storing the result of the operation. An instruction can be guarded by a predicate that may influence whether the instruction is executed for a particular thread or data lane or how the instruction is executed for a particular thread or data lane. |
We Claim: 1. A computer-implemented method for accessing predicate information associated with a thread group, the method comprising: receiving a first instruction for execution by the thread group, wherein the first instruction specifies a first source operand identifier, an operation, and a first destination predicate identifier; for each thread in the thread group, computing a predicate result by applying the operation to data in a first source operand identified by the first source operand identifier; and storing the predicate result in a first predicate register associated with the thread and identified by the first destination predicate identifier, wherein the first source register and the first predicate register are different for each thread in the thread group. 2. The method of claim 1 , wherein the operation compares the data in the first source operand with zero. 3. The method of claim 1 , wherein the first instruction further specifies a second source operand and the operation compares the data in the first source register with the data in the second source operand. 4. The method of claim 3, wherein the first instruction further specifies a third source operand identifier and a combinatorial operation, and the step of computing a predicate result also includes applying the combinatorial operation to data in a third source operand identified by the third source operand identifier and the comparison of the data in the first source operand and the second source operand. 5. The method of claim 4, wherein the third source operand is a predicate. 6. The method of claim 3, wherein the first instruction further specifies a second destination predicate identifier, and further comprising the steps of: for each thread in the thread group,computing a second predicate result by applying an inverse of the comparison operation to the data in the first source operand and the second source operand; and storing the second predicate result in a second predicate register associated with the thread and identified by the second destination predicate identifier, wherein the second predicate register is different for each thread in the thread group. 7. The method of claim 3, wherein at least one of the first source operand or the second source operand is a predicate. 8. The method of claim 3, wherein the second source operand is a condition code. 9. The method of claim 1 , further comprising the step of receiving a guarded instruction for execution by the thread group that specifies the first destination predicate identifier. 10. The method of claim 9, wherein the guarded instruction comprise a select instruction that specifies a third source operand identifier and a fourth source operand identifier, and further comprising the step of, for each thread in the thread group, determining, based on the first predicate register identified by the first destination predicate identifier, whether to select data in a third source operand identified by the third source operand identifier or data in a fourth source operand identified by the fourth source operand identifier. 1 1. 
The method of claim 9, wherein the guarded instruction comprises a minimum/maximum instruction that specifies a third source operand identifier and a fourth source operand identifier, and further comprising the step of, for each thread in the thread group, determining, based on the first predicate register identified by the first destination predicate identifier, whether to perform a minimum operation or a maximum operation on data in a third source operand identified by the third source operand identifier or data in a fourth source operand identified by the fourth source operand identifier. 12. The method of claim 9, wherein the guarded instruction comprises a branch instruction that specifies a third instruction, and further comprising the step of, for each thread in the thread group, determining, based on the first predicate register identified by the first destination predicate identifier, whether the third instruction should be the next instruction executed. 13. A computer system, comprising: a memory; and a processor configured to: receive a first instruction for execution by the thread group, wherein the first instruction specifies a first source operand identifier, an operation, and a first destination predicate identifier, for each thread in the thread group, compute a predicate result by applying the operation to data in a first source operand identified by the first source operand identifier, and store the predicate result in a first predicate register associated with the thread and identified by the first destination predicate identifier, wherein the first source register and the first predicate register are different for each thread in the thread group. 14. The computer system of claim 13, wherein the first instruction further specifies a second source operand and the operation compares the data in the first source register with the data in the second source operand. 15. The computer system of claim 13, wherein the processor is further configured to receive a guarded instruction for execution by the thread group that specifies the first destination predicate identifier. |
EFFICIENT PREDICATED EXECUTION FOR PARALLEL PROCESSORS CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This application claims priority to United States provisional patent application entitled "Efficient Predicated Execution for SIMT and SIMD Processor Architectures," filed on September 28, 2009 and having a serial number 61/246,509, and United States patent application serial number 12/891 ,629, filed September 27, 2010. BACKGROUND OF THE INVENTION Field of the Invention [0002] The present invention relates generally to the field of parallel processing and, more specifically, to efficient predicated execution for parallel processors. Description of the Related Art [0003] Predicated execution is a mechanism for conditionally executing individual instruction operations, typically by conditionally committing or ignoring the results of executing an instruction, and thereby provides an alternative to conditional branching. In parallel processors, such as single-instruction multiple-thread (SIMT) and SIMD parallel processors where groups of parallel threads or data lanes execute a common instruction stream, predicated execution in each thread or data lane can greatly improve performance over divergent branching code where each thread of a thread group can independently take a different execution path. [0004] In prior parallel processor designs, predicated execution within each thread or data lane makes use of a set of 4-bit condition code (CC) registers for each thread or lane instance, and instructions have a guard comprising several instruction bits to select one of the CC registers and additional bits to encode the comparison condition; a guarded instruction commits its result(s) for a thread or lane only if the condition for that thread or lane evaluates to True and is nullified otherwise. Additionally, many instructions optionally write to a CC register for each thread or data lane, requiring several instruction bits to encode the destination CC register plus one bit to enable/disable the register write operation.[0005] As an example, a prior SIMT parallel thread processor has four 4-bit CC registers per thread, so instruction guards comprise seven bits: two bits to select one of four CC registers and five bits to encode the comparison test. There are 24 possible tests of the CC register. For instructions that optionally write a CC register, three bits are needed to encode the destination CC register and write-enable. [0006] One problem with the prior approach is cost, both in terms of per-thread state (16-bits per thread for four CC registers) and instruction encoding space (7 bits per instruction for the guarding condition, plus 3 bits per instruction for any instruction that writes a CC register). Note that nearly every instruction must have a guard field, so reducing the encoding cost is a major concern. The 16-bits per-thread cost of CC registers is multiplied by the number of parallel threads or data lane instances, typically hundreds per SIMT or SIMD parallel processor, and is further multiplied by the number of parallel processors, which can number in the tens per chip. Per-thread register state costs chip area and power. [0007] As the foregoing illustrates, what is needed in the art is a mechanism for minimizing per-thread state associated with predicated execution, minimizing instruction encoding bits required for predicated execution, and minimizing the number of instructions and cycles required to implement predicated execution. 
SUMMARY OF THE INVENTION [0008] One embodiment of the present invention sets forth a method for accessing predicate information associated with a thread group. The method includes the steps of receiving a first instruction for execution by the thread group, where the first instruction specifies a first source operand identifier, an operation, and a first destination predicate identifier, for each thread in the thread group, computing a predicate result by applying the operation to data in a first source operand identified by the first source operand identifier, and storing the predicate result in a first predicate register associated with the thread and identified by the first destination predicate identifier, where the first source register and the first predicate register are different for each thread in the thread group. [0009] Advantageously, the invention described herein provides a mechanism for cost efficient predicated execution that minimizes the per-thread state in SIMT/SIMD parallelprocessors. In addition, optional negation of predicates further saves additional bits per- thread that would otherwise be needed to store negated predicates. Further, efficient code can be generated for conditional program regions of parallel multithreaded programs. BRIEF DESCRIPTION OF THE DRAWINGS [0010] So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments. [0011] Figure 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the present invention; [0012] Figure 2 is a block diagram of a parallel processing subsystem for the computer system of Figure 1 , according to one embodiment of the present invention; [0013] Figure 3A is a block diagram of a GPC within one of the PPUs of Figure 2, according to one embodiment of the present invention; [0014] Figure 3B is a block diagram of a partition unit within one of the PPUs of Figure 2, according to one embodiment of the present invention; [0015] Figure 3C is a block diagram of a portion of the SPM of Figure 3A, according to one embodiment of the present invention; [0016] Figure 4 is a more detailed diagram of the predicate register file of Figure 3C, according to one embodiment of the present invention; and [0017] Figure 5 is a flow diagram of method steps for setting predicates in the predicate register file and accessing predicates for conditional (predicated) instruction execution, according to one embodiment of the present invention.DETAILED DESCRIPTION [0018] In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one of skill in the art that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention. System Overview [0019] Figure 1 is a block diagram illustrating a computer system 100 configured to implement one or more aspects of the present invention. 
Computer system 100 includes a central processing unit (CPU) 102 and a system memory 104 communicating via an interconnection path that may include a memory bridge 105. Memory bridge 105, which may be, e.g., a Northbridge chip, is connected via a bus or other communication path 106 (e.g., a HyperTransport link) to an I/O (input/output) bridge 107. I/O bridge 107, which may be, e.g., a Southbridge chip, receives user input from one or more user input devices 108 (e.g., keyboard, mouse) and forwards the input to CPU 102 via path 106 and memory bridge 105. A parallel processing subsystem 1 12 is coupled to memory bridge 105 via a bus or other communication path 1 13 (e.g., a PCI Express, Accelerated Graphics Port, or HyperTransport link); in one embodiment parallel processing subsystem 1 12 is a graphics subsystem that delivers pixels to a display device 1 10 (e.g., a conventional CRT or LCD based monitor). A system disk 1 14 is also connected to I/O bridge 107. A switch 1 16 provides connections between I/O bridge 107 and other components such as a network adapter 118 and various add-in cards 120 and 121 . Other components (not explicitly shown), including USB or other port connections, CD drives, DVD drives, film recording devices, and the like, may also be connected to I/O bridge 107. Communication paths interconnecting the various components in Figure 1 may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect), PCI-Express, AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol(s), and connections between different devices may use different protocols as is known in the art. [0020] In one embodiment, the parallel processing subsystem 1 12 incorporates circuitry optimized for graphics and video processing, including, for example, video outputcircuitry, and constitutes a graphics processing unit (GPU). In another embodiment, the parallel processing subsystem 1 12 incorporates circuitry optimized for general purpose processing, while preserving the underlying computational architecture, described in greater detail herein. In yet another embodiment, the parallel processing subsystem 1 12 may be integrated with one or more other system elements, such as the memory bridge 105, CPU 102, and I/O bridge 107 to form a system on chip (SoC). [0021] It will be appreciated that the system shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of CPUs 102, and the number of parallel processing subsystems 1 12, may be modified as desired. For instance, in some embodiments, system memory 104 is connected to CPU 102 directly rather than through a bridge, and other devices communicate with system memory 104 via memory bridge 105 and CPU 102. In other alternative topologies, parallel processing subsystem 1 12 is connected to I/O bridge 107 or directly to CPU 102, rather than to memory bridge 105. In still other embodiments, I/O bridge 107 and memory bridge 105 might be integrated into a single chip. Large embodiments may include two or more CPUs 102 and two or more parallel processing systems 1 12. The particular components shown herein are optional; for instance, any number of add-in cards or peripheral devices might be supported. In some embodiments, switch 1 16 is eliminated, and network adapter 1 18 and add-in cards 120, 121 connect directly to I/O bridge 107. 
[0022] Figure 2 illustrates a parallel processing subsystem 1 12, according to one embodiment of the present invention. As shown, parallel processing subsystem 1 12 includes one or more parallel processing units (PPUs) 202, each of which is coupled to a local parallel processing (PP) memory 204. In general, a parallel processing subsystem includes a number U of PPUs, where U > 1. (Herein, multiple instances of like objects are denoted with reference numbers identifying the object and parenthetical numbers identifying the instance where needed.) PPUs 202 and parallel processing memories 204 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or memory devices, or in any other technically feasible fashion.[0023] Referring again to Figure 1 , in some embodiments, some or all of PPUs 202 in parallel processing subsystem 1 12 are graphics processors with rendering pipelines that can be configured to perform various tasks related to generating pixel data from graphics data supplied by CPU 102 and/or system memory 104 via memory bridge 105 and bus 1 13, interacting with local parallel processing memory 204 (which can be used as graphics memory including, e.g., a conventional frame buffer) to store and update pixel data, delivering pixel data to display device 1 10, and the like. In some embodiments, parallel processing subsystem 1 12 may include one or more PPUs 202 that operate as graphics processors and one or more other PPUs 202 that are used for general-purpose computations. The PPUs may be identical or different, and each PPU may have its own dedicated parallel processing memory device(s) or no dedicated parallel processing memory device(s). One or more PPUs 202 may output data to display device 1 10 or each PPU 202 may output data to one or more display devices 1 10. [0024] In operation, CPU 102 is the master processor of computer system 100, controlling and coordinating operations of other system components. In particular, CPU 102 issues commands that control the operation of PPUs 202. In some embodiments, CPU 102 writes a stream of commands for each PPU 202 to a pushbuffer (not explicitly shown in either Figure 1 or Figure 2) that may be located in system memory 104, parallel processing memory 204, or another storage location accessible to both CPU 102 and PPU 202. PPU 202 reads the command stream from the pushbuffer and then executes commands asynchronously relative to the operation of CPU 102. [0025] Referring back now to Figure 2, each PPU 202 includes an I/O (input/output) unit 205 that communicates with the rest of computer system 100 via communication path 1 13, which connects to memory bridge 105 (or, in one alternative embodiment, directly to CPU 102). The connection of PPU 202 to the rest of computer system 100 may also be varied. In some embodiments, parallel processing subsystem 1 12 is implemented as an add-in card that can be inserted into an expansion slot of computer system 100. In other embodiments, a PPU 202 can be integrated on a single chip with a bus bridge, such as memory bridge 105 or I/O bridge 107. In still other embodiments, some or all elements of PPU 202 may be integrated on a single chip with CPU 102.[0026] In one embodiment, communication path 1 13 is a PCI-EXPRESS link, in which dedicated lanes are allocated to each PPU 202, as is known in the art. Other communication paths may also be used. 
An I/O unit 205 generates packets (or other signals) for transmission on communication path 1 13 and also receives all incoming packets (or other signals) from communication path 1 13, directing the incoming packets to appropriate components of PPU 202. For example, commands related to processing tasks may be directed to a host interface 206, while commands related to memory operations (e.g., reading from or writing to parallel processing memory 204) may be directed to a memory crossbar unit 210. Host interface 206 reads each pushbuffer and outputs the work specified by the pushbuffer to a front end 212. [0027] Each PPU 202 advantageously implements a highly parallel processing architecture. As shown in detail, PPU 202(0) includes a processing cluster array 230 that includes a number C of general processing clusters (GPCs) 208, where C > 1. Each GPC 208 is capable of executing a large number (e.g., hundreds or thousands) of threads concurrently, where each thread is an instance of a program. In various applications, different GPCs 208 may be allocated for processing different types of programs or for performing different types of computations. For example, in a graphics application, a first set of GPCs 208 may be allocated to perform tessellation operations and to produce primitive topologies for patches, and a second set of GPCs 208 may be allocated to perform tessellation shading to evaluate patch parameters for the primitive topologies and to determine vertex positions and other per-vertex attributes. The allocation of GPCs 208 may vary dependent on the workload arising for each type of program or computation. [0028] GPCs 208 receive processing tasks to be executed via a work distribution unit 200, which receives commands defining processing tasks from front end unit 212. Processing tasks include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). Work distribution unit 200 may be configured to fetch the indices corresponding to the tasks, or work distribution unit 200 may receive the indices from front end 212. Front end 212 ensures that GPCs 208 are configured to a valid state before the processing specified by the pushbuffers is initiated.[0029] When PPU 202 is used for graphics processing, for example, the processing workload for each patch is divided into approximately equal sized tasks to enable distribution of the tessellation processing to multiple GPCs 208. A work distribution unit 200 may be configured to produce tasks at a frequency capable of providing tasks to multiple GPCs 208 for processing. By contrast, in conventional systems, processing is typically performed by a single processing engine, while the other processing engines remain idle, waiting for the single processing engine to complete its tasks before beginning their processing tasks. In some embodiments of the present invention, portions of GPCs 208 are configured to perform different types of processing. For example a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading in screen space to produce a rendered image. Intermediate data produced by GPCs 208 may be stored in buffers to allow the intermediate data to be transmitted between GPCs 208 for further processing. 
[0030] Memory interface 214 includes a number D of partition units 215 that are each directly coupled to a portion of parallel processing memory 204, where D > 1. As shown, the number of partition units 215 generally equals the number of DRAM 220. In other embodiments, the number of partition units 215 may not equal the number of memory devices. Persons skilled in the art will appreciate that DRAM 220 may be replaced with other suitable storage devices and can be of generally conventional design. A detailed description is therefore omitted. Render targets, such as frame buffers or texture maps may be stored across DRAMs 220, allowing partition units 215 to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processing memory 204. [0031] Any one of GPCs 208 may process data to be written to any of the DRAMs 220 within parallel processing memory 204. Crossbar unit 210 is configured to route the output of each GPC 208 to the input of any partition unit 215 or to another GPC 208 for further processing. GPCs 208 communicate with memory interface 214 through crossbar unit 210 to read from or write to various external memory devices. In one embodiment, crossbar unit 210 has a connection to memory interface 214 to communicate with I/O unit 205, as well as a connection to local parallel processing memory 204, thereby enablingthe processing cores within the different GPCs 208 to communicate with system memory 104 or other memory that is not local to PPU 202. In the embodiment shown in Figure 2, crossbar unit 210 is directly connected with I/O unit 205. Crossbar unit 210 may use virtual channels to separate traffic streams between the GPCs 208 and partition units 215. [0032] Again, GPCs 208 can be programmed to execute processing tasks relating to a wide variety of applications, including but not limited to, linear and nonlinear data transforms, filtering of video and/or audio data, modeling operations (e.g., applying laws of physics to determine position, velocity and other attributes of objects), image rendering operations (e.g., tessellation shader, vertex shader, geometry shader, and/or pixel shader programs), and so on. PPUs 202 may transfer data from system memory 104 and/or local parallel processing memories 204 into internal (on-chip) memory, process the data, and write result data back to system memory 104 and/or local parallel processing memories 204, where such data can be accessed by other system components, including CPU 102 or another parallel processing subsystem 1 12. [0033] A PPU 202 may be provided with any amount of local parallel processing memory 204, including no local memory, and may use local memory and system memory in any combination. For instance, a PPU 202 can be a graphics processor in a unified memory architecture (UMA) embodiment. In such embodiments, little or no dedicated graphics (parallel processing) memory would be provided, and PPU 202 would use system memory exclusively or almost exclusively. In UMA embodiments, a PPU 202 may be integrated into a bridge chip or processor chip or provided as a discrete chip with a high-speed link (e.g., PCI-EXPRESS) connecting the PPU 202 to system memory via a bridge chip or other communication means. [0034] As noted above, any number of PPUs 202 can be included in a parallel processing subsystem 1 12. 
For instance, multiple PPUs 202 can be provided on a single add-in card, or multiple add-in cards can be connected to communication path 1 13, or one or more of PPUs 202 can be integrated into a bridge chip. PPUs 202 in a multi-PPU system may be identical to or different from one another. For instance, different PPUs 202 might have different numbers of processing cores, different amounts of local parallel processing memory, and so on. Where multiple PPUs 202 are present, those PPUs may be operatedin parallel to process data at a higher throughput than is possible with a single PPU 202. Systems incorporating one or more PPUs 202 may be implemented in a variety of configurations and form factors, including desktop, laptop, or handheld personal computers, servers, workstations, game consoles, embedded systems, and the like. Processing Cluster Array Overview [0035] Figure 3A is a block diagram of a GPC 208 within one of the PPUs 202 of Figure 2, according to one embodiment of the present invention. Each GPC 208 may be configured to execute a large number of threads in parallel, where the term "thread" refers to an instance of a particular program executing on a particular set of input data. In some embodiments, single-instruction, multiple-data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In other embodiments, single-instruction, multiple-thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of the GPCs 208. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program. Persons skilled in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime. [0036] Operation of GPC 208 is advantageously controlled via a pipeline manager 305 that distributes processing tasks to SIMT parallel thread processors called streaming multiprocessors (SPMs) 310. Pipeline manager 305 may also be configured to control a work distribution crossbar 330 by specifying destinations for processed data output by SPMs 310. [0037] In one embodiment, each GPC 208 includes a number M of SPMs 310, where M>1 , each SPM 310 configured to process one or more thread groups. Also, each SPM 310 advantageously includes an identical set of functional execution units (e.g., arithmetic logic units, and load-store units, .shown as Exec units 302 and LSUs 303 in Figure 3C) that may be pipelined, allowing a new instruction to be issued before a previous instruction has finished, as is known in the art. Any combination of functional execution units may be provided. In one embodiment, the functional units support a variety of operationsincluding integer and floating point arithmetic (e.g., addition and multiplication), comparison operations, Boolean operations (AND, OR, XOR), bit-shifting, and computation of various algebraic functions (e.g., planar interpolation, trigonometric, exponential, and logarithmic functions, etc.); and the same functional-unit hardware can be leveraged to perform different operations. 
[0038] The series of instructions transmitted to a particular GPC 208 constitutes a thread, as previously defined herein, and the collection of a certain number of concurrently executing threads across the parallel processing engines (not shown) within an SPM 310 is referred to herein as a "warp" or "thread group." As used herein, a "thread group" refers to a group of threads concurrently executing the same program on different input data, with one thread of the group being assigned to a different processing engine within an SPM 310. A thread group may include fewer threads than the number of processing engines within the SPM 310, in which case some processing engines will be idle during cycles when that thread group is being processed. A thread group may also include more threads than the number of processing engines within the SPM 310, in which case processing will take place over consecutive clock cycles. Since each SPM 310 can support up to G thread groups concurrently, it follows that up to G*M thread groups can be executing in GPC 208 at any given time. [0039] Additionally, a plurality of related thread groups may be active (in different phases of execution) at the same time within an SPM 310. This collection of thread groups is referred to herein as a "cooperative thread array" ("CTA") or "thread array." The size of a particular CTA is equal to m*k, where k is the number of concurrently executing threads in a thread group and is typically an integer multiple of the number of parallel processing engines within the SPM 310, and m is the number of thread groups simultaneously active within the SPM 310. The size of a CTA is generally determined by the programmer and the amount of hardware resources, such as memory or registers, available to the CTA. [0040] Each SPM 310 contains an L1 cache (not shown) or uses space in a corresponding L1 cache outside of the SPM 310 that is used to perform load and store operations. Each SPM 310 also has access to L2 caches within the partition units 215that are shared among all GPCs 208 and may be used to transfer data between threads. Finally, SPMs 310 also have access to off-chip "global" memory, which can include, e.g., parallel processing memory 204 and/or system memory 104. It is to be understood that any memory external to PPU 202 may be used as global memory. Additionally, an L1.5 cache 335 may be included within the GPC 208, configured to receive and hold data fetched from memory via memory interface 214 requested by SPM 310, including instructions, uniform data, and constant data, and provide the requested data to SPM 310. Embodiments having multiple SPMs 310 in GPC 208 beneficially share common instructions and data cached in L1.5 cache 335. [0041] Each GPC 208 may include a memory management unit (MMU) 328 that is configured to map virtual addresses into physical addresses. In other embodiments, MMU(s) 328 may reside within the memory interface 214. The MMU 328 includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and optionally a cache line index. The MMU 328 may include address translation lookaside buffers (TLB) or caches which may reside within multiprocessor SPM 310 or the L1 cache or GPC 208. The physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. The cache line index may be used to determine whether of not a request for a cache line is a hit or miss. 
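The sizing relations described above (a thread group processed over consecutive cycles when it exceeds the engine count, up to G*M concurrent thread groups per GPC, and a CTA size of m*k) can be captured in a few lines of host-side C++. This is only an illustrative sketch; the function names and the example values in main() are not taken from the document.

```cuda
// Host-side C++ sketch of the sizing relations described above; the
// numbers used in main() are illustrative only.
#include <cstdio>

// A thread group with more threads than processing engines is processed
// over consecutive clock cycles (ceiling division).
int cyclesPerThreadGroup(int threadsInGroup, int processingEngines)
{
    return (threadsInGroup + processingEngines - 1) / processingEngines;
}

// Up to G thread groups per SPM and M SPMs per GPC gives up to G*M groups.
int maxThreadGroupsPerGPC(int G, int M) { return G * M; }

// CTA size = m thread groups of k threads each.
int ctaSize(int m, int k) { return m * k; }

int main()
{
    printf("cycles per thread group: %d\n", cyclesPerThreadGroup(48, 32)); // 2
    printf("thread groups per GPC  : %d\n", maxThreadGroupsPerGPC(8, 4));  // 32
    printf("CTA size               : %d\n", ctaSize(6, 32));               // 192
    return 0;
}
```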
[0042] In graphics and computing applications, a GPC 208 may be configured such that each SPM 310 is coupled to a texture unit 315 for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering the texture data. Texture data is read from an internal texture L1 cache (not shown) or in some embodiments from the L1 cache within SPM 310 and is fetched from an L2 cache, parallel processing memory 204, or system memory 104, as needed. Each SPM 310 outputs processed tasks to work distribution crossbar 330 in order to provide the processed task to another GPC 208 for further processing or to store the processed task in an L2 cache, parallel processing memory 204, or system memory 104 via crossbar unit 210. A preROP (pre-raster operations) 325 is configured to receive data from SPM 310, direct data to ROP units within partition units 215, and perform optimizations for color blending, organize pixel color data, and perform address translations.[0043] It will be appreciated that the core architecture described herein is illustrative and that variations and modifications are possible. Any number of processing units, e.g., SPMs 310 or texture units 315, preROPs 325 may be included within a GPC 208. Further, while only one GPC 208 is shown, a PPU 202 may include any number of GPCs 208 that are advantageously functionally similar to one another so that execution behavior does not depend on which GPC 208 receives a particular processing task. Further, each GPC 208 advantageously operates independently of other GPCs 208 using separate and distinct processing units, L1 caches, and so on. [0044] Figure 3B is a block diagram of a partition unit 215 within one of the PPUs 202 of Figure 2, according to one embodiment of the present invention. As shown, partition unit 215 includes a L2 cache 350, a frame buffer (FB) DRAM interface 355, and a raster operations unit (ROP) 360. L2 cache 350 is a read/write cache that is configured to perform load and store operations received from crossbar unit 210 and ROP 360. Read misses and urgent writeback requests are output by L2 cache 350 to FB DRAM interface 355 for processing. Dirty updates are also sent to FB 355 for opportunistic processing. FB 355 interfaces directly with DRAM 220, outputting read and write requests and receiving data read from DRAM 220. [0045] In graphics applications, ROP 360 is a processing unit that performs raster operations, such as stencil, z test, blending, and the like, and outputs pixel data as processed graphics data for storage in graphics memory. In some embodiments of the present invention, ROP 360 is included within each GPC 208 instead of partition unit 215, and pixel read and write requests are transmitted over crossbar unit 210 instead of pixel fragment data. [0046] The processed graphics data may be displayed on display device 1 10 or routed for further processing by CPU 102 or by one of the processing entities within parallel processing subsystem 1 12. Each partition unit 215 includes a ROP 360 in order to distribute processing of the raster operations. 
In some embodiments, ROP 360 may be configured to compress z or color data that is written to memory and decompress z or color data that is read from memory.[0047] Persons skilled in the art will understand that the architecture described in Figures 1 , 2, 3A, and 3B in no way limits the scope of the present invention and that the techniques taught herein may be implemented on any properly configured processing unit, including, without limitation, one or more CPUs, one or more multi-core CPUs, one or more PPUs 202, one or more GPCs 208, one or more graphics or special purpose processing units, or the like, without departing the scope of the present invention. [0048] In embodiments of the present invention, it is desirable to use PPU 122 or other processor(s) of a computing system to execute general-purpose computations using thread arrays. Each thread in the thread array is assigned a unique thread identifier ("thread ID") that is accessible to the thread during its execution. The thread ID, which can be defined as a one-dimensional or multi-dimensional numerical value controls various aspects of the thread's processing behavior. For instance, a thread ID may be used to determine which portion of the input data set a thread is to process and/or to determine which portion of an output data set a thread is to produce or write. [0049] A sequence of per-thread instructions may include at least one instruction that defines a cooperative behavior between the representative thread and one or more other threads of the thread array. For example, the sequence of per-thread instructions might include an instruction to suspend execution of operations for the representative thread at a particular point in the sequence until such time as one or more of the other threads reach that particular point, an instruction for the representative thread to store data in a shared memory to which one or more of the other threads have access, an instruction for the representative thread to atomically read and update data stored in a shared memory to which one or more of the other threads have access based on their thread IDs, or the like. The CTA program can also include an instruction to compute an address in the shared memory from which data is to be read, with the address being a function of thread ID. By defining suitable functions and providing synchronization techniques, data can be written to a given location in shared memory by one thread of a CTA and read from that location by a different thread of the same CTA in a predictable manner. Consequently, any desired pattern of data sharing among threads can be supported, and any thread in a CTA can share data with any other thread in the same CTA. The extent, if any, of data sharing among threads of a CTA is determined by the CTA program; thus, it is to be understoodthat in a particular application that uses CTAs, the threads of a CTA might or might not actually share data with each other, depending on the CTA program, and the terms "CTA" and "thread array" are used synonymously herein. [0050] Figure 3C is a block diagram of the SPM 310 of Figure 3A, according to one embodiment of the present invention. The SPM 310 includes an instruction L1 cache 370 that is configured to receive instructions and constants from memory via L1.5 cache 335. A warp scheduler and instruction unit 312 receives instructions and constants from the instruction L1 cache 370 and controls local register file 304 and SPM 310 functional units according to the instructions and constants. 
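As a programmer-visible illustration of the per-thread IDs, barrier synchronization, and shared-memory cooperation described above, the following CUDA kernel is a minimal sketch. CUDA is not named in the document and is used here only as a familiar analogue; the kernel name and the reversal pattern are invented for illustration, and the element count is assumed to be a multiple of the block size.

```cuda
// Each thread selects its portion of the input via its thread ID, stores
// it to per-CTA shared memory, waits at a barrier, and then reads an
// element written by a different thread of the same CTA (a block-local
// reversal). Assumes the total element count is a multiple of blockDim.x.
__global__ void ctaExchange(const float* in, float* out)
{
    extern __shared__ float buf[];            // shared memory visible to the CTA
    int tid = threadIdx.x;                    // thread ID within the CTA
    int gid = blockIdx.x * blockDim.x + tid;  // this thread's portion of the input

    buf[tid] = in[gid];                       // shared address is a function of thread ID
    __syncthreads();                          // suspend until all CTA threads arrive here
    out[gid] = buf[blockDim.x - 1 - tid];     // read data stored by another thread
}

// Illustrative launch for n elements (n a multiple of 256), with d_in and
// d_out assumed to be device buffers of n floats:
//   ctaExchange<<<n / 256, 256, 256 * sizeof(float)>>>(d_in, d_out);
```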
The SPM 310 functional units include N exec (execution or processing) units 302 and P load-store units (LSU) 303. [0051] SPM 310 provides on-chip (internal) data storage with different levels of accessibility. Special registers (not shown) are readable but not writeable by LSU 303 and are used to store parameters defining each CTA thread's "position." In one embodiment, special registers include one register per CTA thread (or per exec unit 302 within SPM 310) that stores a thread ID; each thread ID register is accessible only by a respective one of the exec unit 302. Special registers may also include additional registers, readable by all CTA threads (or by all LSUs 303) that store a CTA identifier, the CTA dimensions, the dimensions of a grid to which the CTA belongs, and an identifier of a grid to which the CTA belongs. Special registers are written during initialization in response to commands received via front end 212 from device driver 103 and do not change during CTA execution. [0052] A parameter memory (not shown) stores runtime parameters (constants) that can be read but not written by any CTA thread (or any LSU 303). In one embodiment, device driver 103 provides parameters to the parameter memory before directing SPM 310 to begin execution of a CTA that uses these parameters. Any CTA thread within any CTA (or any exec unit 302 within SPM 310) can access global memory through a memory interface 214. Portions of global memory may be stored in the L1 cache 320. [0053] Local register file 304 is used by each CTA thread as scratch space; each register is allocated for the exclusive use of one thread, and data in any of local register file 304 is accessible only to the CTA thread to which it is allocated. Local register file 304can be implemented as a register file that is physically or logically divided into P lanes, each having some number of entries (where each entry might store, e.g., a 32-bit word). One lane is assigned to each of the N exec units 302 and P load-store units LSU 303, and corresponding entries in different lanes can be populated with data for different threads executing the same program to facilitate SIMD execution. Different portions of the lanes can be allocated to different ones of the G concurrent thread groups, so that a given entry in the local register file 304 is accessible only to a particular thread. In one embodiment, certain entries within the local register file 304 are reserved for storing thread identifiers, implementing one of the special registers. [0054] Predicate register file 307 includes predicate registers for each CTA thread. Predicate register file 307 is described in greater detail below with respect to Figures 4 and 5. [0055] Shared memory 306 is accessible to all CTA threads (within a single CTA); any location in shared memory 306 is accessible to any CTA thread within the same CTA (or to any processing engine within SPM 310). Shared memory 306 can be implemented as a shared register file or shared on-chip cache memory with an interconnect that allows any processing engine to read from or write to any location in the shared memory. In other embodiments, shared state space might map onto a per-CTA region of off-chip memory, and be cached in L1 cache 320. The parameter memory can be implemented as a designated section within the same shared register file or shared cache memory that implements shared memory 306, or as a separate shared register file or on-chip cache memory to which the LSUs 303 have read-only access. 
In one embodiment, the area that implements the parameter memory is also used to store the CTA ID and grid ID, as well as CTA and grid dimensions, implementing portions of the special registers. Each LSU 303 in SPM 310 is coupled to a unified address mapping unit 352 that converts an address provided for load and store instructions that are specified in a unified memory space into an address in each distinct memory space. Consequently, an instruction may be used to access any of the local, shared, or global memory spaces by specifying an address in the unified memory space. Predicated Instruction Execution for a Parallel Processor[0056] The discussion set forth below is directed to a parallel processor that executes parallel threads or parallel data lanes. In some embodiments, groups of parallel threads or parallel data lanes execute a common instruction stream, using SIMT or SIMD techniques. An embodiment using SIMT techniques is described that provides explicitly a predicate register comprised of 1 -bit predicates for each thread executing in the parallel processor. A general set of instructions for setting and using the 1 -bit predicates are also described. Advantageously, the predicate register architecture described below reduces thread state and instruction encoding overhead and requires fewer instructions to implement conditional program regions. [0057] Figure 4 is a more detailed diagram of the predicate register file 307 of Figure 3C, according to one embodiment of the present invention. As shown, the predicate register file 307 includes N different predicate registers 402. Each predicate register 402 is associated with a different thread executing within the SPM 310 on the execution units 302. For the purpose of discussion only, predicate register 402(0) is described below in greater detail. Predicate register 402(0) includes a condition code 404 and predicates 408. [0058] The condition code 404 comprises four 1 -bit condition code flags: OF (overflow flag), CF (carry flag), SF (sign flag), and ZF (zero flag). Within each thread the condition code may be optionally written by instructions; the condition code is typically written by integer and floating-point arithmetic instructions to indicate properties of the arithmetic result. The predicates 408 comprise seven 1 -bit predicates that can be used by the thread associated with the predicate register 402(0). Each of the predicates P0-P6 in predicates 408 indicates one bit of state associated with the thread, where a value of 0 for a predicate indicates False and value of 1 indicates True. In addition to predicates P0-P6, a reserved instruction encoding for a True predicate, PT, whose value is always 1 is provided. Predicate PT does not require any per-thread state. Predicate PT may be used as an instruction source operand when a constant True predicate value is needed, and as an instruction destination operand when an instruction has no live-out predicate result; writes to PT are ignored.[0059] For each thread executing in the SPM 310, corresponding predicates in the predicate register 402 are set via predicate-setting instructions. The following discussion describes six different types of predicate-setting instructions that are used to set predicates in the predicate registers 402 for each thread executing in the SPM 310. [0060] ISETP, FSETP and DSETP are three different predicate-setting instructions. 
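A compact way to restate the register layout just described (four condition-code flags, seven per-thread predicates P0-P6, and the constant-True predicate PT whose writes are ignored) is the following host-side C++ model. It is a sketch of the described behavior, not an implementation of the hardware, and the struct and method names are illustrative.

```cuda
#include <cstdint>

// Host-side C++ model of one per-thread predicate register: four 1-bit
// condition-code flags plus seven 1-bit predicates P0-P6. Index 7 models
// PT: it always reads as True and writes to it are ignored.
struct PredicateRegisterModel {
    uint8_t cc = 0;           // bit 0 = OF, bit 1 = CF, bit 2 = SF, bit 3 = ZF
    uint8_t predicates = 0;   // bits 0..6 hold P0..P6

    bool read(int p) const {
        if (p == 7) return true;                    // PT: constant True
        return (predicates >> p) & 1u;
    }
    void write(int p, bool value) {
        if (p == 7) return;                         // writes to PT are ignored
        if (value) predicates |=  uint8_t(1u << p);
        else       predicates &= uint8_t(~(1u << p));
    }
};
```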
In their simplest form, each of these instructions sets a Boolean predicate associated with the thread to True or False by evaluating one or more source operands. Source operands can be general purpose registers within the local register file 304, immediate values, or constant values. A simple example tests one source operand value and sets a predicate to False if the source operand is zero, otherwise sets the predicate to True. More generally, each of these instructions applies an operation to one or more source operands that yields a Boolean result. A test operation evaluates one source operand by comparing it with zero. A comparison operation compares at least two source operands and stores the Boolean result(s) of the comparisons in one or more predicates within the predicate register 402. The different comparisons that may be specified by a predicate-setting instruction include less than, greater than, equal to, not equal to, etc. Any other technically feasible comparison between two values is within the scope of the present invention. Importantly, a Boolean result of a comparison may be different for each thread executing the predicate-setting instruction and, thus, the Boolean result stored in the predicate registers 402 corresponding to different threads may be different. [0061] An ISETP instruction compares two source operands (e.g., general-purpose registers, immediate operands, or constants) representing integer values to generate a Boolean result. An FSETP instruction compares two source operands representing single-precision floating point values to generate a Boolean result. The source operands of an FSETP instruction can be specified with different options such as "with negation sign" and "absolute value." A DSETP instruction compares two source operands representing double-precision values to generate a Boolean result. An example ISETP predicate-setting instruction used to implement the if-then-else statement if (R0 < R1) then { A; } else { B; } is: ISETP.lt P2, R0, R1; # P2 = (R0<R1 ? 1 : 0). When this instruction is executed for a particular thread, the integer values stored in registers R0 and R1 associated with the thread are compared. Specifically, the "lt" (less than) comparison is applied to R0 and R1. The Boolean result of the comparison is stored in the P2 predicate within the corresponding predicate register 402 associated with the thread. Branch instructions to implement statements A and B can then be predicated on P2 being True or False, or the branches can be eliminated with a transformation to predicated instructions known as if-conversion. Instructions implementing statement A can be predicated on P2 being True, while instructions implementing statement B can be predicated on P2 being False. [0062] Predicate-setting instructions can also be used for setting multiple predicates with multiple or compound Boolean functions. For example, setting multiple predicates based on compound Boolean functions is useful for if-conversion of nested if-then-else structures and for evaluating compound Boolean expressions. To generate efficient code in these cases, the SETP instructions include an optional predicate source operand and Boolean function. The predicate source operand may be optionally negated. [0063] Several predicated code schemas require computing pairs of related predicates. For example, in a simple nested if-then-else structure, a pair of predicates is needed to guard the then and else blocks in the nested statement.
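To make the if-conversion example above concrete, the following host-side C++ sketch simulates the ISETP.lt step and the two guarded statements across a 32-thread group. The arithmetic chosen for statements A and B is an arbitrary placeholder, since the document leaves those bodies unspecified.

```cuda
// Host-side C++ simulation of the ISETP.lt example across a 32-thread
// group: each lane compares its own R0 and R1, the result lands in P2,
// and the A/B statements are then guarded on P2 and !P2. The bodies of
// A and B (add/subtract) are placeholders.
const int kWarpSize = 32;

void simulateIfConversion(const int R0[kWarpSize], const int R1[kWarpSize],
                          int result[kWarpSize])
{
    bool P2[kWarpSize];
    for (int lane = 0; lane < kWarpSize; ++lane)   // ISETP.lt P2, R0, R1;
        P2[lane] = R0[lane] < R1[lane];

    for (int lane = 0; lane < kWarpSize; ++lane) {
        if (P2[lane])                               // @P2  : statement A
            result[lane] = R0[lane] + R1[lane];
        if (!P2[lane])                              // @!P2 : statement B
            result[lane] = R0[lane] - R1[lane];
    }
}
```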
Consider if-conversion of the following nested if-then-else structure: if (R0<R1) then { A; } else { if (R2>R3) then { B; } else { C; } } Instructions in block A are executed if the first condition, R0<R1, is True. Instructions in block B are executed only if the first condition, R0<R1, is False and the second condition, R2>R3, is True; instructions in block C are executed only if the first condition, R0<R1, is False and the second condition, R2>R3, is also False. The corresponding guard conditions are computed easily using SETP instructions: ISETP.lt P1, R0, R1; # P1 = (R0<R1) ISETP.gt.and P2, R2, R3, !P1; # P2 = !P1 & (R2>R3) ISETP.le.and P3, R2, R3, !P1; # P3 = !P1 & !(R2>R3) In addition, the number of SETP instructions needed in predicated code can be reduced when the SETP instructions are extended to store results within two different predicates of the predicate registers 402. The second Boolean result is computed much the same as the first Boolean result, except that the complement of the comparison is used. In the above example, P2 and P3 can be computed by a single ISETP instruction that sets two destination predicates with two Boolean operations: ISETP.lt P1, R0, R1; ISETP.gt.and P2, P3, R2, R3, !P1; # P2 = (R2>R3) & !P1; P3 = !(R2>R3) & !P1 [0064] PSETP, CSETP and VSETP are three additional predicate-setting instructions. PSETP instructions allow for performing general Boolean functions on predicates. A PSETP instruction specifies two or more predicate source operands within the predicate register 402, one or more comparison operations and one or more destination predicates within the predicate register 402. For example, a PSETP instruction PSETP.bop0.bop1 Pu, Pv, {!}Pp, {!}Pq, {!}Pr; sets two destination predicates Pu and Pv within the predicate register 402 to Boolean values based on the compound Boolean operations bop0 and bop1 of optionally negated source predicate operands Pp, Pq, and Pr: Pu = ({!}Pp bop0 {!}Pq) bop1 {!}Pr; Pv = ((!{!}Pp) bop0 {!}Pq) bop1 {!}Pr; [0065] A CSETP instruction, when executed, tests a condition code (CC) register with a specific test, combines the Boolean result with predicate operand Pp using a specific Boolean operation bop and sets one or more destination predicates within the predicate register 402 based on the test. The test may include signed numeric tests, signed or unordered tests, and unsigned integer tests. [0066] A VSETP instruction, when executed, extracts sub-word (a byte or a short) or word values from one or more source registers specified in the VSETP instruction, sign-extends the values to 33-bit signed values and performs a specified comparison to produce a Boolean intermediate result. The intermediate result is then combined with an optionally negated predicate source operand using a specified Boolean operation and one or more destination predicates within the predicate register 402 are set based on the results. VSETP is useful when working with signed or unsigned integer word or subword values, such as in media processing algorithms. [0067] The different predicate-setting instructions described above have corresponding general purpose register setting instructions, such as ISET, FSET, PSET, etc. As with the predicate-setting instructions, these instructions compute a Boolean result, but the result is converted to a 32-bit value which is stored within one or more general purpose registers of the local register file 304.
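The guard computation for the nested if-then-else above can be mirrored per thread as follows. This host-side C++ sketch reproduces the Boolean relations P1 = (R0<R1), P2 = !P1 & (R2>R3), and P3 = !P1 & !(R2>R3), with P3 derived from the complement of the comparison as in the dual-destination ISETP form; the struct and function names are illustrative.

```cuda
// Host-side C++ computation of the per-thread guard predicates for the
// nested if-then-else above. P2 and P3 mirror the dual-destination
// ISETP.gt.and form: P3 is derived from the complement of the comparison.
struct NestedGuards { bool p1, p2, p3; };

NestedGuards computeGuards(int r0, int r1, int r2, int r3)
{
    NestedGuards g;
    g.p1 = (r0 < r1);                 // ISETP.lt P1, R0, R1;   guards block A
    bool cmp = (r2 > r3);
    g.p2 = !g.p1 &&  cmp;             // !P1 & (R2>R3)          guards block B
    g.p3 = !g.p1 && !cmp;             // !P1 & !(R2>R3)         guards block C
    return g;
}
```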
The 32-bit value may be an integer value or a single precision floating point value, determined by the instruction's result format type. When integer results are selected, Boolean results 0 and 1 are converted into integer values 0x00000000 and 0xFFFFFFFF, respectively. When floating-point results are selected, Boolean results 0 and 1 are converted into single precision floating point values 0.0f and 1.0f, respectively. [0068] The P2R instruction copies the predicate register 402 into the low or high half portion of a 32-bit register within the local register file 304. This instruction includes a mask operand that allows a subset of the bits to be written to the general-purpose register. This is useful for implementing register allocation and function calling conventions. The R2P instruction copies selected bits from a low or high half portion of a general-purpose register under control of a mask into the predicate register 402. [0069] Instructions executed within the SPM 310 have a guard predicate that controls conditional execution of the instructions in each thread. A guard predicate corresponds to a predicate within the predicate register 402 associated with each thread. If the guard predicate for a thread is true (has a value of 1), the instruction is executed normally by that thread. If the guard predicate for that thread is false (has a value of 0), the instruction is nullified for that thread and has no effect on machine state for that thread. [0070] The guard predicate may be optionally negated, so four instruction bits are needed to encode the guard predicate, selecting one of seven predicates or PT for each instruction. Instructions may be executed unconditionally by guarding with predicate PT. The guard condition is written to the left of each instruction using the syntax "@Px" or "@!Px". Examples of predicated instructions for one thread: @P2 IADD R1, R2, R3; # executes if P2 true, else nullified @PT IMUL R2, R4, R6; # executes unconditionally @!P1 FMAX R1, R2, R3; # executes if P1 false, else nullified [0071] The most common uses of predication are to eliminate short forward branches and to eliminate simple, single-level if-then-else structures. For these common cases, negate-on-use reduces the number of live predicate registers by up to half. This reduction in state is especially advantageous for SIMT and SIMD parallel processors, which have many threads and thus many instances of per-thread state. [0072] To eliminate branches in more complicated control flow regions such as nested if-then structures, more general predicate conditions are computed. For example, consider the following code: if (p) then { A; } else { if (q) then { B; } else { C; } } When condition p is true, A should execute and both B and C should be nullified. Attempting to guard B and C with a predicate and its complement, say q and !q respectively, leads to one of them being executed incorrectly even when p is true. The correct guard for B is (!p & q) and for C is (!p & !q), using C language syntax. These guards can be computed efficiently with a single SETP instruction having two destination predicates. [0073] In addition to guarded (predicated) computational instructions (e.g., arithmetic, logical, memory load/store operations), predicates are used to guard control flow altering instructions such as conditional and predicated BRA, BRX, JMP, and JMX. The branch/jump condition is based on either a guard predicate, a test of a condition code register, or a combination of both.
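The guarded-execution rule just described, execute when the optionally negated guard predicate is true and nullify otherwise, can be modeled per thread with a few lines of host-side C++. The GuardSpec struct and the template are illustrative and treat predicate index 7 as PT.

```cuda
// Host-side C++ model of guarded execution for one thread: the operation
// is applied only if the optionally negated guard predicate is true;
// otherwise the instruction is nullified and the thread's state is
// untouched. Predicate index 7 stands for PT (always true).
struct GuardSpec { int pred; bool negate; };

template <typename State, typename Op>
void executeGuarded(const GuardSpec& g, const bool predicates[8],
                    State& threadState, Op op)
{
    bool p = (g.pred == 7) ? true : predicates[g.pred];
    if (g.negate) p = !p;        // "@!Px" form
    if (p) op(threadState);      // executes normally
    // else: nullified, no effect on machine state for this thread
}
```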
[0074] Predicates are also used in SEL instructions, MNMX instructions, VOTE instructions and LDLK instructions. A SEL instruction selects either a first or a second source operand specified in the SEL instruction based on an optionally negated predicate within the predicate register 402. The selected operand is copied to a specified destination register. [0075] The IMNMX, FMNMX, and DMNMX instructions choose the minimum or maximum of two source operands based on the value of an optionally negated predicate. For example, if, for a particular thread, the predicate specified in the MNMX instruction is false, then a minimum operation is performed on the two source operands. Conversely, if the predicate specified in the MNMX instruction is true, then a maximum operation isperformed on the two source operands. The result of the selected operation is copied to a specified destination register. [0076] The VOTE instruction performs a reduce-and-broadcast of predicates across all active threads in a thread group. The result of the vote operation is shared across all active threads in the thread group. The vote operations are .ALL (True iff the source predicate is true across all active threads), .ANY (true iff at least one source predicate is true across all active threads), and .EQ (true iff the source predicate is true across all active threads OR the source predicate is false across all active threads). [0077] The load-and-lock instruction (LDLK) and load-shared-and-lock (LDSLK) instructions load a value from memory and attempt to acquire a lock associated with the memory address; these instructions write a predicate destination register to indicate whether the lock was acquired (writes True) or not (writes False). [0078] Figure 5 is a flow diagram of method steps for setting predicates in the predicate register file and accessing predicates for conditional instruction execution, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems for Figures 1 -4, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention. The method 500 begins at step 502, where the SPM 310 receives an instruction for execution within each thread of a thread group. As previously described herein, each thread executes the same instruction with different source operands to generate different outputs. At step 504, the SPM 310 determines whether the instruction is a predicated and/or conditional instruction. [0079] If the instruction is not a predicated and/or conditional instruction, then the method proceeds to step 520, where the instruction is executed unconditionally for each thread. If the instruction is a predicated and/or conditional instruction, then the method proceeds to step 506, where, for each thread, the SPM 310 determines whether the corresponding predicate(s) and/or the condition specified in the instruction are true. The method continues to step 508, where the guard predicate and/or branch condition is tested for each thread.[0080] At step 508, for each thread, if the guard predicate and/or condition is False, the method continues to step 530 where SPM 310 nullifies the instruction for that thread. At step 508, if the guard predicate and/or condition is True, the method continues to step 510. 
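The three vote operations can be summarized by the following host-side C++ sketch, which reduces a source predicate over the active threads of a 32-thread group. The struct and function names are illustrative, and inactive lanes simply do not participate.

```cuda
// Host-side C++ reduction of a source predicate across the active threads
// of a 32-thread group, mirroring the .ALL, .ANY and .EQ vote operations.
const int kGroupSize = 32;

struct VoteResult { bool all, any, eq; };

VoteResult vote(const bool pred[kGroupSize], const bool active[kGroupSize])
{
    bool all = true, any = false;
    for (int lane = 0; lane < kGroupSize; ++lane) {
        if (!active[lane]) continue;      // inactive lanes do not participate
        all = all && pred[lane];
        any = any || pred[lane];
    }
    // .EQ is true when the predicate is uniformly true or uniformly false.
    return VoteResult{ all, any, all || !any };
}
```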
[0081] At step 510, if the instruction is a predicate-setting instruction, then the method proceeds to step 512, where, for each thread, one or more predicates within the predicate registers 402 are set based on the source operands and the comparison operation(s) specified by the instruction. Importantly, since the instruction is executed for each thread with different source operands, the values of the predicates may be different for each thread in the thread group. At step 510, if the instruction is not a predicate-setting instruction, then the method proceeds to step 540, where for each thread, the instruction is executed. [0082] Advantageously, the invention described herein provides a mechanism for cost efficient predicated execution that minimizes the per-thread state in SIMT/SIMD parallel processors. In addition, optional negation of predicates further saves additional bits per- thread that would otherwise be needed to store negated predicates. Further, efficient code can be generated for conditional program regions of parallel multithreaded programs. [0083] One embodiment of the invention may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, flash memory, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. [0084] The invention has been described above with reference to specific embodiments. Persons skilled in the art, however, will understand that variousmodifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
The present invention is a flash memory manufacturing process that facilitates efficient fabrication of a flash memory cell. In one embodiment, a silicide (e.g., CoSi) is utilized as a diffusion source. A layer of silicide is deposited over a source area and a drain area. The dopant is implanted into the CoSi and diffuses out conformably along the CoSi-Si interface at a relatively low temperature. The low temperature diffusion facilitates fabrication of a Flash core cell with a very shallow source/drain junction and, as a result, a robust DIBL. The present invention also facilitates fabrication of memory cells with smaller spacers and shorter gate length. |
1. A memory cell comprising: a control gate component having a capacity to receive a charge; an oxide region having electrical charge insulation characteristics and electrical charge penetration characteristics, said oxide region coupled to said control gate; a floating gate having a charge trapping region, said floating gate coupled to said oxide region; a well component having a charge doping characteristic, said well coupled to said floating gate component; a source component having opposite charge doping characteristics formed by implantation of a dopant and diffusion of said dopant from a silicide in a source metal contact region, said source component coupled to said well component; and a drain component having similar doping charge characteristics to said source component and formed by implantation of a dopant and diffusion of said dopant from said silicide in a drain metal contact region, said drain component coupled to said well component. 2. The memory cell of claim 1 wherein said source and drain form shallow junctions. 3. The memory cell of claim 1 wherein some of said dopant is trapped in said silicide layer during an implantation of dopants in said source and drain areas. 4. The memory cell of claim 1 wherein said diffusion is performed in a temperature range of about 600 to 800 Celsius. 5. The memory cell of claim 1 wherein said silicide includes cobalt silicide. 6. The memory cell of claim 1 wherein said dopant includes arsenic. 7. The memory cell of claim 1 further comprising sidewall spacers that have a thickness of about 50 Å to about 800 Å, wherein said silicide is deposited between a pair of said sidewall spacers. |
TECHNICAL FIELDThe present claimed invention relates to the field of memory fabrication. More particularly, the present invention relates to a flash memory cell drain and source fabrication system and method that utilizes silicide as a diffusion source.BACKGROUND ARTElectronic systems and circuits have made a significant contribution towards the advancement of modern society and are utilized in a number of applications to achieve advantageous results. Numerous electronic technologies such as digital computers, calculators, audio devices, video equipment, and telephone systems have facilitated increased productivity and reduced costs in analyzing and communicating data, ideas and trends in most areas of business, science, education and entertainment. Frequently, these advantageous results are realized through the use of information stored on a memory media and manipulated by a processing device. The fabrication of memory devices often involves complex processes that require precise operations to achieve desired delicate balances.Numerous electronic devices include processors that operate by executing software comprising a series of instructions for manipulating data in the performance of useful tasks. The instructions and associated data are typically stored in a memory at locations identified by a unique indicator or address. The ability to access a memory and transfer information quickly and conveniently usually has a significant impact on information processing latency and often limits the utility a device can provide. The configuration of a memory usually affects the speed at which memory locations are accessed.Certain types of memories built upon flash memory technologies usually offer the potential for relatively fast information access. Flash memories typically include flash memory cells arranged in a matrix in which each cell is characterized by a voltage operating range. A charge level in a floating gate of the flash memory cell controls whether or not a flash memory cell turns "on" or "off" when a threshold voltage level within the operating range is applied to a gate of the flash memory cell. Flash memory arrays usually offer a number of desirable characteristics. Flash memories are typically non-volatile and can retain information even if power is turned off, allow random access to data and in-system programmability, and have the ability to withstand common shock vibrations and environmental conditions.Integrated circuit fabrication usually involves multi-step processes that attempt to produce precise components that operate properly. Many integrated circuit processes involve repeated deposition and removal of material layers to fabricate components and it is often very difficult to achieve optimized results within requisite narrow tolerances. The multi-step processes also often include diffusion and implantation operations to create regions with particular electrical characteristics. These regions can be adversely impacted by subsequent process steps in a manner that significantly affects performance. In typical traditional processes, dopants are implanted directly into the silicon (Si), which causes damage in the Si. A high temperature thermal cycle is usually required to anneal out the damage. 
For example, high temperature annealing can result in diffusion region migration that adversely changes the characteristics of a source or drain junction (e.g., resistivity, drain induced barrier leakage, etc.).Semiconductor integrated circuit manufacturing efforts are usually complicated by ever increasing demands for greater functionality. More complicated circuits are usually required to satisfy the demand for greater functionality. For example, there is usually a proportional relationship between the number of components included in an integrated circuit and the functionality, integrated circuits with more components typically provide greater functionality. However, including more components within an integrated circuit often requires the components to be densely packed in relatively small areas and reliably packing a lot of components in relatively small areas of an IC is usually very difficult.One traditional focus for achieving greater densities has been directed towards reducing the size of individual components (e.g., transistors). The components of an integrated circuit are usually fabricated on a single silicon substrate and maintaining both the integrity of the system as a whole as well as the individual basic device characteristics is very important for proper operation. Proper relational characteristics are very helpful in achieving these objectives and without them there is a tendency for detrimental interactions to occur. Thus, it is important for integrated circuit fabrication technologies to provide an advantageous balance between component integrity and increased component density.Transistor source and drain formation usually include a diffusion process. It is important for source and drain dopants to be accurately applied to ensure proper operation without defects. It is also desirable for the source and drain formation to be efficient and low cost. Diffusion of high quality dopants with the ability to provide shallow junctions can be challenging. Implantation is usually performed before CoSi formation in a typical memory cell formation process. The implantation energy usually has to be high to ensure the CoSi layer is above N+/P junction, which often results in a deeper junction and worse DIBL. Therefore, the ability to precisely form source and drain sections in a convenient and efficient manner is very important.SUMMARY OF THE INVENTIONThe present invention is a flash memory manufacturing process that facilitates efficient fabrication of a flash memory cell. In one embodiment, a silicide (e.g., CoSi) is utilized as a diffusion source. A layer of silicide is deposited over a source area and drain area. The silicide, source area and drain area are implanted with a dopant (e.g., arsenic). The wafer is then subjected to a diffusion process which forces the dopants from the silicide into the source area and drain area. The diffusion process can be performed at relatively low temperatures reducing the probability of region alignment or shift problems. The present invention also enables shallow source and drain junction formation in a manner that facilitates reduced drain induced barrier lowering (DIBL) and reduced source to drain resistance. In addition, utilizing the silicide as a diffusion source enables the use of narrower side wall spacers (e.g., nitride spacers) permitting a greater number of components concentrated in smaller areas.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 
1 is a flow chart of a flash memory source and drain formation process in accordance with one embodiment of the present invention. FIG. 2A illustrates one exemplary implementation of a silicide layer deposited on a source area and a drain area in accordance with one embodiment of the present invention. FIG. 2B illustrates one exemplary dopant implantation into a silicide layer, a source area and a drain area of the wafer substrate. FIG. 2C illustrates one present invention embodiment of diffusing arsenic dopants from a silicide layer into a source area and a drain area of the wafer substrate. FIG. 3 is a block diagram illustration of a flash memory cell in accordance with one embodiment of the present invention. FIG. 4 is a flow chart of one embodiment of a present invention flash memory formation method. DETAILED DESCRIPTION OF THE INVENTION Reference will now be made in detail to the preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be obvious to one ordinarily skilled in the art that the present invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the current invention. The present invention is a flash memory manufacturing process that facilitates efficient fabrication of a flash memory cell. In one embodiment, a silicide (e.g., CoSi) is utilized as a diffusion source. The present invention provides a shallow core drain junction that permits improved Drain Induced Barrier Lowering (DIBL) characteristics and reduced drain-to-source resistivity. The present invention also facilitates cell size reduction. For example, the present invention enables reduced nitride spacer thickness on both the source and drain sides of the flash memory cell. FIG. 1 is a flow chart of flash memory source and drain formation process 100 in accordance with one embodiment of the present invention. Flash memory source and drain formation process 100 includes utilization of a silicide as a diffusion source. The dopants are implanted in the silicide and then diffused into a wafer substrate. The diffusion can occur at a relatively low temperature, greatly reducing the thermal budget. In step 110, a silicide layer is deposited on a source and drain area. In one embodiment of the present invention, the silicide layer material includes cobalt silicide. FIG. 2A illustrates one exemplary implementation of a silicide layer 210 deposited on source area 220 and drain area 230. The silicide is placed on top of the wafer surface, between sidewall spacers 271 and 272 and between sidewall spacers 273 and 274, respectively. A dopant 250 is implanted in the source area 220 and drain area 230 at step 120. In one embodiment, the dopant includes arsenic. In one exemplary implementation, the implanting introduces some of the dopant atoms into the silicide layer. FIG.
2B illustrates one exemplary implantation of dopant 250 (e.g., arsenic, boron, phosphorus, antimony, etc.) into the silicide layer 210, source area 220 and drain area 230 of the wafer substrate. In one example, the dopant provides an electrical charge characteristic to the source and drain areas. Implanting dopant into the CoSi causes minimal or no damage to the silicon. Thus, a present invention Flash core cell process (e.g., flash memory source and drain formation process 100) can use a low temperature anneal (e.g., less than or equal to 900 degrees Celsius), which facilitates fabrication of memory cells with improved DIBL. In step 130, a diffusion process is performed on the source and drain area. In one embodiment of the present invention, the diffusion process is performed at a relatively low temperature (e.g., between 800 and 900 degrees Celsius). In one exemplary implementation of the present invention, the dopants in the silicide layer diffuse to the source and drain areas during the anneal process. FIG. 2C illustrates one exemplary implementation of diffusing arsenic dopants 250 from the silicide layer 210 into the source area 220 and drain area 230 of the wafer substrate. In one embodiment of the present invention, the dopant implanted (e.g., with a relatively low implantation energy) after CoSi formation diffuses out conformably along the CoSi-Si interface, resulting in a shallower junction and a better DIBL. Reduced source/drain lateral diffusion is also achieved and helps scale down the spacer thickness (e.g., 272 and 273 in FIG. 2), the gate length (e.g., 310 in FIG. 3) and, consequently, the cell size. FIG. 3 is a block diagram illustration of a flash memory cell 300 in accordance with one embodiment of the present invention. Flash memory cell 300 includes control gate 310, charge storing region 315 (e.g., a floating gate), insulation region 317 (e.g., an oxide region), source 320, drain 330, sidewalls 331, well region 350 (e.g., a substrate) and current conducting channel 375. In one exemplary implementation, a source extension region 221 and drain extension region 231 are formed by very shallow implantation. Source 320 and drain 330 are formed by implantation of a dopant (e.g., arsenic) and diffusion of a dopant from silicide layer 170 (e.g., a cobalt silicide layer). Control gate 310 is coupled to insulation region 317, which is coupled to floating charge trapping region 315 and well region 350. Well region 350 is coupled to source 320 and drain 330.
Oxide region 317 has insulating characteristics that also act as a barrier to charges entering or leaving floating gate 315 depending upon memory cell voltage levels (e.g., the voltage level differential applied to control gate 310 and drain 330). Control gate 310 has a capacity to receive a voltage and collect charge levels that control current flow in current conducting channel 375. Floating gate 315 "traps" or "stores" charges which can impact the "control" (e.g., shift the threshold voltage) of control gate 310 and thereby store information. Flash memory cell 300 stores information by establishing a charge level (e.g., a "write" or "erase" charge level) in the floating gate 315 corresponding to a logical value and sensing the impact on the flow of current in current conducting channel 375 during a read operation. In one exemplary implementation, the status of current flow between the source 320 and the drain 330 in a read condition is utilized to establish storage of a logical 1 value or a logical 0 value. For example, a logical 1 can be assigned to an indication of a current flow between source 320 and drain 330 and a logical 0 can be assigned to an indication of no current flow between source 320 and drain 330, or vice versa. Since the charge level state in the floating gate 315 can impact the current flow in current conducting channel 375, there is a correlation between a logical 1 value or a logical 0 value and the charge in floating gate 315. The charge level of the floating gate determines the flash memory cell state by shifting the threshold voltage. An erased state occurs when a first charge level in the floating gate does not significantly impact (e.g., no appreciable shift in the threshold voltage) the memory cell's turn-on/off threshold voltage. A written state occurs when a second charge level does significantly impact the memory cell's turn-on/off threshold voltage (e.g., there is an appreciable shift in threshold voltage). FIG. 4 is a flow chart of flash memory formation method 400, one embodiment of a flash memory formation method in accordance with the present invention. Flash memory formation method 400 includes a silicide layer over the source and drain areas that facilitates diffusion of dopants into the source and drain areas. This diffusion process provides shallow junction formation with reduced resistivity. In addition, utilizing the silicide as a diffusion source enables the use of narrower sidewall spacers (e.g., 50 Å to 800 Å thick). In step 410, a wafer substrate is prepared for lithographic processes. In one embodiment of the present invention, the wafer surface is made smooth and level, for example by chemical mechanical polishing (CMP). A protective layer of oxide and a subsequent layer of nitride are deposited on the surface. In one exemplary implementation, additional polishing is performed to provide a smooth and level surface after the protective oxide and nitride layers are added. A gate formation process is executed at step 420. In one embodiment of the present invention, an insulating layer (e.g., oxide) is deposited. A floating gate area is created in the insulating layer. For example, a floating gate area is etched in the insulating layer and a charge trapping material (e.g., a polysilicide) is deposited in the floating gate area. Excess charge trapping material is removed and additional insulating material is deposited. A control gate material (e.g., silicide, metal, etc.) is deposited on top of the insulating material.
The materials deposited during the gate formation process are removed (e.g., etched) from areas not included in the gate (e.g., areas above a source and drain). In one exemplary implementation, a sidewall spacer material is deposited on the sides of the gate area and excess sidewall spacer material is removed. Flash memory formation method 400 includes a source and drain extension implant step and a spacer formation step in one embodiment of the present invention. In one exemplary implementation, a dopant is implanted in step 421 to form a source extension area and a drain extension area (e.g., source extension area 221 and drain extension area 231). Spacers are formed in step 422. The spacers (e.g., spacers 272 and 273) are relatively narrow and permit greater component density. In step 430, a silicide source and drain formation process is performed. The source and drain area are prepared for implantation and diffusion. For example, excess material from the gate formation process and the protective layer materials over the source and drain areas are removed. A silicide layer (e.g., CoSi) is deposited on the source and drain area and a dopant (e.g., arsenic) is implanted in the source and drain area. In one exemplary implementation, some of the dopant is trapped in the silicide layer during the implantation of dopants in the source and drain areas. A diffusion process is performed on the source and drain area to "push" doping agents included in the silicide layer through the surface of the wafer substrate into the source and drain areas. In step 440, a metal layer is deposited over the source and drain areas, respectively. In one embodiment of the present invention, a plurality of metal layers are deposited and each of the respective metal layers is separated by insulating layers. The metal layers couple the source and drain to other components included on the wafer. Thus, the present invention facilitates precise formation of source and drain sections in a convenient and efficient manner. Utilization of a silicide implanted with dopants as a diffusion source enables the diffusion to occur at a relatively low temperature, reducing annealing problems. This diffusion process provides shallow drain and source junction formation, which improves the drain induced barrier lowering characteristics of the cell. The shallow junctions also enable reductions in source to drain resistance. In addition, utilizing the silicide as a diffusion source enables the use of narrower sidewall spacers (e.g., nitride spacers), permitting a greater number of components to be concentrated in smaller areas. The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents. |
Structurally-stable, tall capacitors having unique three-dimensional architectures for semiconductor devices are disclosed. The capacitors include monolithically-fabricated upright microstructures, i.e., those having large height/width (H/W) ratios, which are mechanically reinforced against shear forces and the like by a brace layer that transversely extends between lateral sides of at least two of the free-standing microstructures. The brace layer is formed as a microbridge type structure spanning between the upper ends of the two or more microstructures.
What is claimed and desired to be secured by United States Letters Patent is: 1. A semiconductor storage capacitor, comprising:a semiconductor substrate; a plurality of upright free-standing capacitor storage node microstructures formed over the substrate, said microstructures having vertical surfaces; a brace transversely extending between the vertical surfaces of at least two of the free-standing microstructures, said brace being located substantially near the upper ends of said vertical surfaces of said microstructures; and a vertical space between said brace and said substrate. 2. The semiconductor storage capacitor according to claim 1, wherein the brace interconnects substantially all of the microstructures.3. The semiconductor storage capacitor according to claim 1, wherein the brace has a width approximately equal to or less than the largest cross-sectional dimension of the microstructures.4. The semiconductor storage capacitor according to claim 1, wherein the brace comprises a microbridge structure extending above the substrate and between two or more of the microstructures.5. The semiconductor storage capacitor according to claim 1, where the microstructures each comprise a conductor material portion standing upright over the substrate, and wherein the brace interconnects the conductor material portions of two or more of the microstructures.6. The semiconductor storage capacitor according to claim 1, wherein the microstructures comprise generally cylindrical container shapes and the brace comprises a microbridge structure.7. The semiconductor storage capacitor according to claim 1, where the brace comprises a dielectric material.8. The semiconductor storage capacitor according to claim 1, further comprising a dielectric layer between the substrate and the brace, where the brace is vertically spaced from the dielectric layer.9. The semiconductor storage capacitor according to claim 1, wherein the microstructures comprise conductive material and the brace comprises a dielectric.10. The semiconductor storage capacitor according to claim 1, wherein the microstructures are defined within an active circuit area, and further comprising a die having non-active circuit areas located adjacent the active circuit area, wherein the brace further interconnects at least two of the microstructures with non-active areas of the die.11. The semiconductor storage capacitor according to claim 1, wherein the microstructures are container capacitors.12. The semiconductor storage capacitor according to claim 1, wherein the microstructures comprise double-sided container capacitors.13. A memory circuit, comprising:a semiconductor substrate having a memory cell including diffusion regions; a dielectric layer on the substrate; conductive plugs extending vertically from an upper surface of the dielectric layer to respective diffusion regions; a plurality of upright standing capacitor storage node microstructures, having vertical surfaces, each formed over the dielectric layer and a respective conductive plug; and a brace transversely extending between and laterally supporting at least two of the vertical surfaces of said microstructures, said brace being suspended over at least one layer of material which extends between said dielectric layer and said brace. 14. The memory circuit according to claim 13, wherein the circuit comprises a DRAM.15. The memory circuit according to claim 13, wherein the brace interconnects substantially all of the microstructures.16. 
The memory circuit according to claim 13, where the brace is located substantially near upper ends of the microstructures.17. The memory circuit according to claim 13, wherein the brace has a width approximately equal to or less than the largest cross-sectional dimension of the microstructures.18. The memory circuit according to claim 13, wherein the brace comprises a microbridge structure extending above the substrate and between two or more of the microstructures.19. The memory circuit according to claim 13, where the microstructures each comprise a conductor material portion standing upright over the substrate, and wherein the brace interconnects the conductor material portions of two or more of the microstructures.20. The memory circuit according to claim 13, wherein the microstructures comprise generally cylindrical container shapes and the brace comprises a microbridge structure.21. The memory circuit according to claim 13, where the brace comprises a dielectric material.22. The memory circuit according to claim 13, where the brace is vertically spaced from the dielectric layer.23. The memory circuit according to claim 13, wherein the microstructures comprise conductive material and the brace comprises a dielectric.24. The memory circuit according to claim 13, wherein the microstructures are defined within an active circuit area, and further comprising a die having non-active circuit areas located adjacent the active circuit area, wherein the brace further interconnects at least two of the microstructures with non-active areas of the die.25. The memory circuit according to claim 13, wherein the microstructures are container capacitors.26. The memory circuit according to claim 13, wherein the microstructures comprise double-sided container capacitors.27. A memory device, comprising:a memory chip comprising a memory circuit fabricated on the memory chip, said memory circuit comprising: a semiconductor substrate having a memory cell including diffusion regions; a dielectric layer on the substrate; conductive plugs extending vertically from an upper surface of the dielectric layer to respective diffusion regions; a plurality of upright capacitor storage node microstructures, having lateral sides with a lower portion and an upper portion, each formed over the dielectric layer and the respective conductive plug; and a brace for laterally supporting respective lateral sides of at least two of the microstructures, said brace transversely extending between only said upper portions of said microstructures, wherein a space is disposed between said dielectric layer and said brace. 28. A memory module, comprising:a die substrate comprising a circuit board; a plurality of memory chips mounted on the die substrate, wherein one or more of the memory chips comprise a memory circuit fabricated on the semiconductor chip communicating with the processor, said memory circuit comprising: a semiconductor substrate having a memory cell including diffusion regions; a dielectric layer on the substrate; conductive plugs extending vertically from an upper surface of the dielectric layer to respective diffusion regions; a plurality of capacitor storage node microstructures each formed over the dielectric layer and a respective conductive plug; and a brace transversely extending between and laterally supporting respective lateral sides of at least two of the microstructures, wherein at least one layer of material and a space is disposed between said dielectric layer and said brace. 29. 
A processor system, comprising:a processor; and a memory circuit fabricated on a semiconductor chip communicating with the processor, said memory circuit comprising: a semiconductor substrate having a memory cell including diffusion regions; a dielectric layer on the substrate; conductive plugs extending vertically from an upper surface of the dielectric layer to respective diffusion regions; a plurality of upright capacitor storage node microstructures, having lateral sides, each formed over the dielectric layer and a respective conductive plug; and a brace transversely suspended between upper portions of said lateral sides of at least two of the microstructures for supporting said lateral sides. 30. The processor system according to claim 29, wherein the memory circuit comprises a DRAM.31. The processor system according to claim 29, wherein the capacitor microstructures comprises capacitor containers.32. A semiconductor capacitor comprising:at least two vertical support structures over a substrate each with a lower portion and an upper portion; a horizontal brace suspended between said upper portions of said at least two vertical support structures; a first conductive material deposited over said at least two vertical support structures and over and in contact with portions of said horizontal brace adjacent said structures; a dielectric material deposited over said conductive material and over and in contact with said portions of said horizontal brace; and a second conductive material deposited over said dielectric material and over and in contact with said portions of said horizontal brace. |
CROSS REFERENCE TO RELATED APPLICATIONSThis application is a continuation-in-part of U.S. application Ser. No. 09/386,316 filed Aug. 31, 1999, now abandoned, entitled "Structurally-Stabilized Capacitors and Method of Making of Same" the disclosure of which is incorporated by reference herein.BACKGROUND OF THE INVENTION1. The Field of the InventionThe present invention generally relates to capacitors for semiconductor circuit memory storage devices. More particularly, the present invention relates to highly stable, robust capacitor structures in semiconductor circuit memory storage devices.2. The Relevant TechnologyIn dynamic semiconductor memory storage devices it is essential that storage node capacitor cell plates be large enough to retain an adequate charge in spite of parasitic capacitances and noise that may be present during circuit operation. The ability to maintain required storage node capacitance levels in densely packed storage cells is particularly important as the density of DRAM arrays continues to increase for the foreseeable future generations of memory devices.One known method for maintaining, as well as increasing, storage node size in densely packed memory devices is through use of self-aligned stacked-capacitor cells for 64-MB DRAMs formed as three-dimensional cylindrical container structures. FIG. 1A illustrates conventional double-sided cylindrical container structures 10 configured as a double crown structure. The cylindrical capacitor container structures 10 are formed over a first dielectric layer 1 that lies on a semiconductor substrate 12. Each of the cylindrical capacitor container structures 10 are connected to one of the source and drain impurity regions 14 and 14' of one of the transistors 13 via a conductive plug 15. The container structures 10 are double-sided in that poly cylinders 16 have a conductively doped hemispherical grain (HSG) poly layer 17 formed on both the inside and outside thereof, and a capacitor dielectric film 18 surrounds the entire surface HSG layer of the storage node electrode. Then, a top capacitor electrode 19, such as poly, is formed to complete the storage cell 10.Referring now to FIG. 1B which shows a portion of the process for fabricating the FIG. 1A conventional cylindrical container structures, a second dielectric layer 2 is formed on the first dielectric layer 1, and a via hole 3 is formed through the second dielectric layer 2 in alignment with the plug 15 previously formed in the first dielectric layer 1, and then the polysilicon layer 16 is deposited on the cylindrical walls of the via hole. The polysilicon is removed from the upper surface of the second dielectric layer 2 by planarization (e.g., CMP) to yield the intermediate structure shown in FIG. 1B. In the next process step, the second dielectric layer 2 is selectively etched away until the first dielectric layer 1 and plug 15 is reached with the resulting structure as shown in FIG. 1C. A free standing cylindrical structure 16 is left exposed without structural support over the first dielectric layer 1 after removing the second dielectric layer 2. In further processing, the HSG 17, capacitor dielectric film 18 and electrode 19 are sequentially formed on the cylinder structures 10 to yield the double crown structure (double container cell) shown in FIG. 1A.In FIGS. 2A-2D, a conventional fabrication scheme is shown for fabricating capacitor studs used in a high density array. In fabricating the conventional stud structures, as shown in FIG. 
2A, via holes 27 are formed through a second dielectric layer 26 which is provided over a first dielectric layer 21 arranged on a semiconductor substrate 22. The substrate 22 has a transistor 23 including source and drain regions 24 and 24', one of which is connected to the via holes 27 via conductive plug 25. After the via hole 27 is formed through the second dielectric layer 26 in alignment with the plug 25 previously formed in the first dielectric layer 21, a metal or other conductive material 28 is deposited so as to fill the via hole 27 and form the stud 28. The metal is removed from the surface of the second dielectric layer 26 by planarization (e.g., CMP) to yield the intermediate structure shown in FIG. 2B. In the next process step, the second dielectric layer 26 is selectively etched away until the first dielectric layer 21 and plug 25 are reached, with the resulting structure as shown in FIG. 2C. A free standing stud structure 28 is left exposed without structural support over the first dielectric layer 21 after removing the second dielectric layer 26. In further processing, the studs 28 have a conductively doped hemi-spherical grain (HSG) poly layer 200 formed on their exterior profile, and a capacitor dielectric film 201 surrounds the entire surface HSG layer 200 of the storage node electrode. Then, a top capacitor electrode 202, such as polysilicon, is formed to complete the storage cell 20. The present inventors have determined that the yields of double-sided container or stud structures in high density memory arrays such as illustrated in FIGS. 1A and 2D above, respectively, have been lowered because of falling problems with the containers or studs that occur during device fabrication. Namely, the containers and studs are susceptible to falling over and breaking during etch back (i.e., removal of the second dielectric layer) or other further processing operations such as deposition of the capacitor dielectric film. The conventional studs or containers have relatively high sidewalls and a relatively small supporting "footprint" and thus do not have a strong foundation at their bottoms. Consequently, they are very susceptible to toppling over when subjected to handling and/or processing forces. Nonetheless, as demand for reduced feature size continues, there remains a need to fabricate very tall studs (e.g., 1.5 µm) and tall double sided containers with relatively small "footprints". However, the fabrication of taller studs (i.e., larger height-to-width (H/W) structures) exacerbates the falling problem as a given base dimension must support even taller walls. When the conventional stud or container structures fall over, they can short to an adjacent storage node poly, which shorts out the adjacent storage cells. In a 64 M DRAM, for instance, even if there were only one out of 100 K cells that had a short due to such falling, this would cause 640 random failures in the 64 M DRAM. This number of failures would usually exceed the limited number of redundant elements available for repair, and the entire memory device would be rendered unusable. Consequently, a need exists in the art for container and stud structures that are not susceptible to falling problems during device fabrication and for a methodology for imparting such increased resistance to falling. SUMMARY OF THE INVENTION The present invention resolves the above and other problems that have been experienced in the art.
More particularly, the present invention provides structurally-stable, tall capacitors having unique three-dimensional architectures for semiconductor devices. Although the concepts of this invention are particularly useful in DRAM fabrication, the invention nonetheless has wider applicability to encompass semiconductor devices in general where monolithically-fabricated upright microstructures, i.e., those having large height/width (H/W) ratios, need mechanical reinforcement against shear forces and the like that are experienced during processing and handling.In one general embodiment, this invention concerns a monolithic semiconductor device comprising a semiconductor substrate over which are formed a plurality of upright free-standing microstructures. A brace layer is formed that transversely extends between lateral sides of at least two of the free-standing microstructures. The brace layer is formed as a microbridge type structure spanning between the upper ends of the two or more microstructures. In order to form the braces, a dielectric layer is used as a sacrificial layer in which a narrow groove is formed and within which the brace layer is formed. Then, the sacrificial dielectric layer is removed after the brace is formed to leave a reliable three-dimensional microstructure in which a container or stud is transversely supported very robustly by the brace layer. The brace layer is vertically spaced from a remaining dielectric layer to yield a braced, free-standing three-dimensional architecture that does not fall. Preferably, each brace layer ultimately extends to the edges of the IC die active circuit area, where the brace locks to solid non-active portions of the die surrounding the fabricated circuitry.In one preferred embodiment, a method is provided to prevent the falling of studs or double-sided containers in which a small width channel is made after metal filling and planarization in the case of metal studs, or after container planarization in the case of containers for capacitors. This small channel is filled with a dielectric different from the dielectric layer in which the via hole was formed for the stud or container, and having good adhesion with electrode material. The channel formation procedure is followed by etch back of the dielectric layer, hemispherical grain deposition, capacitor dielectric deposition, and top electrode deposition, to complete formation of a capacitor.This invention permits further maximization of capacitor storage cell surface area in a high density/high volume DRAM fabrication process. The capacitor design of the present invention defines a stacked capacitor storage cell that is useful in DRAM fabrication, however, it will be evident to one skilled in the art to incorporate these steps into other processes for providing memory cells or other integrated circuit microstructures where a large height-to-width structure is required.BRIEF DESCRIPTION OF THE DRAWINGSThe foregoing and other features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when considered in conjunction with the accompany drawings, in which:FIGS. 1A through 1C are cross sectional views illustrating a conventional fabrication scheme for manufacturing cylindrical capacitor containers.FIGS. 2A through 2D are cross sectional views illustrating a conventional fabrication scheme for manufacturing studs for a capacitor.FIGS. 
3A through 3F are cross sectional views illustrating a first embodiment for manufacturing cylindrical capacitor containers according to the present invention.FIGS. 4A through 4B are top views of the cylindrical containers of FIGS. 3A-3F at several intermediate stages of processing.FIG. 4C is a top view of the connection of capacitor microstructures to each other and to non-active portions of a die using a microbridge brace layer according to the present invention.FIG. 5A is a plan view of a memory module having memory chips constructed in accordance with the present invention.FIG. 5B is a block diagram of a processor-based system using RAM having memory chips constructed in accordance with the present invention.FIGS. 6A through 6E are cross sectional views illustrating an embodiment for manufacturing studs for a capacitor according to the present invention.FIGS. 7A and 7B are top views of the cylindrical containers of FIGS. 6A-6E at several intermediate stages of processing.FIG. 7C is a top view representation of an array of capacitors interconnected by a dielectric bracing layer of this invention.It will be understood that the drawings are provided for illustrative purposes and that the depicted features are not necessarily drawn to scale.DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTSThe present invention is particularly directed to maximizing storage cell surface area, as well as providing uniform and repeatable, defect free, storage cell structures across a given substrate, in high density/high volume DRAM fabrication processes, although it is thought that the invention has wider applicability as will become apparent from the exemplary embodiments.Referring now to FIGS. 3A-3F, a fabrication scheme for forming a double sided container capacitor of this invention is illustrated. Referring to FIG. 3A, a silicon wafer substrate 31 is prepared using conventional process steps to provide circuit elements 32 each having a conventional gate stack formed of an oxide, a conductor, such as polysilicon, and dielectric sidewall layers, and a doped diffusion regions 33 and 33'. That is, circuit elements 32 are illustrated as being transistors having source and drain impurity regions 33 and 33'. One of source and drain regions 33 and 33' of each transistor has a conductive plug 35 connected thereto which will be used to connect the transistor with a capacitor container. The term "substrate" is meant to encompass a wafer, active and passive devices formed within the wafer, and layers on the wafer such as passivation and/or metallization layers, as well as SOI and the like. A first dielectric layer 34 blankets the substrate 31. The first dielectric layer 34 typically is planarized after its deposition, such as by conventional chemical-mechanical-polishing (CMP) or reactive ion etching (RIE) used for this purpose. A polysilicon plug 35 fills a contact hole formed through the first dielectric layer 34. CMP is used to remove those portions of the poly that are deposited on the surface of the first dielectric layer 34. The wafer has been processed up to the point of processing an array of storage cell capacitors. Capacitor cell fabrication will now follow. The storage capacitor of each memory cell will make contact to the underlying diffusion region 33 via poly plug 35.Referring to FIG. 3B, a second dielectric layer 36 is formed over the first dielectric layer 34. Then, openings (via holes) 37 are formed through the second dielectric layer 36 by an anisotropic etching technique, exposing plug 35. 
The first and second dielectric layers 34 and 36 preferably are selected from among Si3N4, SiO2, BPSG or Ta2O5. A polysilicon layer 38 is formed over the second dielectric layer including over the walls and the plug at the bottom of the via holes. The polysilicon layer is formed of in situ doped polysilicon (poly). An appropriate planarization technique, such as CMP or RIE etching, is used to remove polysilicon on the horizontal flats of the second dielectric layer 36 in order to isolate the poly layer 38 at each container which provides the intermediate structure shown in FIG. 3B.Unlike conventional double-sided container processing, such as illustrated in FIGS. 1A-1C, the present invention does not next proceed to an etch back of the second dielectric layer 36 to the plugs 35 at this juncture of the processing. Instead, as shown in FIG.3C, a narrow channel 39 is etched into the surface 36' of the second dielectric layer 36, such as by using photolithographic techniques, such that the narrow channel 39 intersects a plurality of the polysilicon cylinders 38 at the upper ends of the cylinders 38. For example, a photoresist can be spun onto the surface of second dielectric layer 36 and into via holes 37 and then is patterned to define the location of channel 39 while protecting the rest of the surface of the second dielectric layer 36 and the openings 37 inside the poly layer 38. FIG. 4A is a top view of the corresponding intermediate structure showing the narrow channel 39 having sidewalls 39' and 39'' formed in the surface of the second dielectric layer 36. The channel width dimension "w" between sides 39' and 39'' of the channel 39 is preferably sized to be approximately the container diameter "d" or less (i.e., the largest cross-sectional dimension of the formation or less), to approximately one-half (50%) of the container diameter. The container at this stage of fabrication is a hollow cylinder.In the next processing step, illustrated in cross-section in FIG. 3D and as a top view in FIG. 4B, a dielectric layer 390 made of a different dielectric material than second dielectric layer 36 is deposited in channel 39 to form a dielectric brace layer 390 extending between polysilicon container layers 38. The container is kept masked during this step, such as with a photoresist 38', so as to prevent unwanted dielectric from entering the container during filling of channel 39. After depositing brace layer 390, CMP is conducted to planarize the surface of the device while the container is still masked.This dielectric brace layer 390, which is deposited to prevent falling of tall containers, and studs as illustrated in another embodiment described herein, can be Si3N4, SiO2, BPSG, Ta2O5 with the proviso that it is a different material from the second dielectric material such that the second dielectric can be selectively etched away (wet or dry etching) while leaving the brace dielectric layer intact in a subsequent processing step.As shown in FIG. 3E, the second dielectric layer 36 is selectively etched away until the first dielectric layer 34 and plug 35 is reached while leaving the dielectric brace layer 390 intact. The brace layer 390 remains suspended between outer lateral sides 16' of the two poly cylinders 38 at their upper ends (e.g., within the upper 50%, preferably the top 25%, and more preferably the upper 10%, of the cylinder height) as a microbridge type of structure. 
A vertical space "s" or gap exists between the dielectric brace layer 390 and the upper surface 34' of the first dielectric layer 34. In this manner, the second dielectric layer 36 is used as a type of sacrificial layer. For example, if the second dielectric layer is BPSG or SiO2 and the dielectric brace layer 390 is silicon nitride, the second dielectric layer 36 can be selectively etched away using HF or HF+water, which will not remove the silicon nitride brace layer 390. On the other hand, if silicon nitride is used as the second dielectric 36 while SiO2, BPSG, or Ta2O5 is used as the dielectric brace layer 390, the silicon nitride can be selectively etch removed using phosphoric acid. A free standing cylindrical structure 38 is left exposed with transverse structural support from brace layer 390 over the first dielectric layer 34 after removing the second dielectric layer 36.The dielectric brace layer 390 usually will extend to other containers not shown in the figures so as to form a mechanical bracing support spanning between a considerable series of different containers along the common linkage of brace layer 390. Although a plurality of separate brace layers 390 can be used, it is also possible to provide more than one dielectric brace layer where they intersect at a container (or containers) such that a two-dimensional network or lattice of dielectric brace layers is formed through-out the array of containers. Also, the depth of the channels 39 formed that determines the thickness of the dielectric brace layer 390 is a function of the H/W container dimensions, the dielectric material used, and other factors. From a functional standpoint, the size of the dielectric brace layer must be selected to be large enough to provide lateral buttressing forces sufficient to substantially if not completely prevent the falling problems, yet not be so large that the relative weight of the brace layer becomes a factor. As to the width "w" of the brace layer390, the brace generally has a width equal to or less than the largest cross-sectional dimension of the microstructures, which is the cylinder diameter "d" for the embodiment shown in FIG. 4A.Referring to FIG. 4B, the transverse or lateral directions mentioned herein indicate the x-and y-directions, or a combined vector thereof, across the flat major surfaces of the dielectric layers. The dielectric brace layer 390 can be deposited by chemical vapor deposition techniques conventionally used to deposit these materials. The dielectric brace material also must have good enough adhesion to a top electrode material to be applied in a later processing step such that there is no peeling during further processing.Referring to FIG. 4C, each brace layer 390 not only connects a plurality of container capacitor microstructures 38 near their respective tops but it also ultimately extends to the edges of the IC die active circuit area 395, where the brace 390 locks to solid non-active portions 396 and 396' of a die 397 provided at the same elevation level as the brace layer 390. The non-active portions 396 and 396' of the die 397 are adjacent the fabricated circuitry 395. The brace layer 390 can extend linearly between the tops of capacitor microstructures 38 between non-active portions 396 and 396' of the die 397, or, as illustrated in FIG. 4C, the brace layer 390 can follow a non-linear path before being anchored at its respective ends 390' and 390'' at non-active areas 396 and 396' of the die 397. 
This provides an anchored system of braced-tall containers (or braced-tall stud capacitors according to a separate embodiment of this invention described in connection with FIG. 6E). In this way, the containers 38 are afforded good mechanical support in at least transverse or lateral directions to fortify the three-dimensional free-standing container microstructures to be defined during removal of the second dielectric 36 and subjecting the in-process wafer to further handling and processing operations which are described below. As shown in FIG. 3F, in further processing to complete the container structure after forming the brace layer 390, a conductively doped hemi-spherical grain (HSG) poly 391 is formed on both the inside and outside of the poly layer 38. This is done so that a double sided container can be fabricated. The hemispherical grain layer (HSG) can be formed by deposition or vacuum annealing the poly layer 38 according to known techniques. If the HSG is deposited, a blanket etch of the HSG typically follows that results in the formation of HSG poly that is texturized or rugged poly. A capacitor dielectric film 392 is formed that surrounds the entire surface HSG layer 391 of the storage node electrode. The capacitor dielectric can be formed of Si3N4, Ta2O5, BST, PZT, SBT, or SiO2 and the like. It can be deposited by LPCVD, PECVD, and so forth, to a desired thickness with regard to the capacitance of the device. The thin dielectric film 392 can be annealed to stabilize the film. Then, a top electrode 393 is formed to provide two containers 394 configured as a double crown structure (double container cell) as shown. The electrode material can be polysilicon, HSG, Pt, RuOx, Ru, Ir, Pt+Rh, TiN, WNx, or TaN and the like. The top electrode 393 typically is a doped conformal poly layer that blanket covers the capacitor dielectric 392 and serves as a common capacitor cell plate to the entire array of containers formed. The dielectric brace layer 390 takes up relatively little circumferential room around the upper end of the container (i.e., the end opposite the end in contact with first dielectric layer 34), so the HSG layer 391, capacitor dielectric 392 and top electrode 393 can be formed without being disturbed by the presence of the dielectric brace layer 390. The gap "z" between the top electrode 393 and the surface of the first dielectric layer 34 is approximately 2 µm for many capacitor structures of DRAMs. Conventional process steps are performed from this point on to complete the semiconductor device. FIG. 5A is a plan view of a memory module 500 having memory chips 50-58 including semiconductor memory devices constructed in accordance with the present invention. That is, chips 50-58 have a DRAM cell such as described in connection with FIG. 3F (or FIG. 6E infra). Memory module 500 is a SIMM (single in line memory module) having nine memory chips (IC's) 50-58 aligned on one side of a printed circuit board substrate. The number of such memory chips in the SIMM typically will vary from 3 to 9. The circuit board 501 has an edge connector 502 along one longitudinal edge to permit it to plug into a memory socket on a computer motherboard of conventional design (not shown). A wiring pattern (not shown), which can be a conventionally known design for this purpose, is formed on the board 501 and connects the terminals or leads shown comprising the edge connector 502 to the memory chips 50-58.
Small ceramic decoupling capacitors 59 are also mounted on substrate 501 to suppress transient voltage spikes. Other than the inventive memory device structures used in memory chips 50-58, the general layout of the SIMM 500 can be a conventional construction.FIG. 5B is a block diagram of a processor-based system 504 using RAM 512 constructed in accordance with the present invention. That is, RAM 512 uses a DRAM cell such as described in connection with FIG. 3E (or FIG. 6E infra). The processor-based system 504 may be a computer system, a process control system or any other system employing a processor and associated memory. The system 504 includes a central processing unit (CPU) 505, e.g., a microprocessor, that communicates with the RAM 512 and an I/O device 508 over a bus 511. It must be noted that the bus 511 may be a series of buses and bridges commonly used in a processor-based system, but for convenience purposes only, the bus 511 has been illustrated as a single bus. A second I/O device 510 is illustrated, but is not necessary to practice the invention. The processor-based system 504 also includes read-only memory (ROM) 514 and may include peripheral devices such as a floppy disk drive 507 and a compact disk (CD) ROM drive 509 that also communicates with the CPU 505 over the bus 511 as is well known in the art.FIGS. 6A through 6E are cross sectional views illustrating an embodiment for manufacturing studs for a capacitor according to the present invention. In fabricating the inventive stud capacitors, as shown in FIG. 6A, via holes 67 are formed through a second dielectric layer 66 over a first dielectric layer 61 arranged on a semiconductor substrate 62. The substrate 62 has a circuit element 63, such as a transistor, including impurity source and drain regions 64 and 64'. One of the source or drain regions 64 is connected to the via holes 67 via conductive plug 65. After the via hole 67 is formed through the second dielectric layer 66 in alignment with the plug 65 previously formed in the first dielectric layer 61, a metal or other conductive material 68 (e.g., Al, Al-alloys, W, highly doped poly) is deposited so as to fill the via hole 67 and form the stud 68. The metal is removed from the surface of the second dielectric layer 66 by planarization (e.g., CMP) to yield the intermediate structure shown in FIG. 6B. In the next process step, a narrow channel is formed in the surface 66' of the second dielectric layer 66 between the studs 68 and other studs not shown in the partial view using the techniques described above in connection with channel 39 in FIG. 3C. FIG. 7A is a top view of the corresponding intermediate structure showing a narrow channel 69 having sidewalls 69' and 69'' formed in the surface of the second dielectric layer 66. The channel width dimension "w" between sides 69' and 69'' of the channel 69 is preferably sized to be approximately the stud diameter "d" or smaller, such as approximately 50% of the diameter "d" although not limited thereto.As shown in FIG. 6C, the channel is then filled with a dielectric brace layer 690 similar to brace layer 390 discussed in connection with FIG. 3D except that the dielectric brace layer 690 here interconnects metal studs instead of poly cylinders. The result is also shown in the top view of FIG. 7B.The second dielectric layer 66 is then selectively etched away by methods described above in connection with FIG. 3E until the first dielectric layer 61 and plug 65 is reached with the resulting structure shown in FIG. 6D. 
A free standing stud structure 68 is left exposed with transverse structural support from brace layer 690 over the first dielectric layer 61 after removing the second dielectric layer 66. In further processing, the studs 68 have a conductively doped hemi-spherical grain (HSG) poly layer 600 formed on their exterior profile, and a capacitor dielectric film 601 is provided over the entire surface HSG layer 600 of the storage node electrode. Then, a top capacitor electrode 602, such as poly, is formed to complete the storage cell 60. The HSG film, capacitor dielectric film and top electrode layers can be of the constructions described above.As previously discussed above with reference to FIG. 4C in connection with the container capacitors illustrated in FIG. 3F, but as equally applicable to the stud capacitors of this embodiment, the brace layer 690 may extend to the edge of the die active area for anchoring purposes. In FIG. 4C, each brace layer 390 ultimately extends to the edges of the IC die active circuit area 395, where the brace 390 locks to solid non-active portions 396 and 396' of a die 397 around or adjacent to the fabricated circuitry to further anchor the braced-tall capacitor microstructures. FIG. 7C shows an analogous top view of an IC die active circuit area where the brace 690 extends between studs 68. In this way, the studs 68 are afforded good mechanical support in at least transverse or lateral directions during removal of the second dielectric 66 and further wafer handling and processing operations.For the embodiments described herein, additional conductive and passivation layers are formed thereover to complete the DRAM devices as is known to those skilled in the art. While the figures only show a limited number of capacitors being formed for sake of clarity, it will be understood that a multitude of cells will be simultaneously fabricated in a similar manner on the substrate. Also, the capacitor can be used in other chips in addition to DRAMs. That is, the invention is applicable to any semiconductor devices needing a capacitor, such as DRAM and embedded DRAM. Although illustrated in connection with cylindrical container, or stud structures, the invention also could be used for a storage node formed as a pillar or villus structure. Also, non-cylindrical shaped containers or studs are also contemplated for practice within the scope of the invention such as bar or rectangular shapes, oval, and so forth. Additionally, the principles and teachings of this invention are generally applicable to other tall microstructures, and are not necessarily limited to features of a capacitor.While the present invention is described herein with reference to illustrative embodiments for particular applications, it should be understood that the invention is not limited thereto. Those having ordinary skill in the art and access to the teachings provided herein will recognize additional modifications, applications, and embodiments within the scope of the present invention. |
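The dimensional guidance given in the description above — a brace channel width "w" no greater than the largest cross-sectional dimension "d" of the microstructure (preferably about one-half of "d"), and a brace suspended near the upper end of the structure (within roughly the upper 50%, 25%, or 10% of its height) — can be collected into one simple check. The sketch below is illustrative only; the function name, parameter names, and example dimensions are hypothetical and are not taken from the disclosure.

```python
# Illustrative sketch only: collects the brace sizing/placement guidance from the
# description above into one check. Names and the chosen fraction are assumptions.

def brace_layout_ok(brace_width_um: float,
                    brace_top_offset_um: float,
                    structure_diameter_um: float,
                    structure_height_um: float,
                    upper_fraction: float = 0.25) -> bool:
    """Return True when a proposed brace layout follows the stated guidelines:
    - brace width no greater than the largest cross-sectional dimension of the
      microstructure (about half the diameter is the preferred target); and
    - brace attached within the upper portion of the structure height
      (upper 50%, preferably 25%, more preferably 10%), measured from the top.
    """
    width_ok = brace_width_um <= structure_diameter_um
    placement_ok = brace_top_offset_um <= upper_fraction * structure_height_um
    return width_ok and placement_ok

# Example: a 1.5 um tall stud with a 0.2 um diameter, braced 0.1 um below its top
# by a 0.1 um wide brace (about half the diameter), satisfies both guidelines.
print(brace_layout_ok(brace_width_um=0.1, brace_top_offset_um=0.1,
                      structure_diameter_um=0.2, structure_height_um=1.5))  # True
```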
To prevent short defects between source/drains of transistors of a complementary cell circuit, isolation walls (414A) are formed in an isolation region between the source/drains of the transistors prior to growing a P-type epitaxial layer and an N-type epitaxial layer on respective sides of the isolation region. The isolation walls provide a physical barrier to prevent formation of short defects that can otherwise form between the P-type (412P) and N-type (412N) epitaxial layers. Thus, the isolation walls prevent circuit failures resulting from electrical shorts between source/drain regions of transistors in complementary cell circuits. A width of the isolation region between a P-type transistor and an N-type transistor in a circuit cell layout can be reduced so that a total layout area of the complementary cell circuit can be reduced without reducing product yield. A gate cut may be formed in the dummy gate with a process of forming the isolation walls.
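A rough way to see why the isolation walls permit a narrower isolation region is to compare the lateral epitaxial overgrowth from the P-type and N-type sides against the isolation-region width: without a wall, the two overgrowths can meet and form a short, while a wall blocks growth at the wall itself. The sketch below is illustrative only; the names and numbers are hypothetical assumptions, not values from the disclosure.

```python
# Illustrative sketch only: compares lateral epitaxial overgrowth from the P-type
# and N-type sides against the isolation region width. All names and numbers are
# hypothetical assumptions, not values from the disclosure.

def epi_layers_short(p_overgrowth_nm: float,
                     n_overgrowth_nm: float,
                     isolation_width_nm: float,
                     has_isolation_wall: bool) -> bool:
    """Without a wall, a short can form when the two lateral overgrowths together
    span the isolation region; with a wall, growth is blocked at the wall and the
    epitaxial layers cannot merge."""
    if has_isolation_wall:
        return False
    return (p_overgrowth_nm + n_overgrowth_nm) >= isolation_width_nm

# Example: 15 nm of overgrowth from each side over a 25 nm isolation region merges
# without a wall, but the same geometry is safe once an isolation wall is present.
print(epi_layers_short(15, 15, 25, has_isolation_wall=False))  # True  -> short defect
print(epi_layers_short(15, 15, 25, has_isolation_wall=True))   # False -> no short
```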
What is claimed is:1. A complementary cell circuit, comprising: a semiconductor substrate comprising: a P-type region; an N-type region; and an isolation region between the P-type region and the N- type region, the isolation region having a width extending in a direction of a first axis; a gate extending longitudinally in the direction of the first axis, the gate extending across portions of each of the P-type region, the isolation region, and the N-type region; a first P-type epitaxial (epi) source/drain (S/D) (epi-S/D) formed on the P-type region on a first side of the gate, the first P-type epi-S/D extending above the isolation region in a first direction of the first axis; a first N-type epi-S/D formed on the N-type region on the first side of the gate, the first N-type epi-S/D extending above the isolation region in a second direction of the first axis; a first isolation wall on the first side of the gate extending from the isolation region in a third direction orthogonal to the first axis, the first isolation wall isolating the first P-type epi-S/D from the first N-type epi-S/D; a second P-type epi-S/D formed on the P-type region on a second side of the gate, the second P-type epi-S/D extending above the isolation region in the first direction of the first axis; a second N-type epi-S/D formed on the N-type region on the second side of the gate, the second N-type epi-S/D extending above the isolation region in the second direction of the first axis; and a second isolation wall on the second side of the gate extending from the isolation region in the third direction orthogonal to the first axis, the second isolation wall isolating the second P-type epi-S/D from the second N-type epi-S/D.2. The complementary cell circuit of claim 1, further comprising:
a gate cut disposed at an end of the gate, the gate cut comprising a material of which the first isolation wall and the second isolation wall are formed.3. The complementary cell circuit of claim 1, wherein: a bottom end of the first isolation wall and a bottom end of the second isolation wall are below a top surface of the isolation region.4. The complementary cell circuit of claim 2, wherein: the material of the gate cut, the first isolation wall, and the second isolation wall comprises at least one of Silicon Nitride (SiN), Silicon Oxi-Nitride (SiON), Silicon Carbide (SiC), and Aluminum Oxide (AIO).5. The complementary cell circuit of claim 1, wherein: the first isolation wall and the second isolation wall each extend longitudinally in a fourth direction orthogonal to the gate.6. The complementary cell circuit of claim 1, wherein: the N-type region comprises an N-type fin extending in the third direction from the semiconductor substrate; the first N-type epi-S/D and the second N-type epi-S/D are formed on the N-type fin; the P-type region comprises a P-type fin extending in the third direction from the semiconductor substrate; and the first P-type epi-S/D and the second P-type epi-S/D are formed on the P-type fin.7. The complementary cell circuit of claim 1, wherein: the first and second N-type epi-S/Ds are formed on at least one N-type gate-all- around (GAA) structure extending longitudinally in a fourth direction orthogonal to the first direction and the third direction; the first and second P-type epi-S/Ds are formed on at least one P-type GAA structure extending longitudinally in the fourth direction; and
the N-type GAA structure and the P-type GAA structure each comprise a nanosheet, a nanoslab, or a nanowire.8. The complementary cell circuit of claim 1, wherein: a planar N-type transistor comprises the first N-type epi-S/D and the second N- type epi-S/D; and a planar P-type transistor comprises the first P-type epi-S/D and the second P-type epi-S/D.9. The complementary cell circuit of claim 1 integrated in an integrated circuit (IC).10. The complementary cell circuit of claim 1, integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a global positioning system (GPS) device; a mobile phone; a cellular phone; a smart phone; a session initiation protocol (SIP) phone; a tablet; a phablet; a server; a computer; a portable computer; a mobile computing device; a wearable computing device; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; an automobile; a vehicle component; avionics systems; a drone; and a multicopter.11. A method of forming a complementary cell circuit including isolation structures, the method comprising: forming a P-type region on a first side of an isolation region extending longitudinally in a first direction on a semiconductor substrate; forming an N-type region on a second side of the isolation region on the semiconductor substrate; forming a dummy gate extending longitudinally in a second direction orthogonal to the first direction and extending across portions of the P-type region, the isolation region, and the N-type region;
depositing a dielectric layer on the P-type region, the isolation region, and the N- type region on a first side and a second side of the dummy gate; etching a first trench through the dielectric layer on the first side of the dummy gate and a second trench through the dielectric layer on the second side of the dummy gate; forming isolation structures, comprising: filling the first trench with an isolation material to form a first isolation wall; and filling the second trench with the isolation material to form a second isolation wall; forming a first N-type epitaxial (epi) source/drain (S/D) (epi-S/D) on the N-type region on the first side of the dummy gate, the first N-type epi-S/D extending above the isolation region on a first side of the first isolation wall; forming a second N-type epi-S/D on the N-type region on the second side of the dummy gate, the second N-type epi-S/D extending above the isolation region on a first side of the second isolation wall; forming a first P-type epi-S/D on the P-type region on the first side of the dummy gate, the first P-type epi-S/D extending above the isolation region on a second side of the first isolation wall and isolated from the first N-type epi-S/D by the first isolation wall; and forming a second P-type epi-S/D on the P-type region on the second side of the dummy gate, the second P-type epi-S/D extending above the isolation region on a second side of the second isolation wall and isolated from the second N-type epi-S/D by the second isolation wall.12. The method of claim 11, further comprising: forming a trench mask on the dielectric layer, comprising: depositing a trench mask layer; and patterning the trench mask layer to create openings for forming trenches in the dielectric layer.13. The method of claim 11, wherein:
etching the first trench through the dielectric layer further comprises etching the first trench into a surface of the isolation region on the first side of the dummy gate; and etching the second trench through the dielectric layer further comprises etching the second trench into the surface of the isolation region on the second side of the dummy gate.14. The method of claim 11, wherein: filling the first trench and the second trench with the isolation material further comprises filling the first trench and the second trench with the isolation material to a height of the dummy gate.15. The method of claim 11, further comprising: forming a gate cut mask, comprising: depositing a gate cut mask layer; and patterning the gate cut mask layer to create an opening above the dummy gate for forming a trench in the dummy gate.16. The method of claim 15, further comprising: etching a gate cut trench through the dummy gate; wherein forming the isolation structures further comprises filling the gate cut trench with the isolation material to form a gate cut.17. The method of claim 16, wherein: filling the gate cut trench with the isolation material further comprises filling the gate cut trench with the isolation material to a height of the dummy gate.18. The method of claim 11, wherein: forming the P-type region further comprises forming a P-type planar region on the semiconductor substrate; and forming the N-type region further comprises forming an N-type planar region on the semiconductor substrate.
19. The method of claim 11, wherein: forming the P-type region further comprises forming at least one P-type fin extending orthogonally to the semiconductor substrate; and forming the N-type region further comprises forming at least one N-type fin extending orthogonally to the semiconductor substrate.20. The method of claim 11, wherein: forming the P-type region further comprises forming at least one P-type gate-all-around (GAA) structure extending longitudinally in a direction substantially parallel to the semiconductor substrate; and forming the N-type region further comprises forming at least one N-type GAA structure extending longitudinally in a direction substantially parallel to the semiconductor substrate; wherein a GAA structure comprises a nanosheet, a nanoslab, or a nanowire.
COMPLEMENTARY CELL CIRCUITS EMPLOYING ISOLATION STRUCTURES FOR DEFECT REDUCTION AND RELATED METHODS OF FABRICATION CLAIM OF PRIORITY UNDER 35 U.S.C. §119 [0001] The present Application for Patent claims priority to Non-provisional Application No. 16/798,947 entitled “COMPLEMENTARY CELL CIRCUITS EMPLOYING ISOLATION STRUCTURES FOR DEFECT REDUCTION AND RELATED METHODS OF FABRICATION” filed February 24, 2020, which is expressly incorporated by reference herein in its entirety. BACKGROUND I. Field of the Disclosure [0002] The field of the disclosure relates to complementary circuits that include N-type and P-type transistors to form integrated circuits (ICs) and, more particularly, to avoiding short defects when fabricating a circuit with N-type and P-type transistors. II. Background [0003] Integrated circuits (ICs) employ large numbers of transistors, which are essential to providing the many functions performed by electronic devices. For example, IC components such as central processing units (CPUs), digital signal processors (DSPs), and memory systems each employ large quantities of transistors in logic circuits and memory circuits. As the functions of electronic devices become more complex, the number of transistors needed to perform such functions increases. There is demand for electronic devices, such as mobile devices, to perform functions more quickly while simultaneously becoming smaller in size. To respond to these demands, the ICs within such devices, and the transistors within those ICs, must be made smaller. The area occupied by transistor circuits in ICs is minimized by efficiently arranging circuits. In this regard, IC developers employ standard cells, which are transistors and interconnect structures that provide a function (e.g., Boolean or memory) and have layouts determined to optimize area. Standard cell layouts reduce unused space. However, making standard cell circuit layouts smaller requires positioning circuit elements closer together, which creates certain technological challenges. One aspect of those challenges is explained with reference to the circuit layout example in Figure 1.
[0004] Figure 1 is an illustration of a standard cell circuit layout 100 of an inverter circuit 102. The inverter circuit 102 is an example of a complementary metal-oxide semiconductor (MOS) (CMOS) cell circuit, or complementary cell circuit, which employs one or more P-type transistors and one or more N-type transistors (e.g., in a complementary manner). In Figure 1, a P-type transistor 104 for the inverter circuit 102 is formed in a P-type diffusion region (“P-type region”) 106, which is a region of the surface of a semiconductor substrate 108, such as silicon, that is lightly doped with a trivalent impurity to create a large number of holes within the semiconductor substrate 108. An N-type transistor 110 is formed in an N-type diffusion region (“N-type region”) 112, which is a region of the semiconductor substrate 108 that is lightly doped with a pentavalent impurity to create a large number of free electrons. Between the P-type region 106 and the N-type region 112 is an isolation region 114 having a width Wiso. The isolation region 114 is an undoped region of the semiconductor substrate 108 that isolates the P-type region 106 on one side of the isolation region 114 from the N-type region 112 on the other side. The P-type transistor 104 includes a source 116P, a drain 118P, and a channel 120P. The N-type transistor 110 includes a source 116N, a drain 118N, and a channel 120N. In the example of the inverter circuit 102 as shown in Figure 1, both the P-type transistor 104 and the N-type transistor 110 are coupled to a common gate 122. The gate 122 spans both the channel 120P and the channel 120N to control operation of the P-type transistor 104 and the N-type transistor 110 by a voltage applied to the gate 122. Details of operation of the inverter circuit 102 are understood by persons of ordinary skill and are, therefore, not discussed further herein. [0005] The sources 116P, 116N and the drains 118P, 118N of the P-type transistor 104 and the N-type transistor 110 are formed of a crystal material having properties that are beneficial to CMOS circuits. Silicon crystal material, for example, is formed by silicon epitaxial deposition, or epitaxy, which is a process of growing a crystalline epitaxial layer on a substrate. The source 116P and the drain 118P of the P-type transistor 104 are formed in the P-type region 106 in a first epitaxial process, and the source 116N and the drain 118N of the N-type transistor 110 are formed in the N-type region 112 in a second epitaxial process. As a crystalline structure grows vertically, it also extends horizontally. Thus, in the inverter circuit 102, an epitaxial layer in the source 116P of the P-type transistor 104 extends above the isolation region 114 (e.g., horizontally) towards
the N-type transistor 110. Similarly, an epitaxial layer in the source 116N of the N-type transistor 110 extends above the isolation region 114 towards the P-type transistor 104. [0006] One approach to minimizing the area occupied by the standard cell circuit layout 100 is to reduce the width Wiso of the isolation region 114, which reduces a distance between portions of the epitaxial layers of the sources 116P, 116N and the drains 118P, 118N extending above the isolation region 114. However, physical limitations of photolithographic methods and epitaxial growth processes present challenges to further decreasing the geometries of transistors in this regard. Small variations in those processes can result in defects that cause, for example, short circuits that lead to circuit failure. Thus, problems with process variation occurring in fabrication of planar and three- dimensional transistors are an obstacle to further reducing circuit area.SUMMARY OF THE DISCLOSURE[0007] Aspects disclosed herein include complementary cell circuits employing isolation structures for defect reduction. Related methods of fabricating complementary cell circuits that employ such isolation structures are also disclosed. As the distance between a P-type region and an N-type region of a complementary cell circuit is reduced in an effort to reduce circuit area, there is an increase in the number of short defects caused by process variations. In exemplary aspects disclosed herein, to reduce or avoid short defects between sources and drains (source/drains) of adjacent P-type and N-type transistors of a complementary cell circuit, isolation walls are formed in an isolation region between the source/drains of the P-type and N-type transistors. These isolation walls can be formed prior to growing a P-type epitaxial layer and an N-type epitaxial layer on respective sides of the isolation region. The isolation walls serve to limit growth of the respective epitaxial layers in a direction extending above the isolation region. The isolation walls provide a physical barrier to prevent formation of short defects that can otherwise form between the P-type and N-type epitaxial layers. Thus, the isolation walls can prevent circuit failures resulting from electrical shorts between source/drain regions of transistors in complementary cell circuits. In this manner, a width of the isolation region between a P-type transistor and an N-type transistor in a circuit cell layout can be reduced so that a total layout area of the complementary cell circuit can be reduced without reducing product yield. In another exemplary aspect, a gate cut, which is an
isolation structure that electrically isolates a gate of a complementary cell circuit from a gate of an adjacent cell circuit, may be formed with the isolation walls.[0008] In a first aspect, a complementary cell circuit is disclosed. The complementary cell circuit includes a semiconductor substrate including a P-type region, an N-type region, and an isolation region between the P-type region and the N- type region, the isolation region having a width extending in a direction of a first axis. The complementary cell circuit further includes a gate extending longitudinally in the direction of the first axis, the gate extending across portions of each of the P-type region, the isolation region, and the N-type region. The complementary cell circuit includes a first P-type epitaxial (epi) source/drain (S/D) (epi-S/D) formed on the P-type region on a first side of the gate, the first P-type epi-S/D extending above the isolation region in a first direction of the first axis, and a first N-type epi-S/D formed on the N-type region on the first side of the gate, the first N-type epi-S/D extending above the isolation region in a second direction of the first axis. The complementary cell circuit includes a first isolation wall on the first side of the gate extending from the isolation region in a third direction orthogonal to the first axis, the first isolation wall isolating the first P-type epi- S/D from the first N-type epi-S/D. The complementary cell circuit includes a second P- type epi-S/D formed on the P-type region on a second side of the gate, the second P-type epi-S/D extending above the isolation region in the first direction of the first axis, and a second N-type epi-S/D formed on the N-type region on the second side of the gate, the second N-type epi-S/D extending above the isolation region in the second direction of the first axis. The complementary cell circuit includes a second isolation wall on the second side of the gate extending from the isolation region in the third direction orthogonal to the first axis, the second isolation wall isolating the second P-type epi-S/D from the second N-type epi-S/D.[0009] In another aspect, a method of forming a complementary cell circuit including isolation structures is disclosed. The method includes forming a P-type region on a first side of an isolation region extending longitudinally in a first direction on a semiconductor substrate and forming an N-type region on a second side of the isolation region on the semiconductor substrate. The method includes forming a dummy gate extending longitudinally in a second direction orthogonal to the first direction and extending across portions of the P-type region, the isolation region, and the N-type region. The method includes depositing a dielectric layer on the P-type region, the isolation region, and the
N-type region on a first side and a second side of the dummy gate, and etching a first trench through the dielectric layer on the first side of the dummy gate and a second trench through the dielectric layer on the second side of the dummy gate. The method includes forming isolation structures, including filling the first trench with an isolation material to form a first isolation wall, and filling the second trench with the isolation material to form a second isolation wall. The method includes forming a first N-type epi-S/D on the N-type region on the first side of the dummy gate, the first N-type epi-S/D extending above the isolation region on a first side of the first isolation wall, and forming a second N-type epi-S/D on the N-type region on the second side of the dummy gate, the second N-type epi-S/D extending above the isolation region on a first side of the second isolation wall. The method also includes forming a first P-type epi-S/D on the P-type region on the first side of the dummy gate, the first P-type epi-S/D extending above the isolation region on a second side of the first isolation wall and isolated from the first N-type epi-S/D by the first isolation wall, and forming a second P-type epi-S/D on the P-type region on the second side of the dummy gate, the second P-type epi-S/D extending above the isolation region on a second side of the second isolation wall and isolated from the second N-type epi-S/D by the second isolation wall.BRIEF DESCRIPTION OF THE FIGURES [0010] Figure 1 is a top view of a standard cell layout for one example of a conventional complementary cell circuit including a P-type diffusion region (“P-type region”) and an N-type diffusion region (“N-type region”) formed on a substrate;[0011] Figure 2A is a top view in a fabrication stage of a complementary cell circuit comprising fins for a P-type Fin Field-Effect Transistor (FET) (FinFET) (PFET) and an N-type FinFET (NFET) across which a dummy gate has been formed;[0012] Figure 2B is a cross-sectional side view of the complementary cell circuit in Figure 2A at a fabrication stage subsequent to formation of epitaxial regions on the P-type and N-type fins without a short defect resulting from process variations;[0013] Figure 2C is a cross-sectional side view of the complementary cell circuit in Figure 2A at a fabrication stage subsequent to formation of epitaxial regions on the P-type and N-type fins including a short defect resulting from process variations;
[0014] Figure 3A is a top view in a fabrication stage of a complementary cell circuit comprising a P-type gate-all-around (GAA) region and an N-type GAA region across which a dummy gate has been formed;[0015] Figure 3B is a cross-sectional side view of the complementary cell circuit in Figure 3A at a fabrication stage subsequent to formation of epitaxial regions on the P- type and N-type GAA regions without a short defect resulting from process variations; [0016] Figure 3C is a cross-sectional side view of the complementary cell circuit in Figure 3A at a fabrication stage subsequent to formation of epitaxial regions on the P- type and N-type GAA regions including a short defect resulting from process variations; [0017] Figure 4A is a top view of an exemplary complementary cell circuit comprising fins for a PFET and an NFET on a substrate across which a dummy gate has been formed, and comprising isolation walls formed between P-type and N-type regions (i.e., regions where source/drains (S/Ds) of the PFET and the NFET are grown) to prevent short defects resulting from process variations in the formation of epitaxial layers;[0018] Figure 4B is a cross-sectional side view of the exemplary complementary cell circuit in Figure 4A illustrating an isolation wall providing a barrier to growth of epitaxial S/D material in a direction above the isolation region;[0019] Figure 4C is a cross-sectional side view of the exemplary complementary cell circuit in Figure 4A showing that isolation walls are not formed in the dummy gate; [0020] Figure 5A is a top view in a fabrication stage of an exemplary complementary cell circuit comprising fins for a PFET and an NFET on a substrate and including a dummy gate, the complementary cell circuit including isolation walls formed between S/D regions of the PFET and the NFET and a gate cut formed at one end of the dummy gate to isolate the gate of the complementary cell circuit in Figure 5A from a gate of an adjacent circuit;[0021] Figure 5B is a cross-sectional side view of the exemplary complementary cell circuit in Figure 5A illustrating an isolation wall providing a barrier to growth of epitaxial S/D material in a direction extending above the isolation region;[0022] Figure 5C is a cross-sectional side view of the exemplary complementary cell circuit in Figure 5A illustrating a gate cut formed at an end of the dummy gate;[0023] Figures 6A and 6B are a flowchart illustrating an exemplary process in a method of fabricating the complementary cell circuit in Figure 5A including isolation
walls formed between a P-type region and an N-type region to prevent short defects resulting from process variations;[0024] Figure 7 A is a top view of a first fabrication stage of the complementary cell circuit, or FinFET circuit, in Figures 4A-4C including fins formed in a P-type region and an N-type region of a substrate and a dummy gate formed across the fins;[0025] Figure 7B is a cross-sectional side view of S/D regions of the FinFET circuit in Figure 7A illustrating fins extending from the substrate in the P-type region and the N- type region;[0026] Figure 7C is a cross-sectional side view through channels of the fins in the FinFET circuit in Figure 7A illustrating a dummy gate overlapping the fins extending from the substrate in the P-type region and the N-type region;[0027] Figure 8A is a top view of a fabrication stage in which a dielectric layer is deposited on the fins of the FinFET circuit in Figure 7A to a height of the dummy gate; [0028] Figure 8B is a cross-sectional side view of the S/D regions of the FinFET circuit in Figure 8A illustrating the dielectric layer deposited on the fins on the substrate on a side of the dummy gate;[0029] Figure 8C is a cross-sectional side view through the channels of the fins in the FinFET circuit in Figure 8A illustrating the dummy gate overlapping the fins extending from the substrate in the P-type region and the N-type region;[0030] Figure 9A is a top view of a fabrication stage in which a first patterned mask is formed on the FinFET circuit in Figure 8A, and voids are etched in the dielectric layer according to the first patterned mask;[0031] Figure 9B is a cross-sectional side view of the S/D regions of the FinFET circuit in Figure 9A illustrating the first patterned mask on the dielectric layer and a void etched in the dielectric layer in the isolation region according to the first patterned mask; [0032] Figure 9C is a cross-sectional side view through the channels of the fins in the FinFET circuit in Figure 9A illustrating the first patterned mask on the dummy gate; [0033] Figure 10A is a top view of a fabrication stage in which a second patterned mask is deposited on the FinFET circuit in Figure 9A and a void is etched in the dummy gate according to the second patterned mask;[0034] Figure 10B is a cross-sectional side view of the S/D regions of the FinFET circuit in Figure 10A illustrating the second patterned mask deposited on the dielectric layer and in the void etched in the dielectric layer;
[0035] Figure 10C is a cross-sectional side view through a channel of the FinFET circuit in Figure 10A illustrating the second patterned mask deposited on the dummy gate and the void etched in the dummy gate according to the second patterned mask;[0036] Figure 11A is a top view of a fabrication stage in which the second patterned mask is removed from the FinFET circuit in Figure 10A;[0037] Figure 11B is a cross-sectional side view of the S/D regions of the FinFET circuit in Figure 11A illustrating the second patterned mask removed from the dielectric layer and from the void etched in the dielectric layer in the isolation region;[0038] Figure 11C is a cross-sectional side view through the channels of the fins in the FinFET circuit in Figure 11A illustrating the second patterned mask removed from the dummy gate;[0039] Figure 12A is a top view of a fabrication stage in which the voids in the dielectric layer and the dummy gate of the FinFET circuit in Figure 11A have been filled to form isolation structures including isolation walls and a gate cut;[0040] Figure 12B is a cross-sectional side view of the S/D regions of the FinFET circuit in Figure 12A illustrating the isolation walls formed in the void etched in the dielectric layer in the isolation region;[0041] Figure 12C is a cross-sectional side view through the channels of the fins in the FinFET circuit in Figure 12A illustrating the gate cut formed in the void etched in the dummy gate;[0042] Figure 13A is a top view of a fabrication stage in which the dielectric layer of the FinFET circuit in Figure 12A has been removed and epitaxial material is formed on the fins in the P-type region and the N-type region;[0043] Figure 13B is a cross-sectional side view of the S/D regions of the FinFET circuit in Figure 13A illustrating the isolation walls which prevent short defects between the epitaxial material formed on the P-type region and the epitaxial material formed on the N-type region;[0044] Figure 13C is a cross-sectional side view through the channels of the fins in the FinFET circuit in Figure 13A illustrating the gate cut formed in the void etched in the dummy gate;[0045] Figure 14A is a top view of another exemplary complementary cell circuit that is a GAA circuit in which isolation walls are formed on each side of a dummy gate to
prevent formation of short defects between epitaxial material in the P-type region and epitaxial material in the N-type region;[0046] Figure 14B is a cross-sectional side view of epitaxial (epi) S/D (epi-S/D) regions of the GAA circuit in Figure 14A illustrating isolation walls providing a barrier to prevent formation of short defects between the epitaxial material formed on the P-type region and the epitaxial material formed on the N-type region;[0047] Figure 15 is a block diagram of an exemplary processor-based system that can include an IC including a complementary cell circuit employing isolation walls for preventing short defects between epitaxial regions, as illustrated in any of Figures 4A- 4C, 5A-5C, and 12A-14B; and[0048] Figure 16 is a block diagram of an exemplary wireless communications device that includes radio frequency (RF) components formed from an IC, including a complementary cell circuit employing isolation walls for preventing short defects between epitaxial regions, as illustrated in any of Figures 4A-4C, 5A-5C, and 12A-14B.DETAILED DESCRIPTION[0049] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects.[0050] Aspects disclosed herein include complementary cell circuits employing isolation structures for defect reduction. Related methods of fabricating complementary cell circuits that employ such isolation structures are also disclosed. As the distance between a P-type region and an N-type region of a complementary cell circuit is reduced in an effort to reduce circuit area, there is an increase in the number of short defects caused by process variations. In exemplary aspects disclosed herein, to reduce or avoid short defects between sources and drains (source/drains) of adjacent P-type and N-type transistors of a complementary cell circuit, isolation walls are formed in an isolation region between the source/drains of the P-type and N-type transistors. These isolation walls can be formed prior to growing a P-type epitaxial layer and an N-type epitaxial layer on respective sides of the isolation region. The isolation walls serve to limit growth of the respective epitaxial layers in a direction extending above the isolation region. The isolation walls provide a physical barrier to prevent formation of short defects that can
otherwise form between the P-type and N-type epitaxial layers. Thus, the isolation walls can prevent circuit failures resulting from electrical shorts between source/drain regions of transistors in complementary cell circuits. In this manner, a width of the isolation region between a P-type transistor and an N-type transistor in a circuit cell layout can be reduced so that a total layout area of the complementary cell circuit can be reduced without reducing product yield. In another exemplary aspect, a gate cut, which is an isolation structure that electrically isolates a gate of a complementary cell circuit from a gate of an adjacent cell circuit, may be formed with the isolation walls.[0051] Before discussing examples of complementary cell circuits that include isolation walls formed between P-type and N-type regions where source/drains (S/Ds) of the PFET and the NFET can be grown to prevent or reduce short defects resulting from process variations in the formation of epitaxial layers starting at Figure 4A, examples of complementary cell circuits of different three-dimensional (3D) transistors that do not include such isolation walls are first illustrated in Figures 2A-2C and 3A-3C and discussed.[0052] Figures 2A-2C illustrate an example of a complementary cell circuit that is a Fin Field-Effect Transistor (FET) (FinFET) circuit 200. Figure 2A is a top view of the FinFET circuit 200 including a P-type region 202P and an N-type region 202N of a semiconductor substrate 204. The P-type region 202P and the N-type region 202N are on opposite sides of an isolation region 206. The semiconductor substrate 204 extends in a plane including an X-axis and a Y-axis orthogonal to each other. Fins 208P and 208N extend longitudinally in a direction substantially parallel to the Y-axis (“Y-axis direction”), and a dummy gate 210 extends longitudinally in a direction substantially parallel to the X-axis (“X-axis direction”). The dummy gate 210 is formed across channels 212P and 212N of the fins 208P and 208N. In this context, “substantially parallel” means parallel or within a few degrees (e.g., 3 degrees) of variation from parallel.[0053] Figure 2B is a side view at cross-section A-A’ of Figure 2A after epitaxial (epi) source/drains (S/Ds) (epi-S/Ds) 214P and 214N are formed on the fins 208P and 208N, respectively. Figure 2B shows that the fins 208P extend in a Z-axis direction from the P-type region 202P, orthogonal to the plane of the semiconductor substrate 204, and the fins 208N extend in the Z-axis direction from the N-type region 202N. A shallow trench isolation (STI) layer 216 is deposited between the fins 208P and 208N. As the
crystalline structure 218 of the epi-S/Ds 214P and 214N grows on the fins 208P and 208N, the epi-S/Ds 214P and 214N extend horizontally and couple to the crystalline structures 218 on adjacent fins 208P and 208N. The extent of such growth is determined by various factors including time and loading effects. To keep the epi-S/Ds 214P separate from the epi-S/Ds 214N, the isolation region 206 is provided between the P-type region 202P and the N-type region 202N. In a normal process, a time for growth of the crystalline structures 218 is set to allow the epi-S/Ds 214P and 214N to extend horizontally far enough to couple to each other, but not far enough to extend across the isolation region 206.[0054] Figure 2C is another cross-sectional side view, along line B-B’ in Figure 2A, after a short defect is formed due to process variations. To explain how the short defect in Figure 2C is created, a high-level description of processes for forming the epi-S/Ds 214P and 214N is provided. A first mask (not shown) is formed on the fins 208P in Figure 2A. In a first epitaxial growth process, the epi-S/Ds 214N are formed on the fins 208N. Next, the first mask is removed from the recessed fins 208P, and a second mask 220 is formed on the epi-S/Ds 214N. The second mask 220 extends horizontally into the isolation region 206 to a point that is farther than an epi-S/D 214N would horizontally extend assuming no process variation has affected the size of the epi-S/D 214N. The extent of the second mask 220 is based on an expected size of the crystalline structure 218. Then, in a second epitaxial growth process, the epi-S/Ds 214P are formed on the fins 208P.[0055] Certain process factors (e.g., loading effects) may vary during the first epitaxial growth process. As a result, the epi-S/Ds 214N in Figure 2C are larger than expected. As a result, the second mask 220 does not extend far enough horizontally above the isolation region 206 to fully cover the epi-S/D 214N. Hence, there is an exposed portion of the epi-S/D 214N not covered by the second mask 220 and, in the second epitaxial growth process, an unintended crystalline structure 222 is also formed on the exposed portion of the epi-S/D 214N. The crystalline structure 222 extends horizontally across the isolation region 206 and comes into contact with the epi-S/D 214P, creating an electrical connection or short defect. As a result, the FinFET circuit 200 fails to operate as expected.[0056] In a second example of a complementary cell circuit, a gate-all-around (GAA) circuit 300 is illustrated in Figures 3A-3C. Figure 3 A is a top view of the GAA circuit
300 including P-type region 302P and N-type region 302N on a semiconductor substrate 304. The P-type region 302P and the N-type region 302N are on opposite sides of an isolation region 306. The P-type region 302P and the N-type region 302N include nanosheets (or nanoslabs) 308P and 308N, respectively. The semiconductor substrate 304 extends in a plane including an X-axis and a Y-axis orthogonal to each other. The P- type region 302P and the N-type region 302N each extend longitudinally in a Y-axis direction, and a dummy gate 310 extends longitudinally in an X-axis direction. The dummy gate 310 is formed on channels 312P and 312N of the P-type region 302P and the N-type region 302N, respectively.[0057] Figure 3B is a side view at cross-section A-A’ of Figure 3A after epi-S/Ds 314P and 314N are formed on the nanosheets 308P and 308N, respectively. As shown in Figure 3B, the epi-S/Ds 314P and 314N are also formed on P-type and N-type portions 316P and 316N, respectively, extending in a Z-axis direction from the semiconductor substrate 304. A STI layer 318 is deposited between the P-type and N-type portions 316P and 316N. Positioned above the P-type and N-type portions 316P and 316N in the Z-axis direction are the nanosheets 308P and 308N, which are separated from each other by gaps 320. N-type epi-S/D 314N is formed on N-type portion 316N and nanosheets 308N in a first epitaxial growth process. P-type epi-S/D 314P is formed on P-type portion 316P and nanosheets 308P in a second epitaxial growth process. In the first epitaxial growth process, similar to the first epitaxial growth process described with respect to Figure 2C, the N-type epi-S/D 314N is grown to an intended size extending horizontally above the isolation region 306 in a Y-axis direction toward the P-type region 302P. In the second epitaxial growth process, the P-type epi-S/D 314P is grown to an intended size extending horizontally above the isolation region 306 in a Y-axis direction toward the N-type region 302N. No short defects are present in the GAA circuit 300 in Figure 3B.[0058] Figure 3C is another side view at cross-section A-A’ of Figure 3 A after a short defect is formed due to process variations. The process flow for forming epi-S/Ds 314N and 314P is similar to the process described with reference to Figure 2C, above. In Figure 3C, the epi-S/D 314N is larger than expected due to process variations, extending farther horizontally across the isolation region 306 than expected. As a result, the normally grown epi-S/D 314P, which also extends horizontally across the isolation region 306, comes into contact with the over-sized epi-S/D 314N, creating an electrical connection or short defect. As a result, the GAA circuit 300 fails to operate as expected.
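As a simplified illustration only (the quantities and the inequality below are not called out in the figures and are introduced here solely for discussion), the shorting mechanism described for the FinFET circuit 200 and the GAA circuit 300 can be summarized geometrically. If dP denotes how far a P-type epitaxial layer extends horizontally above the isolation region toward the N-type region, dN denotes the corresponding extension of an N-type epitaxial layer toward the P-type region, and WISO denotes the width of the isolation region, then a short defect can form whenever

    dP + dN >= WISO

Nominal process settings keep dP + dN below WISO, but a process variation that enlarges dN beyond the value assumed when the second mask was patterned (as in Figures 2C and 3C) can push the sum past this limit and create the electrical short described above.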
[0059] Figures 4A-4C are views of an exemplary FinFET circuit 400 that is one example of a complementary cell circuit including isolation walls formed in an isolation region between P-type and N-type transistors to limit growth of epitaxial layers of the respective transistors in a direction extending above the isolation region, as disclosed herein. Figure 4A is a top view of the FinFET circuit 400 including a P-type region 402P and an N-type region 402N of a semiconductor substrate 404 that extends in a plane including an X-axis and a Y-axis. The semiconductor substrate 404 includes an isolation region 406 between the P-type region 402P and the N-type region 402N. The isolation region 406 has a width W extending in a direction of the X-axis (“X-axis direction”). The FinFET circuit 400 includes a dummy gate 408 extending longitudinally in the X-axis direction across portions of the P-type region 402P, the isolation region 406, and the N-type region 402N. In the completed FinFET circuit 400, the dummy gate 408 is replaced by a conductive gate for controlling transistors formed by the P-type region 402P and the N-type region 402N. The P-type region 402P includes fins 410P extending in the Y-axis direction. The N-type region 402N includes fins 410N extending in the Y-axis direction. [0060] With reference to the FinFET circuit 400 in Figure 4A, an epi-S/D 412P is formed on the fins 410P in the P-type region 402P on a side Z4A of the dummy gate 408. The epi-S/D 412P extends above the isolation region 406 in a first X-axis direction (i.e., toward the N-type region 402N). An epi-S/D 412N is formed on the fins 410N in the N-type region 402N on the side Z4A of the dummy gate 408, and the epi-S/D 412N extends above the isolation region 406 in a second X-axis direction (i.e., toward the P-type region 402P). The isolation region 406 includes an isolation wall 414A on the side Z4A of the dummy gate 408. The isolation wall 414A extends from the isolation region 406 in a Z-axis direction to a height HWALL that is tall enough to block the epi-S/Ds 412N and 416N from contacting the epi-S/Ds 412P and 416P above the isolation region 406.[0061] The isolation wall 414A is formed in the isolation region 406 before the epi-S/D 412P and the epi-S/D 412N are grown. The isolation wall 414A occupies the space that would otherwise be occupied by a short defect if either the epi-S/D 412P or the epi-S/D 412N is incorrectly grown due to process variations. Rather than relying solely on the accuracy of fabrication (e.g., photolithographic) processes to avoid short defects, the isolation wall 414A is a physical barrier that prevents or at least reduces the creation of short defects even when process variations do occur.
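Continuing the simplified geometric picture introduced above (again for illustration only, with sP and sN introduced here rather than taken from the figures), the isolation wall 414A changes the situation in two ways. Laterally, growth from each side can proceed only until it reaches the nearer face of the wall, so that

    dP <= sP and dN <= sN, where sP + sN = WISO - WWALL < WISO

with sP and sN being the spacings from the P-type and N-type region edges to the respective faces of the wall; the shorting condition dP + dN >= WISO therefore cannot be reached regardless of how much either epitaxial layer over-grows. Vertically, this barrier remains effective as long as the wall height HWALL is at least as great as the height at which the epi-S/Ds 412P and 412N extend laterally, which is consistent with the example dimensions given below that choose HWALL to correspond to the fin height HFIN.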
[0062] With further reference to the FinFET circuit 400 in Figure 4A, an epi-S/D 416P is formed on the fins 410P in the P-type region 402P on a side Z4B of the dummy gate 408. The epi-S/D 416P extends above the isolation region 406 in the first X-axis direction (i.e., toward the N-type region 402N). An epi-S/D 416N is formed on the fins 410N in the N-type region 402N on the side Z4B of the dummy gate 408, and the epi-S/D 416N extends above the isolation region 406 in the second X-axis direction (i.e., toward the P-type region 402P). The isolation region 406 also includes an isolation wall 414B on the side Z4B of the dummy gate 408 that extends to the height HWALL from the isolation region 406 in the Z-axis direction (e.g., orthogonal to the dummy gate 408). The isolation wall 414B isolates the epi-S/D 416P from the epi-S/D 416N. The isolation wall 414A and the isolation wall 414B may be formed of at least one of Silicon Nitride (SiN), Silicon Oxynitride (SiON), Silicon Carbide (SiC), and Aluminum Oxide (AlO), for example, or other isolation material for providing electrical isolation.[0063] Figure 4B is a cross-sectional side view at line A-A’ of the FinFET circuit 400 in Figure 4A. In Figure 4B, an STI 418 is formed between the respective fins 410P and 410N, including in the isolation region 406. Here, it can be seen that the horizontal growth of the epi-S/Ds 412P and 412N in the X-axis direction above the isolation region 406 is limited by the isolation wall 414A. In this regard, the epi-S/Ds 412P and 412N are prevented from forming short defects even in the presence of a process variation in which the epi-S/Ds 412N resulting from the epitaxial growth process become larger than intended. In addition, because the isolation of the epi-S/Ds 412N during a second epitaxial growth process, in which the epi-S/Ds 412P are formed, is not provided solely by a mask that is based on an expected size of the epi-S/Ds 412N, short defects cannot be created during formation of the epi-S/Ds 412P. As shown in Figure 4B, a bottom end 420A of the isolation wall 414A is below the top surface of the isolation region 406, which is a top surface of the STI 418.[0064] In one non-limiting example, the FinFET circuit 400 may be formed with the following dimensions. The isolation walls 414A and 414B have a width WWALL in the range of 10 nanometers (nm) to 30 nm. The isolation walls 414A and 414B extend to a height HWALL in the range of 50 nm to 150 nm above the semiconductor substrate 404 to correspond to a height HFIN of the fins 410N and 410P above the semiconductor substrate 404 (see Figure 4C). The fins 410N and 410P each have a fin width WFIN in the range of 3 nm to 12 nm and are separated at a pitch PFIN of 15 nm to 40 nm. The epi-S/Ds 416P, 416N, 412P, 412N are formed on
portions of the fins 410N and 410P extending orthogonally for a length LEPI (see Figure 4A) of 30 nm to 80 nm from the dummy gate 408. The dummy gate 408 has a width WDMY of 6 nm to 200 nm.[0065] Figure 4C is a cross-sectional side view at line B-B’ of the FinFET circuit 400 (i.e., complementary cell circuit) in Figure 4A. Figure 4C shows that the dummy gate 408 is formed across channels 422P and 422N of the fins 410P and 410N, respectively. Figure 4C also shows that the FinFET circuit 400 does not have an isolation wall in the dummy gate 408 between the P-type region 402P and the N-type region 402N.[0066] In the FinFET circuit 400 in Figures 4A-4C, the N-type region 402N includes the N-type fins 410N extending in the Z-axis direction from the semiconductor substrate 404. The N-type epi-S/D 412N and the N-type epi-S/D 416N are formed on the N-type fins 410N. The P-type region 402P includes the P-type fins 410P extending in the Z-axis direction from the semiconductor substrate 404. The P-type epi-S/D 412P and the P-type epi-S/D 416P are formed on the P-type fins 410P. Thus, forming the N-type region 402N includes forming N-type fins 410N extending orthogonal to the semiconductor substrate 404 (i.e., in the Z-axis direction), and forming the P-type region 402P includes forming P-type fins 410P extending orthogonal to the semiconductor substrate 404 (i.e., in the Z-axis direction).[0067] Figures 5A-5C illustrate another example of the FinFET circuit 400 of Figures 4A-4C having a gate cut 500 disposed at an end of the dummy gate 408. Figure 5A is a top view of the FinFET circuit 400 similar to the view in Figure 4A, but Figure 5A shows the gate cut 500 extending across the dummy gate 408. Thus, when the dummy gate 408 is replaced by a conductive gate at a subsequent fabrication stage, the conductive gate on the FinFET circuit 400 will be separate from (i.e., electrically isolated from) a conductive gate formed on the opposite side of the gate cut 500 where a dummy gate section 502 is shown. Accordingly, the gate cut 500 will be disposed at an end of the conductive gate. The dummy gate section 502 may extend across an adjacent circuit and be replaced with a conductive gate. As discussed in more detail below, the gate cut 500 may be formed of the same material from which the isolation walls 414A and 414B are formed as part of a common process.[0068] Figure 5B is a cross-sectional side view at line A-A’ of the FinFET circuit 400 in Figure 5A that illustrates the epi-S/Ds 412P and 412N separated by the isolation wall 414A and does not show the gate cut 500 in the FinFET circuit 400.
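The example dimensions above lend themselves to a quick numerical sanity check. The short sketch below (Python, illustration only) picks one hypothetical value inside each stated range; the assumed isolation-region width w_iso is not given in the text and is invented here purely so that the relationships between the dimensions can be exercised.

    # Illustration only: hypothetical values chosen inside the ranges stated
    # above for the FinFET circuit 400. The isolation-region width w_iso is an
    # assumption and does not come from the text.
    w_wall = 20.0    # nm, WWALL stated range: 10 nm to 30 nm
    h_wall = 100.0   # nm, HWALL stated range: 50 nm to 150 nm
    h_fin = 100.0    # nm, HFIN; the text matches HWALL to HFIN
    w_fin = 7.0      # nm, WFIN stated range: 3 nm to 12 nm
    p_fin = 30.0     # nm, PFIN stated range: 15 nm to 40 nm
    w_iso = 60.0     # nm, assumed width of the isolation region 406

    # The wall must be at least as tall as the fins so that laterally growing
    # epitaxial material cannot pass over it.
    assert h_wall >= h_fin

    # The wall must leave room on each side of the isolation region for the
    # epi-S/Ds to extend above the isolation region toward (but not past) it.
    room_per_side = (w_iso - w_wall) / 2.0
    assert room_per_side > 0
    print(f"room for lateral epi growth on each side of the wall: {room_per_side:.0f} nm")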
[0069] Figure 5C is a cross-sectional side view at line B-B’ of the FinFET circuit 400 in Figure 5A, like Figure 4C, but including the gate cut 500. The gate cut 500 is disposed at an end of the dummy gate 408, providing a barrier that electrically isolates a conductive gate in the FinFET circuit 400 from an adjacent circuit (not shown).[0070] Figures 6A and 6B are a flowchart illustrating an exemplary method 600 of forming a complementary cell circuit including isolation structures, such as the FinFET circuit 400 in Figures 4A-4C and 5A-5C. The isolation structures in the FinFET circuit 400 include the isolation walls 414A and 414B, and may include the gate cut 500. The method 600 is described with reference to Figures 7A-7C through Figures 13A-13C. [0071] Figures 7A-7C illustrate the FinFET circuit 400 in Figures 4A-4C in a first fabrication stage 700. The first fabrication stage 700 includes forming the P-type region 402P on a first side of the isolation region 406 on the semiconductor substrate 404 (block 602 in Figure 6A). The first fabrication stage 700 further includes forming the N-type region 402N on a second side of the isolation region 406 on the semiconductor substrate 404 (block 604 in Figure 6A). The first fabrication stage 700 also includes forming a dummy gate 408 extending longitudinally across the P-type region 402P, the isolation region 406, and the N-type region 402N (block 606 in Figure 6A). In the FinFET circuit 400 in Figure 7A, the fins 410P and 410N extending in the Y-axis direction are formed in the P-type region 402P and N-type region 402N, respectively, and the dummy gate 408 is formed to extend longitudinally in the X-axis direction.[0072] Figure 7B is a cross-sectional side view at line A-A’ of the FinFET circuit 400 in Figure 7A illustrating the STI 418 between the fins 410P and 410N and in the isolation region 406. The fins 410P and 410N extend above a top surface of the STI 418. Figure 7C is a cross-sectional side view at line B-B’ of the FinFET circuit 400 in Figure 7A illustrating the dummy gate 408 formed across the channels 422P and 422N and a portion of the STI 418.[0073] Figures 8A-8C illustrate the FinFET circuit 400 in a fabrication stage 800. The fabrication stage 800 includes, as shown in Figure 8A, depositing a dielectric layer 802 on the P-type region 402P, the isolation region 406, and the N-type region 402N on a first side and a second side of the dummy gate 408 (block 608 in Figure 6A). In a photolithographic process, the dielectric layer 802 is deposited on the fins 410N and 410P, the dummy gate 408, and the STI 418 to protect these structures of the FinFET circuit
400. In subsequent fabrication stages, the dielectric layer 802 provides a medium in which the isolation walls 414A and 414B are formed.[0074] Figures 8B and 8C are cross-sectional side views at lines A-A’ and B-B’, respectively, of the FinFET circuit 400 in Figure 8A. The dielectric layer 802 may be planarized to a height HDMY of the dummy gate 408 above the semiconductor substrate 404, which is higher than the fins 410P and 410N.[0075] Figures 9A-9C illustrate the FinFET circuit 400 in a fabrication stage 900. The fabrication stage 900 includes forming a trench mask 902 on the dielectric layer 802 and the dummy gate 408. Figure 9 A is a top view of the FinFET circuit 400 in Figure 8 A showing the trench mask 902 deposited on the dielectric layer 802 and the dummy gate 408. The trench mask 902 is a patterned layer formed of a material that is not vulnerable to the etching process. The trench mask 902 is employed to protect areas that are not to be etched and expose areas that are to be etched. Forming the trench mask 902 includes depositing a trench mask layer 902L and patterning the trench mask layer 902L to create openings 904A and 904B for forming trenches 906A and 906B in the dielectric layer 802. The openings 904A and 904B expose the areas of the FinFET circuit 400 in which the trenches 906A and 906B are formed. Figure 9A shows the trench mask 902 patterned above the isolation region 406 to create the openings 904A and 904B below which the trenches 906 A and 906B are etched into the dielectric layer 802.[0076] The fabrication stage 900 also includes etching the trench 906A through the dielectric layer 802 on the first side of the dummy gate 408 and etching the trench 906B through the dielectric layer 802 on the second side of the dummy gate 408 (block 610 in Figure 6A). Figure 9B is a cross-sectional side view at line A-A’ of the FinFET circuit 400 in Figure 9A illustrating the trench 906A etched in the dielectric layer 802. The trenches 906A and 906B extend into the surface of the isolation region 406. Thus, etching the trench 906A includes etching into the surface of the isolation region 406 on the first side of the dummy gate 408, and etching the trench 906B includes etching into the surface of the isolation region on the second side of the dummy gate 408. Figure 9B shows that the trench 906 A is etched through the dielectric layer 802 and into the STI 418 in the isolation region 406 below the opening 904A in the trench mask 902. Thus, the trench 906A extends in the Z-axis direction into the isolation region 406, which includes the STI 418. Etching the trenches 906A and 906B into the dielectric layer 802 and the STI 418 provides a mold or hollow defining the shape/size of the isolation walls 414A and 414B
to be formed. In addition, after the isolation walls 414A and 414B are formed and the dielectric layer 802 is removed, the portions of the trenches 906A and 906B in the STI 418 provide support for the isolation walls 414A and 414B through subsequent processing. Figure 9C is a cross-sectional side view at line B-B’ of the FinFET circuit 400 in Figure 9A. Figure 9C illustrates that the trench mask 902 is formed above the dummy gate 408 to protect the dummy gate during the etching process forming the trenches 906A and 906B. This portion of the trench mask 902 differs from the mask for forming the gate cut 500 in Figures 10A-10C.[0077] Figures 10A-10C illustrate the FinFET circuit 400 in an optional fabrication stage 1000. The fabrication stage 1000 is an optional fabrication stage employed in fabricating the FinFET circuit 400 to include the gate cut 500. The fabrication stage 1000 includes removing the trench mask 902 from the FinFET circuit 400 in Figure 9A. The fabrication stage 1000 includes optional steps for forming a gate cut 500 as shown in the example in Figures 5A and 5C. Fabrication stage 1000 includes forming a gate cut mask 1002 having an opening 1004 above the dummy gate 408. Figure 10A is a top view of the FinFET circuit 400 in Figure 9A showing the gate cut mask 1002 formed above the dielectric layer 802 and the dummy gate 408. Figure 10A shows the opening 1004 above the dummy gate 408 and extending onto the dielectric layer 802 on each side of the dummy gate 408. In this regard, forming the gate cut mask 1002 includes depositing a gate cut mask layer 1002L on the dielectric layer 802 and the dummy gate 408, and into the trenches 906A and 906B.[0078] Figure 10B is a cross-sectional side view at line A-A’ of the FinFET circuit 400 in Figure 10A illustrating the gate cut mask 1002 on the dielectric layer 802 and in the trench 906A. Forming the gate cut mask 1002 includes patterning the gate cut mask 1002 to create the opening 1004 above the dummy gate 408 for forming the gate cut trench 1006 in the dummy gate 408. The portion of the dummy gate 408 exposed by the opening 1004 in the gate cut mask 1002 is subjected to the etching process, which is controlled by, for example, time and concentration to remove material of the dummy gate 408 and some material of the STI 418 below the opening 1004. The fabrication stage 1000 includes forming the gate cut mask 1002 having the opening 1004 above the dummy gate 408, and etching the gate cut trench 1006 through the dummy gate 408 (block 612 in Figure 6A). Figure 10C is a cross-sectional side view at line B-B’ of the FinFET circuit 400 in Figure 10A illustrating the gate cut trench 1006 etched through the dummy gate 408 below the opening 1004.
Figure 10C shows that the gate cut trench 1006 extends into the STI 418 but may not extend fully through the STI 418 to the semiconductor substrate 404.[0079] Figures 11A-11C illustrate the FinFET circuit 400 in an optional fabrication stage 1100 employed for fabricating the gate cut 500. The fabrication stage 1100 includes removing the gate cut mask 1002 (block 614 in Figure 6A), which was deposited in fabrication stage 1000. Figure 11A is a top view of the FinFET circuit 400 in Figure 10A with the gate cut mask 1002 removed. Figure 11B is a cross-sectional side view at line A-A’ of the FinFET circuit 400 in Figure 11A showing that the gate cut mask 1002 has been removed from the dielectric layer 802, and from the trench 906A. Figure 11C is a cross-sectional side view at line B-B’ of the FinFET circuit 400 in Figure 11A showing the gate cut mask 1002 removed from the dummy gate 408, and the gate cut trench 1006 that was formed in fabrication stage 1000.[0080] Figures 12A-12C illustrate the FinFET circuit 400 in a fabrication stage 1200. The fabrication stage 1200 includes forming isolation structures (block 616 in Figure 6B). Forming the isolation structures includes filling the trench 906A with isolation material 1202 to form the isolation wall 414A and filling the trench 906B with the isolation material 1202 to form the isolation wall 414B (block 618 in Figure 6B). Figure 12A is a top view of the FinFET circuit 400 in Figure 11A with the trenches 906A and 906B and the gate cut trench 1006 filled with the isolation material 1202. The trenches 906A and 906B are filled to the top of the dielectric layer 802.
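For readers following the flowchart of Figures 6A and 6B alongside the fabrication stages, the blocks of the method 600 cited in the text can be restated as a simple ordered list. The sketch below (Python, a reading aid only, not process code) tags the gate-cut-related blocks as optional, consistent with the description of the optional fabrication stages 1000 and 1100 and block 620.

    # Reading aid only: the blocks of method 600 (Figures 6A and 6B) as cited
    # in the surrounding text, in order, with the optional gate cut steps flagged.
    METHOD_600_BLOCKS = [
        (602, "Form P-type region on a first side of the isolation region", False),
        (604, "Form N-type region on a second side of the isolation region", False),
        (606, "Form dummy gate across the P-type region, isolation region, and N-type region", False),
        (608, "Deposit dielectric layer on both sides of the dummy gate", False),
        (610, "Etch first and second trenches through the dielectric layer", False),
        (612, "Form gate cut mask and etch gate cut trench through the dummy gate", True),
        (614, "Remove the gate cut mask", True),
        (616, "Form isolation structures", False),
        (618, "Fill the first and second trenches to form the isolation walls", False),
        (620, "Fill the gate cut trench to form the gate cut", True),
        (622, "Form first N-type epi-S/D on the first side of the dummy gate", False),
        (624, "Form second N-type epi-S/D on the second side of the dummy gate", False),
        (626, "Form first P-type epi-S/D, isolated by the first isolation wall", False),
        (628, "Form second P-type epi-S/D, isolated by the second isolation wall", False),
    ]

    for block, step, optional in METHOD_600_BLOCKS:
        print(f"block {block}: {step}{' (optional)' if optional else ''}")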
cut trench 1006 filled with the isolation material 1202 to form the gate cut 500 to electrically isolate a conductive gate (not shown) that replaces the dummy gate section 502 of the dummy gate 408 in an adjacent circuit.[0083] Figures 13A-13C illustrate the FinFET circuit 400 in a fabrication stage 1300. The fabrication stage 1300 includes forming the N-type epi-S/D 412N on the N-type region 402N on the first side of the dummy gate 408 such that the first N-type epi-S/D 412N extends above the isolation region 406 on a first side of the first isolation wall 414A (block 622 in Figure 6B). The fabrication stage 1300 includes forming a second N-type epi-S/D 416N on the N-type region 402N on the second side of the dummy gate 408, the second N-type epi-S/D 416N extending above the isolation region 406 on a first side of the second isolation wall 414B (block 624 in Figure 6B). The fabrication stage 1300 includes forming a first P-type epi-S/D 412P on the P-type region 402P on the first side of the dummy gate 408, the first P-type epi-S/D 412P extending above the isolation region 406 on a second side of the first isolation wall 414A and isolated from the first N-type epi-S/D 412N by the first isolation wall 414A (block 626 in Figure 6B). The fabrication stage 1300 includes forming a second P-type epi-S/D 416P on the P-type region 402P on the second side of the dummy gate 408, the second P-type epi-S/D 416P extending above the isolation region 406 on a second side of the second isolation wall 414B and isolated from the second N-type epi-S/D 416N by the second isolation wall 414B (block 628 in Figure 6B).[0084] Figures 13A-13C correspond to Figures 5A-5C illustrating the FinFET circuit 400 including the gate cut 500. Figure 13 A is a top view of the FinFET circuit 400 in Figure 12A with the epi-S/Ds 412N and 416N formed on the fins 410N in the N-type region 402N and epi-S/Ds 412P and 416P formed on the fins 410P in the P-type region 402P. Figure 13B is a cross-sectional side view at line A-A’ of the FinFET circuit 400 in Figure 13A showing that growth of the epi-S/Ds 412P and 412N in a horizontal direction above the isolation region 406 is limited by the isolation wall 414A. In this manner, creation of short defects due to process variations is prevented. Figure 13C is a cross-sectional side view at line B-B’ of the FinFET circuit 400 in Figure 13A showing the gate cut 500 disposed at an end of the dummy gate 408.[0085] Figure 14A is a top view of an exemplary GAA circuit 1400, which is another example of a complementary cell circuit as disclosed herein. In Figure 14A, the GAA circuit 1400 includes a P-type region 1402P and an N-type region 1402N of a
semiconductor substrate 1404. The P-type region 1402P and the N-type region 1402N each extend in a Y-axis direction on opposite sides of an isolation region 1406. The GAA circuit 1400 also includes a dummy gate 1408 extending longitudinally in the X-axis direction.[0086] Figure 14B is a cross-sectional side view at line A-A’ of the GAA circuit 1400 in Figure 14A. The N-type region 1402N includes nanosheets 1410N with an epi-S/D 1412N formed on and around the nanosheets 1410N. The P-type region 1402P includes nanosheets 1410P with an epi-S/D 1412P formed on and around the nanosheets 1410P. In another example of a GAA circuit according to the present disclosure, the nanosheets 1410P and 1410N could alternatively be nanoslabs, nanowires, or other GAA structures, as known in the art. Figure 14B shows that an isolation wall 1414A is formed between the P-type region 1402P and the N-type region 1402N to limit growth of the epi-S/D 1412N in a horizontal direction above the isolation region 1406, and limit growth of the epi-S/D 1412P in a horizontal direction above the isolation region 1406. In this manner, short defects may be prevented in the GAA circuit 1400.[0087] In the GAA circuit 1400, the N-type epi-S/D 1412N is formed on at least one N-type GAA structure extending longitudinally in the Y-axis direction, and the P-type epi-S/D 1412P is formed on at least one P-type GAA structure extending longitudinally in the Y-axis direction. The N-type GAA structures and the P-type GAA structures are the nanosheets 1410N and 1410P, respectively, as shown in Figure 14B, but may also be other GAA structures (e.g., nanoslabs, nanowires, etc.). Thus, forming the P-type region 1402P in Figures 14A and 14B includes forming at least one P-type GAA structure extending longitudinally in a direction substantially parallel to the semiconductor substrate 1404, and forming the N-type region 1402N includes forming at least one N- type GAA structure extending longitudinally in a direction substantially parallel to the semiconductor substrate 1404.[0088] In another example, a complementary cell circuit (not shown) employing planar transistors can be fabricated without short defects between P-type and N-type regions by including an isolation wall in an isolation region. In such complementary cell circuit, a planar N-type transistor comprises a first N-type epi-S/D and a second N-type epi-S/D, and a planar P-type transistor comprises a first P-type epi-S/D and a second P- type epi-S/D. In the fabrication of such complementary cell circuit, forming a P-type region further comprises forming a P-type planar region on a semiconductor substrate,
and forming an N-type region further comprises forming an N-type planar region on the substrate.[0089] A complementary cell circuit including isolation walls formed between an N- type region and a P-type region to limit growth of P-type epi-S/Ds and N-type epi-S/Ds in a horizontal direction above an isolation region between the N-type region and the P- type region to prevent short defects resulting from process variations in a process for forming epitaxial layers, as illustrated in any of Figures 4A-4C, 5A-5C, 13A-13C, and 14A-14B according to any aspects disclosed herein, may be provided in or integrated into any processor-based device. Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a global positioning system (GPS) device, a mobile phone, a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a tablet, a phablet, a server, a computer, a portable computer, a mobile computing device, a wearable computing device (e.g., a smart watch, a health or fitness tracker, eyewear, etc.), a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, an automobile, a vehicle component, avionics systems, a drone, and a multicopter.[0090] In this regard, Figure 15 illustrates an example of a processor-based system 1500 including a complementary cell circuit including isolation walls formed between an N-type region and a P-type region to limit growth of P-type epi-S/Ds and N-type epi-S/Ds in a horizontal direction above an isolation region between the N-type region and the P- type region to prevent short defects resulting from process variations in a process for forming epitaxial layers, as illustrated in any of Figures 4A-4C, 5A-5C, 13A-13C, and 14A-14B, and according to any aspects disclosed herein. In this example, the processor- based system 1500 includes one or more central processor units (CPUs) 1502, which may also be referred to as CPU or processor cores, each including one or more processors 1504. The CPU(s) 1502 may have cache memory 1506 coupled to the processor(s) 1504 for rapid access to temporarily stored data. As an example, the processor(s) 1504 could include a complementary cell circuit including isolation walls formed between an N-type region and a P-type region to limit growth of P-type epi-S/Ds and N-type epi-S/Ds in a horizontal direction above an isolation region between the N-type region and the P-type
region to prevent short defects resulting from process variations in a process for forming epitaxial layers, as illustrated in any of Figures 4A-4C, 5A-5C, 13A-13C, and 14A-14B, and according to any aspects disclosed herein. The CPU(s) 1502 is coupled to a system bus 1508 and can intercouple master and slave devices included in the processor-based system 1500. As is well known, the CPU(s) 1502 communicates with these other devices by exchanging address, control, and data information over the system bus 1508. For example, the CPU(s) 1502 can communicate bus transaction requests to a memory controller 1510 as an example of a slave device. Although not illustrated in Figure 15, multiple system buses 1508 could be provided, wherein each system bus 1508 constitutes a different fabric.[0091] Other master and slave devices can be connected to the system bus 1508. As illustrated in Figure 15, these devices can include a memory system 1512 that includes the memory controller 1510 and one or more memory arrays 1514, one or more input devices 1516, one or more output devices 1518, one or more network interface devices 1520, and one or more display controllers 1522, as examples. Each of the memory system 1512, the one or more input devices 1516, the one or more output devices 1518, the one or more network interface devices 1520, and the one or more display controllers 1522 can include a complementary cell circuit including isolation walls formed between an N-type region and a P-type region to limit growth of P-type epi-S/Ds and N-type epi-S/Ds in a horizontal direction above an isolation region between the N-type region and the P-type region to prevent short defects resulting from process variations in a process for forming epitaxial layers, as illustrated in any of Figures 4A-4C, 5A-5C, 13A-13C, and 14A-14B, and according to any aspects disclosed herein. The input device(s) 1516 can include any type of input device, including, but not limited to, input keys, switches, voice processors, etc. The output device(s) 1518 can include any type of output device, including, but not limited to, audio, video, other visual indicators, etc. The network interface device(s) 1520 can be any device configured to allow exchange of data to and from a network 1524. The network 1524 can be any type of network, including, but not limited to, a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a BLUETOOTH™ network, and the Internet. The network interface device(s) 1520 can be configured to support any type of communications protocol desired.
[0092] The CPU(s) 1502 may also be configured to access the display controller(s) 1522 over the system bus 1508 to control information sent to one or more displays 1526. The display controller(s) 1522 sends information to the display(s) 1526 to be displayed via one or more video processors 1528, which process the information to be displayed into a format suitable for the display(s) 1526. The display(s) 1526 can include any type of display, including, but not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, a light emitting diode (LED) display, etc. The display controller(s) 1522, display(s) 1526, and/or the video processor(s) 1528 can include a complementary cell circuit including isolation walls formed between an N-type region and a P-type region to limit growth of P-type epi-S/Ds and N-type epi-S/Ds in a horizontal direction above an isolation region between the N-type region and the P-type region to prevent short defects resulting from process variations in a process for forming epitaxial layers, as illustrated in any of Figures 4A-4C, 5A-5C, 13A-13C, and 14A-14B, and according to any aspects disclosed herein.[0093] Figure 16 illustrates an exemplary wireless communications device 1600 that includes radio frequency (RF) components formed from an IC 1602, wherein any of the components therein can include a complementary cell circuit including isolation walls formed between an N-type region and a P-type region to limit growth of P-type epi-S/Ds and N-type epi-S/Ds in a horizontal direction above an isolation region between the N- type region and the P-type region to prevent short defects resulting from process variations in a process for forming epitaxial layers, as illustrated in any of Figures 4A- 4C, 5A-5C, 13A-13C, and 14A-14B, and according to any aspects disclosed herein. The wireless communications device 1600 may include or be provided in any of the above- referenced devices, as examples. As shown in Figure 16, the wireless communications device 1600 includes a transceiver 1604 and a data processor 1606. The data processor 1606 may include a memory to store data and program codes. The transceiver 1604 includes a transmitter 1608 and a receiver 1610 that support bi-directional communications. In general, the wireless communications device 1600 may include any number of transmitters 1608 and/or receivers 1610 for any number of communication systems and frequency bands. All or a portion of the transceiver 1604 may be implemented on one or more analog ICs, RF ICs (RFICs), mixed-signal ICs, etc.[0094] The transmitter 1608 or the receiver 1610 may be implemented with a super heterodyne architecture or a direct-conversion architecture. In the super-heterodyne
architecture, a signal is frequency-converted between RF and baseband in multiple stages, e.g., from RF to an intermediate frequency (IF) in one stage, and then from IF to baseband in another stage for the receiver 1610. In the direct-conversion architecture, a signal is frequency-converted between RF and baseband in one stage. The super-heterodyne and direct-conversion architectures may use different circuit blocks and/or have different requirements. In the wireless communications device 1600 in Figure 16, the transmitter 1608 and the receiver 1610 are implemented with the direct-conversion architecture. [0095] In the transmit path, the data processor 1606 processes data to be transmitted and provides I and Q analog output signals to the transmitter 1608. In the exemplary wireless communications device 1600, the data processor 1606 includes digital-to-analog converters (DACs) 1612(1), 1612(2) for converting digital signals generated by the data processor 1606 into the I and Q analog output signals, e.g., I and Q output currents, for further processing.[0096] Within the transmitter 1608, lowpass filters 1614(1), 1614(2) filter the I and Q analog output signals, respectively, to remove undesired signals caused by the prior digital-to-analog conversion. Amplifiers (AMPs) 1616(1), 1616(2) amplify the signals from the lowpass filters 1614(1), 1614(2), respectively, and provide I and Q baseband signals. An upconverter 1618 upconverts the I and Q baseband signals with I and Q transmit (TX) local oscillator (LO) signals through mixers 1620(1), 1620(2) from a TX LO signal generator 1622 to provide an upconverted signal 1624. A filter 1626 filters the upconverted signal 1624 to remove undesired signals caused by the frequency upconversion as well as noise in a receive frequency band. A power amplifier (PA) 1628 amplifies the upconverted signal 1624 from the filter 1626 to obtain the desired output power level and provides a transmitted RF signal. The transmitted RF signal is routed through a duplexer or switch 1630 and transmitted via an antenna 1632.[0097] In the receive path, the antenna 1632 receives signals transmitted by base stations and provides a received RF signal, which is routed through the duplexer or switch 1630 and provided to a low noise amplifier (LNA) 1634. The duplexer or switch 1630 is designed to operate with a specific receive (RX)-to-TX duplexer frequency separation, such that RX signals are isolated from TX signals. The received RF signal is amplified by the LNA 1634 and filtered by a filter 1636 to obtain a desired RF input signal. Downconversion mixers 1638(1), 1638(2) mix the output of the filter 1636 with I and Q RX LO signals (i.e., LO_I and LO_Q) from an RX LO signal generator 1640 to generate
I and Q baseband signals. The I and Q baseband signals are amplified by amplifiers (AMPs) 1642(1), 1642(2) and further filtered by lowpass filters 1644(1), 1644(2) to obtain I and Q analog input signals, which are provided to the data processor 1606. In this example, the data processor 1606 includes Analog to Digital Converters (ADCs) 1646(1), 1646(2) for converting the analog input signals into digital signals to be further processed by the data processor 1606.[0098] In the wireless communications device 1600 of Figure 16, the TX LO signal generator 1622 generates the I and Q TX LO signals used for frequency upconversion, while the RX LO signal generator 1640 generates the I and Q RX LO signals used for frequency downconversion. Each LO signal is a periodic signal with a particular fundamental frequency. A TX phase-locked loop (PLL) circuit 1648 receives timing information from the data processor 1606 and generates a control signal used to adjust the frequency and/or phase of the TX LO signals from the TX LO signal generator 1622. Similarly, an RX PLL circuit 1650 receives timing information from the data processor 1606 and generates a control signal used to adjust the frequency and/or phase of the RX LO signals from the RX LO signal generator 1640.[0099] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer readable medium and executed by a processor or other processing device, or combinations of both. The master and slave devices described herein may be employed in any circuit, hardware component, IC, or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0100] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit
(ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0101] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.[0102] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flowchart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
[0103] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
A touch sensor capable of detecting multiple touches thereto is coupled with a digital device having multi-touch decoding capabilities. These multi-touch decoding capabilities comprise touch data acquisition, touch identification, touch tracking and processed touch data output to a device associated with the touch sensor. Touch identification comprises touch location(s) peak detection, touch location(s) nudging and touch location(s) interpolation. Touch data acquisition locates potential touches on the touch sensor. Peak detection identifies where potential touch locations are on the touch sensor. Once a potential touch location(s) has been identified, touch location nudging examines each adjacent location thereto and interpolation examines the adjacent touch location values to generate a higher resolution location of the touch. Touch tracking compares time sequential "frames" of touch identification data and then determines which touches are associated between frames for further processing, e.g., determining gesturing actions. |
CLAIMS What is claimed is: 1. A method for decoding multiple touches on a touch sensing surface, said method comprising the steps of: scanning a plurality of channels aligned on an axis for determining self values of the channels; comparing the at least one self value to determine which one of the channels is a local maximum self value; scanning a plurality of nodes of the at least one channel that has the local maximum self value for determining mutual values of the nodes; and comparing the mutual values to determine which one of the nodes has the largest mutual value, wherein the node having the largest mutual value on the local maximum self value channel is a potential touch location. 2. The method according to claim 1, further comprising the steps of: determining if at least one of the self values is greater than a self touch threshold, wherein if yes then continue to the step of scanning a plurality of nodes of the at least one channel having the largest self value, and if no then end a touch detection frame as completed. 3. The method according to claim 1, further comprising the step of: determining left and right slope values for the at least one self value, wherein: the left slope value is equal to the at least one self value minus a self value of a channel to the left of the at least one channel, and the right slope value is equal to the at least one self value minus a self value of a channel to the right of the at least one channel. 4. The method according to claim 3, further comprising the steps of: determining if the left slope value is greater than zero (0) and the right slope value is less than zero (0), wherein if yes then return to the step of scanning the plurality of nodes of the at least one channel, and if no then continue to next step; determining if the left slope value is greater than zero (0) and greater than the right slope value, wherein if yes then return to the step of scanning the plurality of nodes of the at least one channel, and if no then continue to next step; determining if the left slope value is less than zero (0) and greater than a percentage of the right slope value, wherein if yes then return to the step of scanning the plurality of nodes of the at least one channel, and if no then continue to next step; determining if there is another self value, wherein if yes then return to the step of determining if at least one of the self values is greater than the self touch threshold value using the another self value, and if no then end a touch detection frame as completed. 5. The method according to claim 2, further comprising the steps of: determining if at least one of the mutual values is greater than a mutual touch threshold, wherein if yes then continue to the step of scanning a plurality of nodes of the at least one channel having the largest self value, and if no then end the touch detection frame as completed. 6. The method according to claim 5, further comprising the steps of: determining a next slope value, wherein the next slope value is equal to a current mutual value minus a next mutual value of a next node; and determining a previous slope value, wherein the previous slope value is equal to the current mutual value minus a previous mutual value of a previous node. 7. 
The method according to claim 6, further comprising the steps of: determining if the next slope value is less than zero (0) and the previous slope value is greater than zero (0), wherein if yes then begin the step of validating the node, and if no then continue to next step; determining if the next slope value is greater than zero (0) and less than a percentage of the previous slope value, wherein if yes then begin the step of validating the node, and if no then continue to next step; determining if the next slope value is less than zero (0) and greater than the previous slope value, wherein if yes then begin the step of validating the node, and if no then continue to next step; determining if there is another mutual value, wherein if yes then return to the step of determining if at least one of the mutual values is greater than the mutual touch threshold, and if no then continue to the next step; and determining if there is another self value, wherein if yes then examine another self value and return to the step of determining if at least one of the self values is greater than a self touch threshold, and if no then end the touch detection frame as completed. The method according to claim 7, wherein the step of validating the node the steps of: identifying the node having a local maximum mutual value as a current node; determining if there is a valid node north of the current node, wherein if no then continue to the step of determining if there is a valid node south of the current node, and if yes then perform a mutual measurement on the north node and continue to the next step; determining if the north node is greater then the current node, if yes then make the north node the current node and continue to the step of determining whether a touch point already exists at this node, and if no then continue to the next step; determining if there is a valid node south of the current node, wherein if no then continue to the step of determining if there is a valid node east of the current node, and if yes then perform a mutual measurement on the south node and continue to the next step; determining if the south node is greater then the current node, wherein if yes then make the south node the current node and continue to the step of determining whether a touch point already exists at this node, and if no then continue to the next step; determining if there is a valid node east of the current node, wherein if no then continue to the step of determining if there is a valid node west of the current node, and if yes then perform a mutual measurement on the east node and continue to the next step; determining if the east node is greater then the current node, if yes then make the east node the current node and continue to the step of determining whether a touch point already exists at this node, and if no then continue to the next step; determining if there is a valid node west of the current node, wherein if no then continue to the step of determining if there is a valid node left of the current node, and if yes then perform a mutual measurement on the west node and continue to the next step; determining if the west node is greater then the current node, if yes then make the west node the current node and continue to the step of determining whether a touch point already exists at this node, and if no then continue to the next step; determining if there is a valid node left of the current node, wherein if no then define a left mutual value as a center mutual value minus a right mutual value and 
continue to the step of determining a fine position for the node, and if yes then perform a mutual measurement on the left node and continue to the next step; determining if there is a valid node right of the current node, wherein if no then define the mutual value as the center mutual value minus the left mutual value and continue to the step of determining the fine position for the node, and if yes then perform a mutual measurement on the right node and continue to the next step; defining a fine position of the node by subtracting the left value from the right value, dividing this difference by the center value and multiplying the result thereof by 64 and continue to the next step; and determining whether interpolation was performed for each axis, wherein if yes, then add another touch point to a list of all detected touch points and return to the step of determining if there are additional mutual values, and if no, then interpolate an other axis by using left and right nodes of the other axis for starting again at the step of determining if there is a valid node left of the current node. 9. A method for tracking previously found and current touch locations on a touch sensing surface, said method comprising the steps of: determining if there is at least one current touch location, wherein if yes then select one of the current touch locations, and if no then continue to next step; determining if there is at least one previous touch location, wherein if no then end tracking, and if yes then select one of the previous touch locations and continue to next step; determining if the previous touch location is associated with the current touch location, wherein if no then a touch is no longer present at the previous touch location, stop tracking that previous touch location and continue to the step of determining if there is at least one more previous touch location, and if yes continue to the next step; and determining if there is at least one more previous touch location, wherein if yes then select a next previous touch location and continue to the step of determining if the previous touch location is associated with the current touch location using the next previous touch location for the previous touch location, and if no then output touch locations tracked. 10. 
The method according to claim 9, wherein the step of selecting one of the current touch locations further comprises the steps of: determining if there is at least one previous touch location, wherein if no then new touch to track at current touch location and continue to the step of determining if there is at least one more current touch location, and if yes then set a temporary weight value to a maximum weight value, select a previous touch location and continue to next step; measuring a distance between the selected current touch location and the selected previous touch location, use this distance as a current weight value for determining pairing of the selected current touch location and the previous touch location, and continue to next step; determining if the current weight value is less than the temporary weight value, wherein if no then continue to the step of determining if there is at least one more previous touch location, and if yes then set the temporary weight value to the current weight value, record the selected previous touch location as a temporary touch location and continue to next step; determining if there is at least one more previous touch location, wherein if yes then select the next previous touch location and return to the step of measuring the distance between the selected current touch location and the selected previous touch location, and if no then continue to next step; determining if the temporary location is already assigned to a different current location, wherein, if yes then calculate a next worst weight value for the current location and for an assigned current location then continue to the step of determining if the next worst weight value for the current location is less than the next worst weight value for the assigned location, and if no then continue to next step; determining if the weight value is below a maximum association threshold, wherein if yes then assigning the temporary location to the current location and continue to the step of determining if there is at least one more current touch location, and if no then a new touch location is identified for tracking thereof and continue to next step; determining if there is at least one more current touch location, wherein if no then return to the step of determining if there is at least one other previous touch locations, and if yes then select a next current touch location and return to the step of determining if there is at least one previous touch location; determining if the next worst weight value for the current location is less than the next worst weight value for the assigned location, wherein if yes then setting the temporary location to the next worst location and returning to the step of determining if there is at least one more current touch location, and if no then setting the assigned location to the next worst weight value, selecting a moved assignment location and returning to the step of determining if there is at least one previous touch location. 11. 
A method for caching mutual touch values of a plurality of touch columns, said method comprising the steps of: receiving a mutual scan location request; determining if a cache memory contains scan data of the requested mutual scan location, wherein if yes then continue to the step of determining if the scan data is valid, and if no then continue to next step; determining if the requested mutual scan location is beyond a right edge of the cache memory, wherein if yes then de-allocate the scan data in a left-most column of the cache memory, allocate the de-allocated scan data to a right edge of the cache memory and invalidate values thereof, and if no then de-allocate the scan data in a right-most column of the cache memory, allocate the de-allocated scan data to a left edge of the cache memory and invalidate values thereof; determining if the scan data is valid, wherein if yes then return the requested the scan data for further processing thereto, and if no then perform a mutual scan at the requested location, place the resulting scan data in the cache memory and return the requested scan data for further processing thereto. 12. A system for decoding multiple touches using the method according to claim 1 , said system comprising: a first plurality of electrodes arranged in a parallel orientation having a first axis, wherein each of the first plurality of touch electrodes has a self capacitance; a second plurality of electrodes arranged in a parallel orientation having a second axis substantially perpendicular to the first axis, the first plurality of electrodes are located over the second plurality of electrodes to form a touch matrix wherein each of overlapping intersection of the first and second plurality of electrodes has a mutual capacitance; the self capacitance is measured for each of the first plurality of electrodes to produce respective self values; the mutual capacitance is measured for each of the overlapping intersections of the first and second plurality of electrodes to produce respective mutual values; the self and mutual capacitances are measured by an analog front end of a microcontroller; the self and mutual values are stored in a memory of the microcontroller; and a digital processor in the microcontroller uses the self and mutual values in determining at least one location of at least one touch per touch acquisition frame, and tracks changing locations of the at least one touch in subsequent touch acquisition frames. |
METHOD AND SYSTEM FOR MULTI-TOUCH DECODING RELATED PATENT APPLICATION This application claims priority to commonly owned United States Provisional Patent Application Serial Number 61/617,831; filed March 30, 2012; which is hereby incorporated by reference herein for all purposes. TECHNICAL FIELD The present disclosure relates to decoding of capacitive touch sensing, in particular, multi-touch decoding. BACKGROUND Human interface devices include touch control systems that are based on touch sensing surfaces, e.g., pads, screens, etc., using capacitive sensors that change capacitance values when touched. Transforming the touch(es) on the touch sensor into one or more touch locations is non-trivial. Tracking one or more touches on the touch sensor is also challenging. Advanced touch control systems are capable of detecting not only a single touch and/or movement on a touch sensing surface such as a touch screen, but also so-called multi-touch scenarios in which a user touches more than one location and/or moves more than one finger over the respective touch sensing surface, e.g., gesturing. A key challenge of multi-touch systems is the limited processing speed of low-cost systems, such as, for example but not limited to, 8-bit microcontroller architectures, as these architectures may be unable to do the advanced math required for processing the respective signals generated by the touch sensing device. Touch scanning performance may also be limited; for example, the system may be unable to reasonably sample the entire plane of the touch sensor or screen every "frame." Other challenges include having enough program memory space to provide for touch location determination programs that are concise, modular and general purpose. Limited random access memory (RAM) space may make the touch determination system unable to store multiple entire "images" of the touch detection and location(s) thereof simultaneously. Hence, there exists a need to improve and simplify touch determination methods. Conventional solutions were threshold based and required complex computations; there is therefore a need for touch determination methods that are more robust and less computation intensive. Furthermore, there exists a need for high quality multi-touch decoding, in particular, a method and/or system that can be implemented with, for example but not limited to, a low-cost 8-bit microcontroller architecture. SUMMARY The aforementioned problems are solved, and other and further benefits achieved by the multi-touch decoding method and system disclosed herein. According to an embodiment, a method for decoding multiple touches on a touch sensing surface may comprise the steps of: scanning a plurality of channels aligned on an axis for determining self values of the channels; comparing the at least one self value to determine which one of the channels may be a local maximum self value; scanning a plurality of nodes of the at least one channel that may have the local maximum self value for determining mutual values of the nodes; and comparing the mutual values to determine which one of the nodes may have the largest mutual value, wherein the node having the largest mutual value on the local maximum self value channel may be a potential touch location. 
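As a non-limiting illustration of the embodiment above, the following C-language sketch shows one way the self-then-mutual scan sequence might be organized on a small microcontroller. The function names read_self_value and read_mutual_value, the NUM_CHANNELS and NUM_NODES dimensions and the threshold constants are hypothetical placeholders assumed for the sketch and are not part of any particular device library; for brevity the sketch uses the single largest self value, whereas the embodiments above may also consider additional local maxima.

#include <stdint.h>
#include <stdbool.h>

#define NUM_CHANNELS        12   /* channels on the self-scanned axis (assumed) */
#define NUM_NODES            9   /* nodes per channel on the other axis (assumed) */
#define SELF_TOUCH_THRESH   50   /* example thresholds, tuned per sensor */
#define MUTUAL_TOUCH_THRESH 30

/* Hypothetical acquisition hooks supplied by the analog front end driver. */
extern int16_t read_self_value(uint8_t channel);
extern int16_t read_mutual_value(uint8_t channel, uint8_t node);

/* Returns true and reports a potential touch location when the channel with
 * the largest self value also has a node whose mutual value exceeds the
 * mutual touch threshold. */
bool find_potential_touch(uint8_t *channel_out, uint8_t *node_out)
{
    int16_t self[NUM_CHANNELS];
    uint8_t max_ch = 0;

    /* Self scan of every channel on one axis. */
    for (uint8_t ch = 0; ch < NUM_CHANNELS; ch++) {
        self[ch] = read_self_value(ch);
        if (self[ch] > self[max_ch])
            max_ch = ch;
    }
    if (self[max_ch] <= SELF_TOUCH_THRESH)
        return false;                          /* no touch this frame */

    /* Mutual scan of only the nodes on the local-maximum channel. */
    int16_t best = 0;
    uint8_t best_node = 0;
    for (uint8_t n = 0; n < NUM_NODES; n++) {
        int16_t m = read_mutual_value(max_ch, n);
        if (m > best) {
            best = m;
            best_node = n;
        }
    }
    if (best <= MUTUAL_TOUCH_THRESH)
        return false;

    *channel_out = max_ch;
    *node_out = best_node;                     /* potential touch location */
    return true;
}

Because only the channel selected by the self scan is scanned for mutual values, the sketch preserves the scan-count savings that motivate the self-then-mutual ordering.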
According to a further embodiment, the method may comprise the steps of: determining if at least one of the self values may be greater than a self touch threshold, wherein if yes then continue to the step of scanning a plurality of nodes of the at least one channel having the largest self value, and if no then end a touch detection frame as completed. According to a further embodiment, the method may comprise the steps of: determining left and right slope values for the at least one self value, wherein: the left slope value may be equal to the at least one self value minus a self value of a channel to the left of the at least one channel, and the right slope value may be equal to the at least one self value minus a self value of a channel to the right of the at least one channel. According to a further embodiment, the method may comprise the steps of: determining if the left slope value may be greater than zero (0) and the right slope value may be less than zero (0), wherein if yes then return to the step of scanning the plurality of nodes of the at least one channel, and if no then continue to next step; determining if the left slope value may be greater than zero (0) and greater than the right slope value, wherein if yes then return to the step of scanning the plurality of nodes of the at least one channel, and if no then continue to next step; determining if the left slope value may be less than zero (0) and greater than a percentage of the right slope value, wherein if yes then return to the step of scanning the plurality of nodes of the at least one channel, and if no then continue to next step; determining if there may be another self value, wherein if yes then return to the step of determining if at least one of the self values may be greater than the self touch threshold value using the another self value, and if no then end a touch detection frame as completed. According to a further embodiment, the method may comprise the steps of: determining if at least one of the mutual values may be greater than a mutual touch threshold, wherein if yes then continue to the step of scanning a plurality of nodes of the at least one channel having the largest self value, and if no then end the touch detection frame as completed. According to a further embodiment, the method may comprise the steps of: determining a next slope value, wherein the next slope value may be equal to a current mutual value minus a next mutual value of a next node; and determining a previous slope value, wherein the previous slope value may be equal to the current mutual value minus a previous mutual value of a previous node. 
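The slope values defined in the embodiments above reduce to a few subtractions per channel. The C sketch below is only one possible reading of the slope-ratio ("fuzzy peak") test; the 2:1 RATIO constant is illustrative, the acceptance conditions are simplified relative to the steps recited above, and channels beyond the array edges are assumed to read as zero.

#include <stdint.h>
#include <stdbool.h>

/* Left slope: the channel's own self value minus its left neighbour.  A
 * missing neighbour at the array edge is treated as zero in this sketch. */
static int16_t slope_left(const int16_t *self, uint8_t ch)
{
    return self[ch] - ((ch > 0) ? self[ch - 1] : 0);
}

/* Right slope: the channel's own self value minus its right neighbour. */
static int16_t slope_right(const int16_t *self, uint8_t ch, uint8_t n_channels)
{
    return self[ch] - ((ch + 1 < n_channels) ? self[ch + 1] : 0);
}

#define RATIO 2   /* illustrative 2:1 slope ratio for a "virtual" peak */

/* "Fuzzy" peak test: a classic peak rises on the left and falls to the right
 * (both slopes positive), but a virtual peak is also accepted when one slope
 * dominates the other by at least RATIO, so that shoulders produced by a flat
 * finger are not missed.  One possible interpretation, not the only one. */
bool is_fuzzy_peak(const int16_t *self, uint8_t ch, uint8_t n_channels)
{
    int16_t ls = slope_left(self, ch);
    int16_t rs = slope_right(self, ch, n_channels);

    if (ls > 0 && rs > 0)
        return true;                      /* classic local maximum */
    if (ls > 0 && ls >= RATIO * -rs)
        return true;                      /* left slope dominates */
    if (rs > 0 && rs >= RATIO * -ls)
        return true;                      /* right slope dominates */
    return false;
}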
According to a further embodiment, the method may comprise the steps of: determining if the next slope value may be less than zero (0) and the previous slope value may be greater than zero (0), wherein if yes then begin the step of validating the node, and if no then continue to next step; determining if the next slope value may be greater than zero (0) and less than a percentage of the previous slope value, wherein if yes then begin the step of validating the node, and if no then continue to next step; determining if the next slope value may be less than zero (0) and greater than the previous slope value, wherein if yes then begin the step of validating the node, and if no then continue to next step; determining if there may be another mutual value, wherein if yes then return to the step of determining if at least one of the mutual values may be greater than the mutual touch threshold, and if no then continue to the next step; and determining if there may be another self value, wherein if yes then examine another self value and return to the step of determining if at least one of the self values may be greater than a self touch threshold, and if no then end the touch detection frame as completed. According to a further embodiment of the method, the step of validating the node may comprise the steps of: identifying the node having a local maximum mutual value as a current node; determining if there may be a valid node north of the current node, wherein if no then continue to the step of determining if there may be a valid node south of the current node, and if yes then perform a mutual measurement on the north node and continue to the next step; determining if the north node may be greater then the current node, if yes then make the north node the current node and continue to the step of determining whether a touch point already exists at this node, and if no then continue to the next step; determining if there may be a valid node south of the current node, wherein if no then continue to the step of determining if there may be a valid node east of the current node, and if yes then perform a mutual measurement on the south node and continue to the next step; determining if the south node may be greater then the current node, wherein if yes then make the south node the current node and continue to the step of determining whether a touch point already exists at this node, and if no then continue to the next step; determining if there may be a valid node east of the current node, wherein if no then continue to the step of determining if there may be a valid node west of the current node, and if yes then perform a mutual measurement on the east node and continue to the next step; determining if the east node may be greater then the current node, if yes then make the east node the current node and continue to the step of determining whether a touch point already exists at this node, and if no then continue to the next step; determining if there may be a valid node west of the current node, wherein if no then continue to the step of determining if there may be a valid node left of the current node, and if yes then perform a mutual measurement on the west node and continue to the next step; determining if the west node may be greater then the current node, if yes then make the west node the current node and continue to the step of determining whether a touch point already exists at this node, and if no then continue to the next step; determining if there may be a valid node left of the current 
node, wherein if no then define a left mutual value as a center mutual value minus a right mutual value and continue to the step of determining a fine position for the node, and if yes then perform a mutual measurement on the left node and continue to the next step; determining if there may be a valid node right of the current node, wherein if no then define the mutual value as the center mutual value minus the left mutual value and continue to the step of determining the fine position for the node, and if yes then perform a mutual measurement on the right node and continue to the next step; defining a fine position of the node by subtracting the left value from the right value, dividing this difference by the center value and multiplying the result thereof by 64 and continue to the next step; and determining whether interpolation was performed for each axis, wherein if yes, then add another touch point to a list of all detected touch points and return to the step of determining if there may be additional mutual values, and if no, then interpolate an other axis by using left and right nodes of the other axis for starting again at the step of determining if there may be a valid node left of the current node. According to another embodiment, a method for tracking previously found and current touch locations on a touch sensing surface may comprise the steps of: determining if there may be at least one current touch location, wherein if yes then select one of the current touch locations, and if no then continue to next step; determining if there may be at least one previous touch location, wherein if no then end tracking, and if yes then select one of the previous touch locations and continue to next step; determining if the previous touch location may be associated with the current touch location, wherein if no then a touch may be no longer present at the previous touch location, stop tracking that previous touch location and continue to the step of determining if there may be at least one more previous touch location, and if yes continue to the next step; and determining if there may be at least one more previous touch location, wherein if yes then select a next previous touch location and continue to the step of determining if the previous touch location may be associated with the current touch location using the next previous touch location for the previous touch location, and if no then output touch locations tracked. 
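The fine-position interpolation recited in the node-validating embodiment above (subtract the left value from the right value, divide the difference by the center value and multiply by 64) can be expressed as a single fixed-point helper. The sketch below assumes signed 16-bit, baselined mutual values and a hypothetical function name; the edge-of-sensor substitution of the center value minus the available neighbour, as described above, would be applied before calling it.

#include <stdint.h>

/* Fine (sub-node) position along one axis: (right - left) / center, scaled by
 * 64 so that the fraction survives integer arithmetic on an 8-bit core.  The
 * multiply is done before the divide to preserve resolution; 0 places the
 * touch at the node centre, and the +/-64 full-node scale is the convention
 * assumed by this sketch. */
int16_t interpolate_axis(int16_t left, int16_t center, int16_t right)
{
    if (center == 0)
        return 0;                          /* guard against divide-by-zero */
    return (int16_t)(((int32_t)(right - left) * 64) / center);
}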
According to a further embodiment of the method, the step of selecting one of the current touch locations may comprises the steps of: determining if there may be at least one previous touch location, wherein if no then new touch to track at current touch location and continue to the step of determining if there may be at least one more current touch location, and if yes then set a temporary weight value to a maximum weight value, select a previous touch location and continue to next step; measuring a distance between the selected current touch location and the selected previous touch location, use this distance as a current weight value for determining pairing of the selected current touch location and the previous touch location, and continue to next step; determining if the current weight value may be less than the temporary weight value, wherein if no then continue to the step of determining if there may be at least one more previous touch location, and if yes then set the temporary weight value to the current weight value, record the selected previous touch location as a temporary touch location and continue to next step; determining if there may be at least one more previous touch location, wherein if yes then select the next previous touch location and return to the step of measuring the distance between the selected current touch location and the selected previous touch location, and if no then continue to next step; determining if the temporary location may be already assigned to a different current location, wherein, if yes then calculate a next worst weight value for the current location and for an assigned current location then continue to the step of determining if the next worst weight value for the current location may be less than the next worst weight value for the assigned location, and if no then continue to next step; determining if the weight value may be below a maximum association threshold, wherein if yes then assigning the temporary location to the current location and continue to the step of determining if there may be at least one more current touch location, and if no then a new touch location may be identified for tracking thereof and continue to next step; determining if there may be at least one more current touch location, wherein if no then return to the step of determining if there may be at least one other previous touch locations, and if yes then select a next current touch location and return to the step of determining if there may be at least one previous touch location; determining if the next worst weight value for the current location may be less than the next worst weight value for the assigned location, wherein if yes then setting the temporary location to the next worst location and returning to the step of determining if there may be at least one more current touch location, and if no then setting the assigned location to the next worst weight value, selecting a moved assignment location and returning to the step of determining if there may be at least one previous touch location. 
According to yet another embodiment, a method for caching mutual touch values of a plurality of touch columns may comprise the steps of: receiving a mutual scan location request; determining if a cache memory contains scan data of the requested mutual scan location, wherein if yes then continue to the step of determining if the scan data may be valid, and if no then continue to next step; determining if the requested mutual scan location may be beyond a right edge of the cache memory, wherein if yes then de-allocate the scan data in a left-most column of the cache memory, allocate the de-allocated scan data to a right edge of the cache memory and invalidate values thereof, and if no then de-allocate the scan data in a right-most column of the cache memory, allocate the de-allocated scan data to a left edge of the cache memory and invalidate values thereof; determining if the scan data may be valid, wherein if yes then return the requested scan data for further processing thereto, and if no then perform a mutual scan at the requested location, place the resulting scan data in the cache memory and return the requested scan data for further processing thereto. According to still another embodiment, a system for decoding multiple touches according to the methods claimed herein may comprise: a first plurality of electrodes arranged in a parallel orientation having a first axis, wherein each of the first plurality of touch electrodes may have a self capacitance; a second plurality of electrodes arranged in a parallel orientation having a second axis substantially perpendicular to the first axis, the first plurality of electrodes may be located over the second plurality of electrodes to form a touch matrix wherein each overlapping intersection of the first and second plurality of electrodes may have a mutual capacitance; the self capacitance may be measured for each of the first plurality of electrodes to produce respective self values; the mutual capacitance may be measured for each of the overlapping intersections of the first and second plurality of electrodes to produce respective mutual values; the self and mutual capacitances may be measured by an analog front end of a microcontroller; the self and mutual values may be stored in a memory of the microcontroller; and a digital processor in the microcontroller uses the self and mutual values in determining at least one location of at least one touch per touch acquisition frame, and tracks changing locations of the at least one touch in subsequent touch acquisition frames. 
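A minimal sketch of the windowed column cache described in the embodiment above is given below in C. The window width CACHE_COLS, the ROWS dimension, the data types and the perform_mutual_scan hook are assumptions of the sketch rather than required implementation details; the sliding direction mirrors the de-allocate and re-allocate steps recited above.

#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define CACHE_COLS 5     /* width of the sliding window (assumed) */
#define ROWS       9     /* nodes per column (assumed) */

/* Hypothetical hook that performs the actual mutual scan of one node. */
extern int16_t perform_mutual_scan(uint8_t col, uint8_t row);

static struct {
    uint8_t  first_col;                 /* sensor column held in slot 0 */
    bool     valid[CACHE_COLS];         /* per-column validity flags    */
    int16_t  data[CACHE_COLS][ROWS];    /* cached mutual values         */
} cache;

int16_t cached_mutual(uint8_t col, uint8_t row)
{
    /* Slide the window when the requested column falls outside it. */
    while (col >= cache.first_col + CACHE_COLS) {   /* beyond the right edge */
        memmove(&cache.data[0], &cache.data[1], sizeof(cache.data[0]) * (CACHE_COLS - 1));
        memmove(&cache.valid[0], &cache.valid[1], sizeof(cache.valid[0]) * (CACHE_COLS - 1));
        cache.valid[CACHE_COLS - 1] = false;        /* new right-most slot is invalid */
        cache.first_col++;
    }
    while (col < cache.first_col) {                 /* beyond the left edge */
        memmove(&cache.data[1], &cache.data[0], sizeof(cache.data[0]) * (CACHE_COLS - 1));
        memmove(&cache.valid[1], &cache.valid[0], sizeof(cache.valid[0]) * (CACHE_COLS - 1));
        cache.valid[0] = false;                     /* new left-most slot is invalid */
        cache.first_col--;
    }

    uint8_t slot = col - cache.first_col;
    if (!cache.valid[slot]) {                       /* miss: scan the column and fill */
        for (uint8_t r = 0; r < ROWS; r++)
            cache.data[slot][r] = perform_mutual_scan(col, r);
        cache.valid[slot] = true;
    }
    return cache.data[slot][row];                   /* hit: return the cached value */
}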
BRIEF DESCRIPTION OF THE DRAWINGS A more complete understanding of the present disclosure may be acquired by referring to the following description taken in conjunction with the accompanying drawings wherein: Figure 1 illustrates a schematic block diagram of an electronic system having a capacitive touch sensor, a capacitive touch analog front end and a digital processor, according to the teachings of this disclosure; Figures 1A to 1D illustrate schematic plan views of touch sensors having various capacitive touch sensor configurations, according to the teachings of this disclosure; Figures 1E and 1F illustrate schematic plan views of self and mutual capacitive touch detection of a single touch to a touch sensor, according to the teachings of this disclosure; Figures 1G to 1K illustrate schematic plan views of self and mutual capacitive touch detection of two touches to a touch sensor, according to the teachings of this disclosure; Figure 2 illustrates a schematic process flow diagram for multi-touch decoding of a touch sensor as shown in Figure 1, according to specific example embodiments of this disclosure; Figure 3 illustrates a graph of single touch peak detection data, according to specific example embodiments of this disclosure; Figure 4 illustrates a schematic plan diagram of potential touch and mutual touch locations of a touch sensor, according to specific example embodiments of this disclosure; Figure 5 illustrates a schematic plan view diagram of a touch sensor showing a cache data window thereof, according to specific example embodiments of this disclosure; Figure 6 illustrates a graph of self scan values and a table of mutual scan values for two touch peak detection data, according to specific example embodiments of this disclosure; Figures 7 and 8 illustrate schematic diagrams of historic and current point locations used for a point weighting example, according to the teachings of this disclosure; Figure 9 illustrates schematic drawings of a normal finger touch and a flat finger touch, according to the teachings of this disclosure; and Figures 10 to 19 illustrate schematic process flow diagrams for touch decoding, according to specific example embodiments of this disclosure. While the present disclosure is susceptible to various modifications and alternative forms, specific example embodiments thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific example embodiments is not intended to limit the disclosure to the particular forms disclosed herein, but on the contrary, this disclosure is to cover all modifications and equivalents as defined by the appended claims. DETAILED DESCRIPTION According to various embodiments, a series of optimized processes may be provided that scan touch sensors comprising a plurality of (electrically) conductive columns and rows arranged in a matrix on a surface, e.g., a touch sensor, and which identify and track a plurality of touches thereto. These processes may be further optimized for operation with a low-cost 8-bit microcontroller, according to specific embodiments of this disclosure. According to various embodiments, these processes utilize both self and mutual scans to perform an optimized scan of the plurality of conductive columns and rows used for touch sensing. 
Using that as the basis, the proposed processes may use a subset of the data from the plurality of conductive columns and rows in order to do all necessary processing for touch location identification and tracking. The various embodiments specifically focus on a low-resource solution for achieving touch location identification and tracking. According to various embodiments, self capacitances of either the conductive columns or rows may be measured first, then mutual capacitances of only those conductive columns or rows may be measured in combination with the other axis of conductive rows or columns. The various embodiments disclosed herein overcome the problem of transforming these self and mutual capacitance measurements into one or more touches and tracking these one or more touches through multiple frames of the capacitance measurements of the conductive columns or rows as described hereinabove. According to various embodiments, at least one process may scan a plurality of conductive columns and rows arranged in a matrix, and detect and track up to N touches, using various unique techniques disclosed and claimed herein. A process for peak detection examines slope ratios to accurately and quickly determine peak measurements. According to various embodiments, the challenge of tracking multiple touch locations through time may be solved by associating touch locations between frames of measurements of the plurality of conductive columns or rows. The various embodiments may allow for N touches and may compensate for different finger positions, e.g., a flat finger, which prevents missed touches and substantially eliminates incorrect touches. According to various embodiments, a process is provided for quickly identifying accurate touches instead of only looking at true peaks, wherein a "virtual" peak may be found by examining slope ratios using various techniques disclosed herein for touch identification. A combination of unique processes, according to the teachings of this disclosure, may be used to achieve better accuracy and speed for multi-touch decoding. For example, a peak detection process may be implemented as a "fuzzy" peak detection process that examines slope relationships, not just the signs of the slopes between the conductive columns measured. Furthermore, a so-called "nudge technique" may be used that "nudges" a potential touch location to a best location by examining adjacent values thereto. A "windowed" data cache may be used to accelerate processing in a low-capacity RAM environment, e.g., an 8-bit microcontroller. Interpolation may be used to increase the touch location resolution based upon measured values adjacent thereto. Multi-touch tracking may be used to identify and track N touches through time. Weighted matching may be used to best match touch points over time. "Area" detection may use a process that allows easy area and/or pressure detection based upon the sum of the nudged values for a given touch location. Significant accuracy and decoding speed improvements may be achieved by combining these techniques in a low-memory-capacity, low-cost digital processor, e.g., a microcontroller, microprocessor, digital signal processor (DSP), application specific integrated circuit (ASIC), programmable logic array (PLA), etc. Various embodiments may track eight or more touches on, for example but not limited to, a 3.5 inch capacitive touch sensor array. 
For example, such tracking may be achieved using a Microchip PIC18F46K22 (64K ROM, <4K RAM) microcontroller. Conventional capacitive touch decoding neither uses the techniques described more fully hereinafter nor exhibits these performance results. Referring now to the drawings, the details of example embodiments are schematically illustrated. Like elements in the drawings will be represented by like numbers, and similar elements will be represented by like numbers with a different lower case letter suffix. Referring to Figure 1, depicted is a schematic block diagram of an electronic system having a capacitive touch X-Y sensor, a capacitive touch analog front end and a digital processor, according to the teachings of this disclosure. A microcontroller integrated circuit device 112 may comprise a digital processor and memory 106, an analog-to-digital converter (ADC) controller 108, and a capacitive touch analog front end (AFE) 110. The microcontroller integrated circuit device 112 may be coupled to a touch sensor 102 comprised of a plurality of conductive columns 104 and rows 105 arranged in a matrix. It is contemplated and within the scope of this disclosure that the conductive rows 105 and/or conductive columns 104 may be printed circuit board conductors, wires, ITO coatings on a clear substrate, e.g., display/touch screen, etc., or any combinations thereof. Referring to Figures 1A to 1D, depicted are schematic plan views of touch sensors having various capacitive touch sensor configurations, according to the teachings of this disclosure. Figure 1A shows conductive columns 104 and conductive rows 105. Each of the conductive columns 104 has a "self capacitance" that may be individually measured when in a quiescent state, or all of the conductive rows 105 may be actively excited while each one of the conductive columns 104 has self capacitance measurements made thereof. Active excitation of all of the conductive rows 105 may provide a stronger measurement signal for individual capacitive measurements of the conductive columns 104. For example, if there is a touch detected on one of the conductive columns 104 during a self capacitance scan, then only that conductive column 104 having the touch detected thereon need be measured further during a mutual capacitance scan thereof. The self capacitance scan may only determine which one of the conductive columns 104 has been touched, but not at what location along the axis of that conductive column 104 the touch occurred. The mutual capacitance scan may determine the touch location along the axis of that conductive column 104 by individually exciting (driving) the conductive rows 105 one at a time and measuring a mutual capacitance value for each one of the locations on that conductive column 104 that intersects (crosses over) the conductive rows 105. There may be an insulating non-conductive dielectric (not shown) between and separating the conductive columns 104 and the conductive rows 105. Where the conductive columns 104 intersect with (cross over) the conductive rows 105, mutual capacitors 120 are thereby formed. During the self capacitance scan above, all of the conductive rows 105 may be either grounded or driven with a logic signal, thereby forming individual column capacitors associated with each one of the conductive columns 104. Figures 1B and 1C show interleaving of diamond shaped patterns of the conductive columns 104 and the conductive rows 105. 
This configuration may maximize exposure of each axis conductive column and/or row to a touch (e.g., better sensitivity) with a smaller overlap between the conductive columns 104 and the conductive rows 105. Figure 1D shows receiver (top) conductive rows (e.g., electrodes) 105a and transmitter (bottom) conductive columns 104a comprising comb-like meshing fingers. The conductive columns 104a and conductive rows 105a are shown in a side-by-side plan view, but normally the top conductive rows 105a would be over the bottom conductive columns 104a. Self and mutual capacitive touch detection is more fully described in Technical Bulletin TB3064, entitled "mTouch™ Projected Capacitive Touch Screen Sensing Theory of Operation" by Todd O'Connor, available at www.microchip.com; and commonly owned United States Patent Application Publication No. US 2012/0113047, entitled "Capacitive Touch System Using Both Self and Mutual Capacitance" by Jerry Hanauer; wherein both are hereby incorporated by reference herein for all purposes. Referring back to Figure 1, microcontrollers 112 now include peripherals that enhance the detection and evaluation of such capacitive value changes. More detailed descriptions of various capacitive touch system applications are more fully disclosed in Microchip Technology Incorporated application notes AN1298, AN1325 and AN1334, available at www.microchip.com, and all are hereby incorporated by reference herein for all purposes. One such application utilizes the capacitive voltage divider (CVD) method to determine a capacitance value and/or evaluate whether the capacitive value has changed. The CVD method is more fully described in Application Note AN1208, available at www.microchip.com; and a more detailed explanation of the CVD method is presented in commonly owned United States Patent Application Publication No. US 2010/0181180, entitled "Capacitive Touch Sensing using an Internal Capacitor of an Analog-To-Digital Converter (ADC) and a Voltage Reference," by Dieter Peter; wherein both are hereby incorporated by reference herein for all purposes. A Charge Time Measurement Unit (CTMU) may be used for very accurate capacitance measurements. The CTMU is more fully described in Microchip application notes AN1250 and AN1375, available at www.microchip.com, and commonly owned U.S. Patent Nos. US 7,460,441 B2, entitled "Measuring a long time period;" and US 7,764,213 B2, entitled "Current-time digital-to-analog converter," both by James E. Bartling; all of which are hereby incorporated by reference herein for all purposes. It is contemplated and within the scope of this disclosure that any type of capacitance measurement circuit having the necessary resolution may be used in determining the capacitance values of the plurality of conductive columns 104, and that a person having ordinary skill in the art of electronics and having the benefit of this disclosure could implement such a capacitance measurement circuit. Referring to Figures 1E and 1F, depicted are schematic plan views of self and mutual capacitive touch detection of a single touch to a touch sensor, according to the teachings of this disclosure. In Figure 1E a touch, represented by a picture of a part of a finger, is at approximately the coordinates of X05, Y07. During self capacitive touch detection each one of the rows Y01 to Y09 may be measured to determine the capacitance values thereof. 
Note that baseline capacitance values with no touches thereto for each one of the rows Y01 to Y09 have been taken and stored in a memory (e.g., memory 106 - Figure 1). Any significant capacitance change to the baseline capacitance values of the rows Y01 to Y09 will be obvious and taken as a finger touch. In the example shown in Figure 1E the finger is touching row Y07 and the capacitance value of that row will change, indicating a touch thereto. However it is still unknown from the self capacitance measurements where on this row the touch has occurred. Once the touched row (Y07) has been determined using the self capacitance change thereof, mutual capacitive detection may be used in determining where on the touched row (Y07) the touch has occurred. This may be accomplished by exciting, e.g., putting a voltage pulse on, each of the columns X01 to X12 one at a time while measuring the capacitance value of row Y07 when each of the columns X01 to X12 is individually excited. The column (X05) excitation that causes the largest change in the capacitance value of row Y07 will be the location on that row which corresponds to the intersection of column X05 with row Y07, thus the single touch is at point or node X05, Y07. Using self and mutual capacitance touch detection significantly reduces the number of row and column scans to obtain the X,Y touch coordinate on the touch sensor 102. In this example, nine (9) rows were scanned during self capacitive touch detection and twelve (12) columns were scanned during mutual capacitive touch detection for a total number of 9 + 12 = 21 scans. If individual x-y capacitive touch sensors for each node (location) were used then 9 x 12 = 108 scans would be necessary to find this one touch, a significant difference. It is contemplated and within the scope of this disclosure that the self capacitances of the columns X01 to X12 may be determined first and then mutual capacitances determined of a selected column(s) by exciting each row Y01 to Y09 to find the touch location on the selected column(s). Referring to Figures 1G to 1K, depicted are schematic plan views of self and mutual capacitive touch detection of two touches to a touch sensor, according to the teachings of this disclosure. In Figure 1G two touches, represented by a picture of parts of two fingers, are at approximately the coordinates of X05, Y07 for touch #1 and X02, Y03 for touch #2. During self capacitive touch detection each one of the rows Y01 to Y09 may be measured to determine the capacitance values thereof. Note that baseline capacitance values with no touches thereto for each one of the rows Y01 to Y09 have been taken and stored in a memory (e.g., memory 106 - Figure 1). Any significant capacitance changes to the baseline capacitance values of the rows Y01 to Y09 will be obvious and taken as finger touches. In the example shown in Figure 1H the first finger is touching row Y07 and the second finger is touching row Y03, wherein the capacitance values of those two rows will change, indicating touches thereto. However it is still unknown from the self capacitance measurements where on these two rows the touches have occurred. Once the touched rows (Y07 and Y03) have been determined using the self capacitance changes thereof, mutual capacitive detection may be used in determining where on these two touched rows (Y07 and Y03) the touches have occurred. 
Referring to Figure 1I, this may be accomplished by exciting, e.g., putting a voltage pulse on, each of the columns X01 to X12 one at a time while measuring the capacitance value of row Y07 when each of the columns X01 to X12 is individually excited. The column (X05) excitation that causes the largest change in the capacitance value of row Y07 will be the location on that row that corresponds to the intersection of column X05 with row Y07. Referring to Figure 1J, likewise measuring the capacitance value of row Y03 when each of the columns X01 to X12 is individually excited determines where on row Y03 the touch #2 has occurred. Referring to Figure 1K, the two touches are at points or nodes (X05, Y07) and (X02, Y03). It is contemplated and within the scope of this disclosure that if the capacitances of more than one of the selected rows, e.g., Y07 and Y03, can be measured simultaneously, then only one set of individual column X01 to X12 excitations is needed in determining the two touches to the touch sensor 102. Referring to Figure 2, depicted is a schematic process flow diagram for multi-touch decoding of a touch sensor as shown in Figure 1, according to specific example embodiments of this disclosure. A process for multi-touch decoding may comprise the steps of Data Acquisition 202, Touch Identification 204, Touch Tracking 206 and Data Output 208. The step of Touch Identification 204 may further comprise the steps of Peak Detection 210, Nudge 212 and Interpolation 214, more fully described hereinafter. Data Acquisition Data Acquisition 202 is the process of taking self and mutual capacitance measurements of the plurality of conductive columns 104 or conductive rows 105 to acquire touch identification data. The touch identification data may be further processed to locate potential touches on the touch sensor 102 using the process of Touch Identification 204 as described more fully hereinafter. Touch Identification Touch Identification 204 is the process of using the touch identification data acquired during the process of Data Acquisition 202 to locate potential touches on the touch sensor 102. The following is a sequence of process steps to determine which ones of the plurality of conductive columns 104 or conductive rows 105 to select that have a touch(es) thereto using self capacitance measurements thereof, and where on the selected conductive columns 104 or conductive rows 105 the touch(es) may have occurred using mutual capacitance measurements thereof. Peak Detection Peak detection 210 is the process of identifying where potential touch locations may be on the touch sensor 102. However, according to the teachings of this disclosure, instead of only looking at actual detected "peaks," peak detection may purposely be made "fuzzy," e.g., identifying potential peaks by looking for ratios of differences of slope values as well as slope "signs," not just a low-high-low value sequence. A "virtual" peak may be detected by examining slope ratios, e.g., a 2:1 slope ratio, wherein a change in slope may be identified as a potential peak. This may be repeated until no additional peaks are detected. Nudge Nudge 212 is the process of examining each adjacent location of a potential touch location once it has been identified. 
If an adjacent location has a greater value than the existing potential touch location, then the current potential touch location is eliminated and the adjacent location having the greater value is identified as the potential touch location (see Figure 5 and the description thereof hereinafter).

Interpolation
Once a touch location has been identified, Interpolation 214 is the process that examines the adjacent values to generate a higher resolution location.

Touch Tracking
Touch Tracking 206 is the process of comparing time sequential "frames" of touch identification data and then determining which touches are associated between sequential frames. A combination of weighting and "best guess" matching may be used to track touches through multiple frames during the process of Data Acquisition 202 described hereinabove. This is repeated for every peak detected and every touch that was identified on the previous frame. A "frame" is the set of self and mutual capacitive measurements of the plurality of capacitive touch sensors 104 taken in order to capture a single set of touches at a specific time. Each full set of scans (a "frame") of the self and mutual capacitance measurements of the plurality of conductive columns 104 or conductive rows 105 acquires the touch identification data of the touch sensor 102 at the given time associated with that frame. Touch Tracking 206 associates a given touch in one frame with a given touch in a subsequent frame. Touch tracking may create a history of touch frames and may associate the touch locations of a current frame with the touch locations of a previous frame or frames.

In order to associate a previous touch location with a current potential touch location, a "weighting" function may be used. The weight values ("weight" and "weight value" will be used interchangeably herein) between time sequential touch locations (of different frames) represent the likelihood that those time sequential touch locations are associated with each other. Distance calculations may be used to assign weight values between these associated touch locations. A "true" but complex and processor intensive calculation for determining the weight value between touch locations is:

Weight Value = SQRT[(Xprevious - Xcurrent)^2 + (Yprevious - Ycurrent)^2]   Eq. (1)

A simplified distance (weight value) calculation may be used that measures ΔX and ΔY and then sums them together:

Weight Value = ABS(Xprevious - Xcurrent) + ABS(Yprevious - Ycurrent)   Eq. (2)

The simplified weight value calculation, Eq. (2), creates a diamond shaped pattern of equal weight values instead of the circular pattern of the more complex weight value calculation, Eq. (1). Eq. (2) may be used to optimize the speed of the weight value calculations in a simple processing system, wherein the distance is calculated based upon the sum of the change in the X-distance and the change in the Y-distance.
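As a purely illustrative sketch of Eq. (1) and Eq. (2), the two weight calculations might be coded as follows; the touch_point_t structure and the choice of integer node coordinates are assumptions made here for the example, not definitions taken from this disclosure.

```c
#include <math.h>
#include <stdlib.h>

typedef struct {
    int x;  /* node or interpolated X coordinate */
    int y;  /* node or interpolated Y coordinate */
} touch_point_t;

/* Eq. (1): true Euclidean distance; accurate but needs a square root. */
static double weight_euclidean(touch_point_t prev, touch_point_t cur)
{
    double dx = (double)(prev.x - cur.x);
    double dy = (double)(prev.y - cur.y);
    return sqrt(dx * dx + dy * dy);
}

/* Eq. (2): Manhattan distance; integer-only, so it is cheap on a small
 * microcontroller, but contours of equal weight form a diamond rather
 * than a circle. A smaller weight means a more likely association. */
static int weight_manhattan(touch_point_t prev, touch_point_t cur)
{
    return abs(prev.x - cur.x) + abs(prev.y - cur.y);
}
```

With the Figure 8 values discussed below, for example, the A↔1 pairing would evaluate under Eq. (2) to |ΔX| + |ΔY| = 2 + 3 = 5.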
A better weight value is defined as a smaller distance between sequential touch locations. For each new touch location a weight value may be calculated for all touch locations from the previous frame. The new touch location is then associated with the previous touch location having the best weight value therebetween. If the previous touch location has already been associated with a different current touch location, the second-best weight value for each touch location may be examined. The touch location with the lower-cost second-best weight value may then be shifted to its second-best location, and the other touch location may keep the contested location as its best match. This process is repeated until all touch locations have been associated with previous frame touch locations, or have been identified as "new touches" having new locations with no touch locations from the previous frame close to them.

An alternative to the aforementioned weighting process may be a vector-based process that uses a vector created from the previous two locations to predict the most likely next location. This vector-based weighting process may use the same distance calculations as the aforementioned weighting process, running them from multiple points and modifying the weight values based upon the point from which the measurement was taken. By looking at the previous two locations of a touch, the next "most likely" location of that touch may be predicted. Once the extrapolated location has been determined, that location may be used as the basis for a weighting value. To improve matching on the extrapolated location, an "acceleration model" may be used to add weighting points along the vector to the extrapolated location and past the extrapolated location. These additional points assist in detecting changes in speed of the touch movement, but may not be ideal for determining direction of the touch motion.

Referring to Figures 7 and 8, depicted are schematic diagrams of historic and current point locations used for a point weighting example, according to the teachings of this disclosure. Once weights have been generated, the best combination of weight values and associated touches may be determined. Certain touch scenarios may cause nearly identical weight values, in which case the second-best weight values should be compared and the associations appropriately shifted. Depending upon the order of operations, points A and D may be associated first. As the weight values for B are generated, B-D is a better match than B-C. In this case the secondary weight values are examined: is it less costly to shift A to be associated with C, or to shift B to be associated with C? By extending this sequence of operations, all points can have their associations shifted for the best overall match, not just the best local match. Some caution may be needed to prevent infinite loops of re-weighting. This may be accomplished by limiting the number of shifts to a finite number.

Referring now to Figure 8, points A and B are existing points, and points 1 and 2 are "new" points that need to be associated.

Step 1) Calculate weight values between touch locations:
A↔1 weight = 5 ((ΔX = 2) + (ΔY = 3) = 5)
A↔2 weight = 4
B↔1 weight = 10
B↔2 weight = 5

Step 2) Select the "best" pair (lowest weight) for each existing touch location:
A↔2 weight = 4 and B↔2 weight = 5

Step 3) If more than one existing touch location pairs with a given new touch location, then look at the second-best pairing for each and the difference in weight values from the best pair to the second-best pair (the "cost"):
A↔1 (weight 5): cost = 1 = (A↔1 weight 5) - (A↔2 weight 4)
B↔1 (weight 10): cost = 5 = (B↔1 weight 10) - (B↔2 weight 5)

Step 4) Shift the pairing having the lowest cost, thereby allowing the other touch location to maintain its original pairing:
A↔1
B↔2

Step 5) Repeat steps 2) through 4) until all pairings are 1:1. If there are more new touch locations than existing touch locations, then start tracking a new touch location. If there are fewer new touch locations than existing touch locations, then the existing "worst match" touch locations may be lost and no longer tracked.
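The pairing procedure of Steps 1) through 5) might be sketched roughly as shown below. The point_t type, the array sizes, the MAX_SHIFTS cap on re-weighting, and the helper names are all assumptions made for this illustration; an actual firmware implementation could be organized quite differently.

```c
#include <limits.h>
#include <stdlib.h>

#define MAX_TOUCHES 8
#define MAX_SHIFTS  16   /* cap the re-weighting to avoid infinite loops */

typedef struct { int x, y; } point_t;

static int weight(point_t a, point_t b)            /* Eq. (2), Manhattan distance */
{
    return abs(a.x - b.x) + abs(a.y - b.y);
}

/* Lowest-weight current touch for a previous touch, optionally excluding one index. */
static int best_match(point_t prev, const point_t *cur, int n_cur, int exclude)
{
    int best = -1, best_w = INT_MAX;
    for (int c = 0; c < n_cur; c++) {
        if (c == exclude)
            continue;
        int w = weight(prev, cur[c]);
        if (w < best_w) {
            best_w = w;
            best = c;
        }
    }
    return best;
}

/* Pair each previous touch (n_prev <= MAX_TOUCHES) with a current touch.
 * Conflicts (two previous touches claiming the same current touch) are
 * resolved by moving the pairing whose "cost" (second-best weight minus
 * best weight) is lower to its second-best choice. */
void associate_touches(const point_t *prev, int n_prev,
                       const point_t *cur, int n_cur,
                       int assign[MAX_TOUCHES])
{
    for (int p = 0; p < n_prev; p++)
        assign[p] = best_match(prev[p], cur, n_cur, -1);

    for (int pass = 0; pass < MAX_SHIFTS; pass++) {
        int shifted = 0;
        for (int i = 0; i < n_prev && !shifted; i++) {
            for (int j = i + 1; j < n_prev && !shifted; j++) {
                if (assign[i] < 0 || assign[i] != assign[j])
                    continue;
                int second_i = best_match(prev[i], cur, n_cur, assign[i]);
                int second_j = best_match(prev[j], cur, n_cur, assign[j]);
                int cost_i = (second_i < 0) ? INT_MAX
                           : weight(prev[i], cur[second_i]) - weight(prev[i], cur[assign[i]]);
                int cost_j = (second_j < 0) ? INT_MAX
                           : weight(prev[j], cur[second_j]) - weight(prev[j], cur[assign[j]]);
                if (cost_i <= cost_j)
                    assign[i] = second_i;   /* cheaper to move touch i */
                else
                    assign[j] = second_j;   /* cheaper to move touch j */
                shifted = 1;
            }
        }
        if (!shifted)
            break;                          /* all pairings are 1:1 */
    }
}
```

With the Figure 8 inputs (A↔1 = 5, A↔2 = 4, B↔1 = 10, B↔2 = 5), the sketch first assigns both A and B to point 2, then shifts A to point 1 because its cost (1) is lower than B's cost (5), ending at the 1:1 pairing A↔1, B↔2.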
Flat Finger Identification
Referring to Figure 9, depicted are schematic drawings of a normal finger touch and a flat finger touch, according to the teachings of this disclosure. One challenge of identifying a touch is the "flat finger" scenario. This is when the side or flat part of a finger 1020, rather than the finger tip 1022, is placed on the touch sensor 102. Note that a flat finger 1020 may generate two or more potential touch locations 1024 and 1026. It is possible, using the teachings of this disclosure, to detect a flat finger 1020 by accumulating the sum of the values of all nodes nudged to each peak. If the sum of these values surpasses a threshold, then the touch is likely caused by a flat finger. If a flat finger touch is detected, then other touches that are near the flat finger peak(s) may be suppressed.

Data Output
Referring back to Figure 2, Data Output 208 is the process of providing the determined touch location coordinates in a data packet(s) to a host system for handling thereof.

Touch Determination
Given an array of touch data, the differences between the values thereof are examined and certain key scenarios are flagged as potential peaks for further examination. All touch data values below a threshold value may be ignored when determining touch locations.

Key Scenario 1: True Peak
Referring to Figure 3, identify the transition from a positive to a negative slope as a potential peak. This would be the point circled in column 7 of the example data values shown in Figure 3.

Key Scenario 2: Slope Ratio Beyond Threshold ("Fuzzy" Peak Detection)
A threshold of slope ratios may be used to flag additional peaks. The threshold value used may be, for example but is not limited to, 2:1, so instances where there is a change of slope greater than 2:1 may be identified as potential peaks. This applies to positive and negative slopes. This would be the point circled in column 6 of the example data values shown in Figure 3. Why not just look at the slope signs? Since the self scan covers only one axis of a two-axis sensor array (e.g., conductive rows 105 and conductive columns 104 of touch sensor 102, Figure 1), it is possible for two touches that are off by a single "bar" (e.g., column) to show only a single peak. With the example data, there could be two touches, one at 6,6 and another at 7,7 (see Figures 3 and 6). Without the additional peak detection, the touch at 6,6 may not be detected.

Nudge Location Refinement
Once a potential touch location is identified, each adjacent touch location may be examined to determine whether it has a greater value. If a greater value is present, the current potential touch location is eliminated and the touch location having the greater value is identified as a potential touch location. This process is repeated until a local peak has been identified.

Referring to Figure 3, depicted is a graph of single touch peak detection data, according to specific example embodiments of this disclosure. An example graph of data values for one column (e.g., column 7) of the touch sensor 102 is shown, wherein the maximum data value determined from the self and mutual capacitance measurements of column 7 occurs at the capacitive touch sensor 104 area located at row 7, column 7. All data values that are below a threshold value may be ignored, e.g., below about 12 in the graphical representation shown in Figure 3.
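A minimal sketch of this threshold-plus-slope screening, assuming signed integer data values along one scanned column and the 2:1 example ratio, might look like the following; the names and the flag encoding are illustrative assumptions only and are not part of this disclosure.

```c
#include <stdlib.h>

#define DATA_THRESHOLD 12   /* example noise floor from Figure 3 */

/* Flag potential peaks along one column of data values.
 * flags[i] is set to 1 for a true peak (positive-to-negative slope
 * transition) or for a "fuzzy" peak where the slope magnitude changes
 * by more than the 2:1 example ratio. */
void flag_potential_peaks(const int *values, int count, unsigned char *flags)
{
    for (int i = 0; i < count; i++)
        flags[i] = 0;

    for (int i = 1; i < count - 1; i++) {
        if (values[i] < DATA_THRESHOLD)
            continue;                               /* ignore values below the threshold */

        int left_slope  = values[i] - values[i - 1];
        int right_slope = values[i + 1] - values[i];

        if (left_slope > 0 && right_slope < 0) {
            flags[i] = 1;                           /* Key Scenario 1: true peak */
        } else if (abs(left_slope) > 2 * abs(right_slope) ||
                   abs(right_slope) > 2 * abs(left_slope)) {
            flags[i] = 1;                           /* Key Scenario 2: slope ratio beyond 2:1 */
        }
    }
}
```

Applied to the Figure 3 example values, row 7 (left slope 10, right slope -30) is flagged as a true peak, and row 6 (left slope 23, right slope 10) is flagged by the 2:1 ratio test, consistent with the two circled points discussed above and below.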
Only the data values taken at row 6 (data value = 30) and at row 7 (data value = 40) therefore need be processed in determining the location of a touch to the touch sensor 102. Slope may be determined by subtracting a sequence of adjacent row data values in a column to produce either a positive or negative slope value. When the slope value is positive the data values are increasing, and when the slope value is negative the data values are decreasing. A true peak may be identified as a transition from a positive to a negative slope. A transition from a positive slope to a negative slope is indicated at data value 422 of the graph shown in Figure 3. However, another touch may have occurred at column 6 that was not directly measured in the column 7 scan, but shows up as data value 420 during the column 7 scan. Without another test besides the slope sign transition, the potential touch at column 6 may be missed. Therefore a threshold of slope ratios may further be used to flag additional potential peaks. Slope is the difference between two adjacent data values. This threshold of slope ratios may be, for example but is not limited to, 2:1, so instances where there is a change of slope greater than 2:1 may be identified as another potential peak. This may apply to both positive and negative slopes. For example, the data value 420, taken at row 6, has a left slope of 23 (30 - 7) and a right slope of 10 (40 - 30). The data value 422, taken at row 7, has a left slope of 10 (40 - 30) and a right slope of -30 (10 - 40). The slope ratio for row 6 of 23:10 exceeds the example 2:1 threshold and would be labeled for further processing. All other data values are below the data value threshold and may be ignored.

Referring to Figure 4, depicted is a schematic plan diagram of potential touch and mutual touch locations of a touch sensor, according to specific example embodiments of this disclosure. Once a potential touch location is identified, each adjacent location thereto may be examined to determine whether any one of them may have a greater data value than the current potential touch location (labeled "C" in Figures 4(a) and 4(b)). If a greater data value is found, then the current potential touch location may be eliminated and the touch location having the greater value may be identified as a potential touch location. This is referred to herein as a Nudge 212 process and may be repeated until a data peak has been identified. During a data acquisition scan of a column of rows, only tier one nodes (labeled "1" in Figures 4(a) and 4(b) - adjacent locations to the current potential touch location) are examined. If any of these tier one nodes has a larger data value than the data value of the current potential touch location, the current touch location is shifted ("nudged") to the node having the highest data value and the Nudge process 212 is repeated. If a tier one node is already associated with a different potential peak, then no further searching is necessary and the current data peak may be ignored. Tier two nodes (labeled "2" in Figures 4(a) and 4(b) - adjacent locations to the tier one nodes) are examined when there is a potential of a large area activation of the touch sensor 102.
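The tier-one neighbor check at the heart of the Nudge process 212 might be sketched as follows; the grid dimensions, the get_mutual_value() accessor, and the peak-ownership array are assumptions invented for this example rather than features defined by the disclosure.

```c
#include <stdint.h>

#define ROWS 9
#define COLS 12

/* Assumed accessor returning the mutual scan data value at a node
 * (served from a cache or a fresh measurement in a real system). */
extern int32_t get_mutual_value(int row, int col);
/* Assumed map marking nodes already claimed by another potential peak. */
extern unsigned char claimed[ROWS][COLS];

/* Nudge a potential touch location to the neighboring (tier one) node with
 * the highest value, repeating until the node is a local peak. Returns 1
 * when a peak was found, or 0 when the peak collided with an existing one. */
int nudge_to_local_peak(int *row, int *col)
{
    static const int dr[4] = { -1, 1, 0, 0 };   /* north, south, west, east */
    static const int dc[4] = { 0, 0, -1, 1 };

    for (;;) {
        int best_r = *row, best_c = *col;
        int32_t best_v = get_mutual_value(*row, *col);

        for (int k = 0; k < 4; k++) {
            int r = *row + dr[k], c = *col + dc[k];
            if (r < 0 || r >= ROWS || c < 0 || c >= COLS)
                continue;                       /* no valid node in that direction */
            int32_t v = get_mutual_value(r, c);
            if (v > best_v) {
                best_v = v;
                best_r = r;
                best_c = c;
            }
        }

        if (best_r == *row && best_c == *col)
            return 1;                           /* local peak reached */
        if (claimed[best_r][best_c])
            return 0;                           /* already belongs to another peak */
        *row = best_r;                          /* shift ("nudge") the current node */
        *col = best_c;
    }
}
```

Tier two nodes and the caching described next are omitted from this sketch for brevity.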
After one conductive column 104 has been scanned for mutual capacitance values, the Nudge process 212 may be sped up by storing the mutual capacitance data values of that one column in a cache memory, then performing the Nudge process 212 first on the tier one nodes and then on the tier two nodes of that one column from the mutual capacitance data values stored in the cache memory. Only after there are no further nudges to do in that one column will the Nudge process 212 examine the tier one and tier two nodes from the mutual capacitance measurement scans of the two adjacent columns on either side of the column having the Nudge process 212 performed thereon.

Interpolation of the potential touch location may be performed by using the peak data value node (touch location) as well as each node adjacent thereto (e.g., the tier one nodes from a prior Nudge process 212) to create sub-steps between each node. For example, but not limited to, 128 steps may be created between each node. Referring to Figure 4(c), node A is the potential touch location and nodes B, C, D and E are the tier one nodes adjacent thereto. The interpolated X,Y location may be found using the following equations:

Location X = ((D value - B value) / A value) * 64
Location Y = ((E value - C value) / A value) * 64

It is contemplated and within the scope of this disclosure that variations of the above equations may be used based upon the ratio of the values and the sign of the numerator of the division.

Referring to Figure 5, depicted is a schematic plan view diagram of a touch sensor showing a cache data window thereof, according to specific example embodiments of this disclosure. The conductive columns 104 of the touch sensor 102 may be scanned column by column for self capacitance values until all conductive columns 104 have been scanned. Each conductive column 104 indicating a potential touch from the self capacitance data may be sequentially scanned to determine the mutual capacitance values thereof (touch data), and when peaks are discovered they may be processed contemporaneously with the column scan. Furthermore, the touch data may be stored in a cache memory for further processing. Since the Nudge process 212 looks at the first tier nodes and then, if necessary, the second tier nodes, not all of the touch data from all of the conductive columns 104 need be stored at one time. This allows a simple caching system using a minimum amount of random access memory (RAM), for example, storing five columns of touch data in a cache. The five columns are contiguous, and the cache window may move across the columns 104 of the touch sensor 102 one column 104 at a time. It is contemplated and within the scope of this disclosure that more or fewer than five columns of touch data may be stored in a cache memory and processed therefrom, and/or that self capacitance scanning by rows instead of columns may be used. All descriptions herein are equally applicable to self capacitance scanning of rows and then mutual capacitance scanning by columns of those row(s) selected from the self capacitance scan data.
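As an illustration only, a five-column sliding cache of the kind described above might service mutual scan requests roughly as follows (the request handling rules are described further below and in connection with Figure 19). The array sizes, the measure_mutual_column() primitive, and the window bookkeeping are assumptions for this sketch, not details taken from the disclosure.

```c
#include <stdint.h>
#include <string.h>

#define ROWS        9
#define CACHE_COLS  5        /* contiguous columns kept in RAM */

/* Assumed primitive: mutual scan one whole column into a row buffer. */
extern void measure_mutual_column(int col, int32_t values[ROWS]);

static int32_t cache[CACHE_COLS][ROWS];
static unsigned char cache_valid[CACHE_COLS];
static int window_start = 0;  /* touch-sensor column mapped to cache slot 0 */

/* Return the mutual scan value for (row, col), scanning on a miss and
 * sliding the cache window one column at a time when the request falls
 * outside the window. */
int32_t cached_mutual_value(int row, int col)
{
    /* Slide the window left until the requested column is inside it. */
    while (col < window_start) {
        memmove(&cache[1], &cache[0], sizeof(cache[0]) * (CACHE_COLS - 1));
        memmove(&cache_valid[1], &cache_valid[0], CACHE_COLS - 1);
        cache_valid[0] = 0;
        window_start--;
    }
    /* Slide the window right until the requested column is inside it. */
    while (col >= window_start + CACHE_COLS) {
        memmove(&cache[0], &cache[1], sizeof(cache[0]) * (CACHE_COLS - 1));
        memmove(&cache_valid[0], &cache_valid[1], CACHE_COLS - 1);
        cache_valid[CACHE_COLS - 1] = 0;
        window_start++;
    }

    int slot = col - window_start;
    if (!cache_valid[slot]) {             /* miss: scan the column and cache it */
        measure_mutual_column(col, cache[slot]);
        cache_valid[slot] = 1;
    }
    return cache[slot][row];              /* hit: return the stored touch data */
}
```

A real implementation might track the de-allocation of columns explicitly as in Figure 19; this sketch simply invalidates the slot that slides out of the window.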
Whenever a mutual scan of a first or second tier node (capacitive touch sensor 104) is requested, it may first be requested from the cache memory. If the requested node touch data is present in the cache memory, the cache memory returns the requested touch data of that first or second tier node. However, if the requested touch data is not present in the cache memory, then the following may occur: 1) if the column of the requested touch data is in the range of the cache window, then perform the mutual scan of that column and add the touch data to the cache memory; or 2) if the column of the requested touch data is not in the range of the present cache window, then shift the cache window range, perform the mutual scan of the new column, and add the resulting touch data from the new cache window to the cache memory.

Referring to Figure 6, depicted are a graph of self scan values and a table of mutual scan values for two touch peak detection data, according to specific example embodiments of this disclosure. Since a self scan is performed on only one axis (e.g., the columns), it is possible for two touches that are off by a single column to show only a single peak. For the example data values shown in Figure 6, two touches may have occurred, one at self scan data value 422 and the other indicated at self scan data value 420. Without being aware of changes of slope greater than 2:1, the potential touch represented by self scan data value 420 may have been missed. A first touch may cause data value 422 and a second touch may cause data value 420. Peak Detection and nudging, as described hereinabove, may further resolve these multiple touches.

Referring to Figures 10 to 19, depicted are schematic process flow diagrams for touch decoding, according to specific example embodiments of this disclosure. Figure 10 shows a general overview of possible processes for multi-touch decoding in a touch sensor 102 enabled device. It is contemplated and within the scope of this disclosure that more, fewer and/or different processes may be utilized with a touch sensor 102 enabled device and still be within the scope, intent and spirit of this disclosure. In step 1050 a device is started, actuated, etc., when power is applied to the device in step 1052. In step 1054 the device may be initialized, and thereafter in step 1056 the process of touch identification may begin. In step 1058 touch tracking may be performed on those touches identified in step 1056. In step 1060 the touch data may be further processed if necessary; otherwise it may be transmitted to the processing and control logic of the device for display and/or control of the device's intended purpose(s) in step 1062.

In the descriptions of the following process steps, references to a "top" or "north" channel or node will mean the channel or node above another channel or node, a "bottom" or "south" channel or node will mean the channel or node below another channel or node, a "left" or "west" channel or node will mean the channel or node to the left of another channel or node, and a "right" or "east" channel or node will mean the channel or node to the right of another channel or node.

Referring to Figure 11, a flow diagram of a touch identification process 204 is shown and described hereinafter. In step 1102 the touch identification process 204 (Figure 2) begins. In step 1104 a self scan of all channels on one axis may be performed, e.g., either all columns or all rows. In step 1106 the first self scan value may be examined. In step 1108 the (first or subsequent) self scan value may be compared to a self touch threshold value. A self peak detection process 1100 may comprise steps 1110 to 1118, and is part of the overall Peak Detection process 210 (Figure 2).
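Before walking through the individual flowchart steps, the slope tests of the self peak detection process 1100 (steps 1110 to 1118) might be summarized, purely as an assumed sketch, by a helper such as the following; the 2x and fifty-percent figures are the example values given below, and the function and parameter names are invented for illustration.

```c
#include <stdint.h>

/* Decide whether a self-scan channel qualifies for mutual scanning,
 * mirroring the example tests of steps 1110 to 1118:
 *   - left slope positive and right slope negative (a true peak), or
 *   - left slope positive and more than twice the right slope, or
 *   - left slope negative and greater than fifty percent of the right slope. */
int self_channel_qualifies(int32_t left_value, int32_t value, int32_t right_value,
                           int32_t self_touch_threshold)
{
    if (value < self_touch_threshold)
        return 0;                                    /* step 1108: below threshold */

    int32_t left_slope  = value - left_value;        /* step 1110 */
    int32_t right_slope = right_value - value;       /* step 1112 */

    if (left_slope > 0 && right_slope < 0)
        return 1;                                    /* step 1114: true peak */
    if (left_slope > 0 && left_slope > 2 * right_slope)
        return 1;                                    /* step 1116: example 2x ratio */
    if (left_slope < 0 && left_slope > right_slope / 2)
        return 1;                                    /* step 1118: example 50 percent */
    return 0;
}
```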
If the self scan value is less than the self touch threshold value as determined in step 1108, then step 1238 (Figure 12) may determine whether there are any additional self scan values to be examined. However, if the self scan value is equal to or greater than the self touch threshold value as determined in step 1108, then step 1110 may calculate a left slope between the self scan value and the self scan value of the channel to the left of the present channel. Then step 1112 may calculate a right slope between the self scan value and the self scan value of the channel to the right of the present channel. Step 1114 determines whether the left slope may be greater than zero (positive slope) and the right slope may be less than zero (negative slope), identifying a peak. If a yes result in step 1114, then step 1120 may perform mutual scan measurements on each node of the channel selected from the self scan data. If a no result in step 1114, then step 1116 determines whether the left slope may be greater than zero (positive slope) and, for example but not limited to, more than two times (twice) greater than the right slope. If a yes result in step 1116, then in step 1120 mutual scan measurements may be performed on each node of the selected self scan channel. If a no result in step 1116, then step 1118 determines whether the left slope may be, for example but is not limited to, less than zero (negative slope) and greater than a percentage of the right slope, e.g., fifty (50) percent. If a yes result in step 1118, then step 1120 may perform mutual scan measurements on each node of the channel selected from the self scan data. If a no result in step 1118, then step 1238 (Figure 12) may determine whether there are any additional columns to be examined based upon the self scan values thereof. Step 1122 may examine a first mutual scan value.

Referring to Figure 12, a mutual peak detection process 1244 may comprise steps 1226 to 1234, and is part of the overall Peak Detection process 210 (Figure 2). Step 1224 may compare the (first or subsequent) mutual scan value to a mutual touch threshold value, wherein if the mutual scan value is less than the mutual touch threshold value then step 1236 may determine whether there are any additional mutual scan values to be examined. However, if the mutual scan value is equal to or greater than the mutual touch threshold value, then step 1226 may calculate a slope to the next mutual scan value node, and then step 1228 may calculate a slope to the previous mutual scan value node. Step 1230 determines whether the next slope may be less than zero (negative slope) and the previous slope may be greater than zero (positive slope). If a yes result in step 1230, then step 1350 (Figure 13) may start the nudge process 212 and/or the interpolation process 214 (Figure 2). If a no result in step 1230, then step 1232 determines whether the next slope may be, for example but is not limited to, greater than zero (positive slope) and less than a percentage of the previous slope. If a yes result in step 1232, then step 1350 (Figure 13) may start the nudge process 212 and/or the interpolation process 214 (Figure 2). If a no result in step 1232, then step 1234 determines whether the next slope may be, for example but is not limited to, less than zero (negative slope) and greater than the previous slope. If a yes result in step 1234, then step 1350 (Figure 13) may start the nudge process 212 and/or the interpolation process 214 (Figure 2).
If a no result in step 1234, then step 1236 determines whether there may be any additional mutual values to be examined. If a yes result in step 1236, then step 1242 may examine a next mutual value. If a no result in step 1236, then step 1238 determines whether there may be any additional self scan values to be examined. If a yes result in step 1238, then step 1240 examines a next self scan value that may be returned to step 1108 (Figure 11) for further processing thereof. If a no result in step 1238, then in step 1244 a touch detection frame may be complete.

Referring to Figures 13-15, flow diagrams of a nudge process 212 and an interpolation process 214 (Figure 2) are shown and described hereinafter. Step 1350 may start the nudge process 212 and/or the interpolation process 214 by using a peak location from the touch identification process 204 (Figure 2) and may comprise the following process steps: Step 1352 determines whether there may be a valid node to the north. If a no result in step 1352, then continue to step 1360. If a yes result in step 1352, then step 1354 may make a mutual scan measurement of the node to the north. Step 1356 determines whether the mutual scan data of the north node may be greater than that of the current node. If a no result in step 1356, then continue to step 1360. If a yes result in step 1356, then in step 1358 the north node may become the current node, and then continue to step 1486 (Figure 14). Step 1360 determines whether there may be a valid node to the south. If a no result in step 1360, then continue to step 1470 (Figure 14). If a yes result in step 1360, then step 1362 may make a mutual scan measurement of the node to the south. Step 1364 determines whether the mutual scan data of the south node may be greater than that of the current node. If a no result in step 1364, then continue to step 1470 (Figure 14). If a yes result in step 1364, then in step 1366 the south node may become the current node, and then continue to step 1486 (Figure 14).

Referring to Figure 14, step 1470 determines whether there may be a valid node to the east. If a no result in step 1470, then continue to step 1478. If a yes result in step 1470, then step 1472 may make a mutual scan measurement of the node to the east. Step 1474 determines whether the mutual scan data of the east node may be greater than that of the current node. If a no result in step 1474, then continue to step 1478. If a yes result in step 1474, then in step 1476 the east node may become the current node, and then continue to step 1486. Step 1478 determines whether there may be a valid node to the west. If a no result in step 1478, then continue to step 1502 (Figure 15). If a yes result in step 1478, then step 1480 may make a mutual scan measurement of the node to the west. Step 1482 determines whether the mutual scan data of the west node may be greater than that of the current node. If a no result in step 1482, then continue to step 1502 (Figure 15). If a yes result in step 1482, then in step 1484 the west node may become the current node. Step 1486 determines whether a touch point may already exist at the selected node. If a no result in step 1486, then continue to step 1352 (Figure 13). If a yes result in step 1486, then step 1488 may eliminate the current peak, and then continue to step 1236 (Figure 12).

Referring to Figure 15, an interpolation process 214 may comprise steps 1502 to 1518. Step 1502 determines whether there may be a valid node to the left.
If a no result in step 1502, then continue to step 1510, wherein the left node value may be defined as the center value minus the right value, and then continue to step 1506. If a yes result in step 1502, then step 1504 may perform a mutual scan measurement on the node to the left. Then step 1506 determines whether there may be a valid node to the right. If a no result in step 1506, then continue to step 1512, wherein the right node value may be defined as the center value minus the left value, and then continue to step 1516. If a yes result in step 1506, then step 1508 may perform a mutual scan measurement on the node to the right. Step 1516 may determine a fine position by subtracting the left value from the right value, dividing the difference thereof by the center value, and then multiplying the result by, for example but not limited to, the number 64. It is contemplated and within the scope and spirit of this disclosure that many ways of determining valid peaks and nodes may be used, as one having ordinary skill in the art of touch detection and tracking could readily implement based upon the teachings of this disclosure. After step 1516 has completed the aforementioned calculations, step 1514 determines whether an interpolation may have been performed for each axis. If a no result in step 1514, then step 1518 may interpolate another axis, and thereafter steps 1502 to 1516 may be repeated, with "above" replacing "left" and "below" replacing "right" in each step. If a yes result in step 1514, then step 1520 may add this touch point to a list of all detected touch points. Then step 1522 may return to step 1236 (Figure 12) for any additional mutual scan values to be examined.

Referring to Figures 16, 17 and 18, flow diagrams of a touch tracking process 206 are shown and described hereinafter. In step 1602 the touch tracking process 206 may start by using the previously found and current touch locations. Step 1604 determines whether there may be any current touch locations. If a yes result in step 1604, then step 1606 may select the first of the current touch locations, and thereafter may continue to step 1722 (Figure 17). If a no result in step 1604, then step 1610 determines whether there may be any previous touch location(s). If a yes result in step 1610, then step 1612 may select the first previous touch location. If a no result in step 1610, then at step 1611 tracking is complete. Step 1614 determines whether the previous touch location may be associated with a current touch location. If a no result in step 1614, then step 1608 may assert an output of "touch no longer present at previous touch location, stop tracking," and then continue to step 1616. If a yes result in step 1614, then step 1616 determines whether there may be any more previous touch locations. If a no result in step 1616, then at step 1620 tracking touch locations is complete and the touch location data may be transmitted as data output 208 (Figure 2) for further processing by the microcontroller 112 (Figure 1). If a yes result in step 1616, then step 1618 may select the next previous touch location, and thereafter return to step 1614.

Referring to Figure 17, step 1722 determines whether there may be any previous touch locations. If a no result in step 1722, then continue to step 1868 (Figure 18), wherein a "new touch to track" is identified at the current location, and thereafter continue to step 1856 (Figure 18). If a yes result in step 1722, then step 1724 may set a temporary weight value to a maximum weight value.
Step 1726 may select the first of the previous touch locations. Then step 1728 may measure the distance between the selected current touch location and the selected previous touch location to determine a current distance (weight value) therebetween. Step 1730 determines whether the current weight value may be less than the temporary weight value. If a yes result in step 1730, then step 1732 may set the temporary weight value to the current weight value, record the selected previous touch location as a temporary location, and continue to step 1734. If a no result in step 1730, then step 1734 determines whether there may be more previous touch locations. If a yes result in step 1734, then step 1736 may select the next previous touch location, and thereafter return to step 1728. If a no result in step 1734, then step 1738 determines whether the temporary location may have already been assigned to a different current location. If a yes result in step 1738, then step 1740 may calculate a next worst weight value for the current location and for the assigned current location, and thereafter continue to step 1860 (Figure 18). If a no result in step 1738, then continue to step 1850 (Figure 18).

Referring to Figure 18, step 1850 determines whether the weight value may be below a maximum association threshold. If a no result in step 1850, then step 1854 may identify a new touch location for tracking. If a yes result in step 1850, then step 1852 may assign a new temporary location to the current location and then continue to step 1856. Step 1860 determines whether the next worst weight value for the current location may be less than the next worst weight value for the assigned location. If a yes result in step 1860, then step 1862 may set the temporary location to the next worst location and thereafter continue to step 1856. If a no result in step 1860, then step 1864 may set the assigned location to its next worst weight value. Step 1866 may select the moved assignment location and thereafter return to step 1722 (Figure 17). Step 1856 determines whether there may be more current touch locations. If a yes result in step 1856, then step 1858 may select the next current touch location and thereafter return to step 1722 (Figure 17).

Referring to Figure 19, depicted is a column cache process flow diagram, according to specific example embodiments of this disclosure. Step 1902 may receive a mutual scan location request. Step 1904 determines whether the requested mutual scan location may be stored in the cache memory. If a yes result in step 1904, then step 1920 determines whether the mutual scan data stored in the cache memory may be valid. If a yes result in step 1920, then step 1922 may return the mutual scan data from the cache memory. If a no result in step 1920, then step 1918 may perform a mutual scan at the requested location, wherein step 1916 may write the mutual scan data to a location in the cache memory and then return back to step 1922. If a no result in step 1904, then step 1906 determines whether the requested touch location may be beyond the right edge of the cache. If a yes result in step 1906, then step 1908 may de-allocate the left-most column of mutual scan data from the cache memory. In step 1910 the de-allocated mutual scan data may be allocated to the right edge of the cache memory so as to move the edge values thereof, and thereafter return to step 1904. If a no result in step 1906, then step 1914 may de-allocate the right-most column of data from the cache memory.
In step 1912 the de-allocated mutual scan data may be allocated to the left edge of the cache memory so as to move the edge values thereof, and thereafter return to step 1904. While embodiments of this disclosure have been depicted, described, and are defined by reference to example embodiments of the disclosure, such references do not imply a limitation on the disclosure, and no such limitation is to be inferred. The subject matter disclosed is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent art and having the benefit of this disclosure. The depicted and described embodiments of this disclosure are examples only, and are not exhaustive of the scope of the disclosure. |
According to some embodiments, systems for improved blower fans are provided. In some embodiments, systems may include a casing (210) comprising an inlet (212, 214) to accept a fluid and an outlet (216) to evacuate the fluid. The systems may further comprise an impeller disposed within the casing, comprising a hub (220) and one or more impeller blades (222) coupled to the hub. In some embodiments, the inlet of the casing may be shaped (218) to reduce the amount of fluid that evacuates the casing via the inlet due to pressure within the casing. |
1. A system comprising:
a housing including:
an inlet to accept a fluid; and
an outlet to evacuate the fluid; and
an impeller disposed within the housing, comprising:
a hub; and
one or more impeller blades coupled to the hub;
wherein the inlet of the housing is shaped to reduce the amount of fluid vented from the housing via the inlet due to pressure within the housing.
2. The system of claim 1, wherein the inlet of the housing includes a portion that extends into the path traveled by the one or more impeller blades.
3. The system of claim 2, wherein the one or more impeller blades comprise a groove into which the portion of the inlet extends.
4. The system of claim 1, wherein the inlet of the housing tapers toward the one or more impeller blades.
5. The system of claim 4, wherein the one or more impeller blades are tapered to match the taper of the inlet.
6. The system of claim 1, wherein the inlet is at least partially non-circular in shape.
7. The system of claim 1, wherein the inlet is generally circular in shape and is eccentrically positioned with respect to an axis about which the one or more impeller blades rotate.
8. The system of claim 1, wherein the shape of the inlet is at least partially defined by an apparatus coupled to the housing.
9. A system comprising:
a housing including:
an inlet to accept a fluid; and
an outlet to evacuate the fluid;
an impeller disposed within the housing, comprising:
a hub; and
one or more impeller blades coupled to the hub; and
a path for a portion of the fluid that is evacuated from the inlet of the housing.
10. The system of claim 9, wherein the path is defined at least in part by a device coupled to the housing.
11. The system of claim 9, wherein the housing comprises two adjacent walls defining the inlet, wherein the walls are spaced apart to further define a region between the walls.
12. The system of claim 11, wherein the path comprises at least a portion of the region between the walls.
13. The system of claim 9, wherein the path comprises a plurality of apertures.
14. A system comprising:
a housing including:
an inlet to accept a fluid; and
an outlet to evacuate the fluid;
an impeller disposed within the housing, comprising:
a hub; and
one or more impeller blades coupled to the hub; and
one or more vanes that direct fluid into the inlet of the housing.
15. The system of claim 14, wherein the one or more vanes are coupled to the housing.
16. The system of claim 14, wherein the one or more vanes are coupled to an object adjacent the inlet.
17. The system of claim 14, wherein the one or more vanes are shaped to direct fluid into the inlet without substantially disrupting the flow of fluid.
18. A system comprising:
a processor;
a memory that stores instructions to be executed by the processor;
a blower fan that directs air to the processor;
a backflow restriction inlet that provides air to the blower fan; and
a battery that powers the processor and the blower fan.
19. The system of claim 18, wherein the backflow restriction inlet includes a funnel to direct air into the blower fan and substantially isolates pressure within the blower fan from the incoming air flow.
20. The system of claim 18, wherein the backflow restriction inlet comprises at least one passage to evacuate backflow from the blower fan.
21. The system of claim 18, wherein the backflow restriction inlet includes one or more vanes to direct air into the blower fan.
22. The system of claim 18, wherein the backflow restriction inlet comprises an inlet opening in the blower fan, and wherein the inlet opening is shaped to reduce backflow.
23. The system of claim 22, wherein the inlet opening is at least partially non-circular in shape.
24. The system of claim 22, wherein the inlet opening is generally circular in shape and is eccentrically positioned about an axis about which the impeller of the blower fan rotates.
25. The system of claim 18, wherein the system is a notebook computer. |
System for Improved Blower Fan

Background
Fans are commonly used to promote heat dissipation from electronic devices. In some applications, such as applications where space is limited (e.g., a notebook computer), a blower fan is used to direct air within and/or from the electronic device. Because electronic devices continuously generate a large amount of heat that must be removed, the efficiency of the blower fan becomes increasingly important. However, a blower fan often discharges a certain amount of air back into the inlet air stream, thereby disrupting the flow of air entering the blower fan. This backflow effect reduces the efficiency of the blower fan.

Drawings
Figure 1 is a cross-sectional view of a system.
Figure 2A is a cross-sectional view of a system in accordance with some embodiments.
Figure 2B is a cross-sectional view of a system in accordance with some embodiments.
Figure 3A is a cross-sectional view of a system in accordance with some embodiments.
Figure 3B is a cross-sectional view of a system in accordance with some embodiments.
Figure 4A is a cross-sectional view of a system in accordance with some embodiments.
Figure 4B is a perspective view of a system in accordance with some embodiments.
Figure 5A is a perspective view of a system in accordance with some embodiments.
Figure 5B is a perspective view of a system in accordance with some embodiments.
Figure 6 is a graph illustrating an improvement of a system in accordance with some embodiments.
Figure 7 is a block diagram of a system in accordance with some embodiments.

Detailed Description
Referring first to Figure 1, a cross-sectional view of system 100 is shown. The various systems described herein are illustrative and are not intended to limit the described embodiments. Different types, layouts, numbers, and configurations of any of the systems described herein can be used without departing from the scope of some embodiments. Fewer or more components than shown with respect to the systems described herein may be used without departing from some embodiments.

System 100 can include, for example, a housing 110 that includes a first inlet 112, a second inlet 114, and/or an outlet 116. System 100 may also or alternatively include an impeller hub 120 and/or impeller blades 122. System 100 can be, for example, or include, a fan such as a blower fan. The impeller blades 122 (and/or the hub 120) may, for example, rotate and/or swirl within the housing 110. In some configurations, fewer or more components than shown in Figure 1 may be included within system 100. Housing 110 may, for example, include fewer or more inlets 112, 114 and/or outlets 116.

In some configurations, the impeller blades 122 (and/or the hub 120) can rotate within the housing 110. The swirling of the impeller blades 122 may, for example, cause air 130 to enter the inlets 112, 114. In other words, the impeller blades 122 can draw air 130 into the housing 110. The swirling of the impeller blades 122 may also cause air 132 to exit the housing 110. Air 132 may, for example, be caused to exit through outlet 116. In this manner, system 100 can be used as a typical blower fan, drawing air 130 in axially (e.g., along the axis about which its impeller blades 122 and/or hub 120 swirl) and discharging air 132 laterally and/or centrifugally.

In some configurations, operation of system 100 causes some of the air 134 to exit the inlets 112, 114. The swirling of the impeller blades 122 within the housing 110 may, for example, result in a zone of increased pressure within the housing 110.
In some configurations, this increased pressure may cause some of the air 134 to exit the inlets 112, 114. Other factors such as air flow vortex may also or alternatively contribute to and/or cause air 134 to exit one or more of the inlets 112, 114. This "return" air 134 may interfere with the air 130 entering the housing 110. Return air 134, for example, causes turbulence, friction, vortexing, and/or other disturbances in the inlet flow of air 130 entering the housing. In some configurations, disturbances in the flow of inlet air 130 may reduce the efficiency and/or performance of system 100.Return air 134 may reduce the flow of air that system 100 can provide and/or move, for example by slowing the flow of inlet air 130. In other words, less air 132 exits the outlet 116, and thus less air 132 can be utilized to cool the electronic components (not shown in Figure 1). In a typical configuration, the more air 132 the system 100 can direct to the electrical components, the better the cooling effect. The return air 134 and/or the resulting effects may therefore reduce the effectiveness of the system providing and/or promoting cooling.Turning to FIG. 2A, FIG. 2A shows a cross-sectional view of system 200 in accordance with some embodiments. System 200 includes, for example, a housing 210 that includes a first inlet 212, a second inlet 214, an outlet 216, and/or a shaped portion 218. In some embodiments, hub 220 and/or one or more impeller blades 222 may be disposed within housing 210. Impeller blades 222 may, for example, be coupled to and/or integrated with hub 220. According to some embodiments, the impeller blades 222 may include a shaped portion 224. In some embodiments, components 210, 212, 214, 216, 220, 222 of system 200 may be similar in construction and/or functionality to similarly named components described in connection with FIG. In some embodiments, fewer or more components than those shown in FIG. 2 may be included within system 200.According to some embodiments, the shaped portion 218 of the housing 210 can be configured to reduce the amount of backflow produced by the system 200. The shaped portion 218 can, for example, facilitate isolation of any higher pressure regions within the housing 210 from the inlets 212, 214. The shaped portion 218 may also or alternatively block some and/or substantial amounts of return air by a path generally taken to prevent air from exiting the housing 210 via the inlets 212, 214. In some embodiments, the shaped portion 218 can be or include a tapered portion of the housing 210.For example, as shown in FIG. 2A, the shaped portion 218 can be the portion of the housing that tapers toward the impeller blades 222. The taper of the shaped portion 218 may, for example, at least partially separate any higher pressure zones (eg, near the tip end of the impeller blades 222) from the air entering the inlets 212, 214 (not shown in FIG. 2A), and/or may be at least The flow of air directed to the inlets 212, 214 is partially blocked (eg, because of vortices and/or eddies caused by the impeller blades 222). The shaped portion 218 can reduce and/or substantially eliminate backflow, at least because the higher pressure zone may cause and/or cause backflow by causing air to exit from the housing through the inlets 212, 214. In some embodiments, the shaped portion 218 can reduce and/or substantially eliminate such backflow, at least because air flow directed from within the housing 210 to the inlets 212, 214 can disrupt the inlet air flow. 
In some embodiments, the shaped portion 218 can form a funnel to direct air into the inlets 212, 214 and reduce the amount of return air that can escape from the housing 210 via the inlets 212, 214.In some embodiments, the impeller blades 222 may also or alternatively include a shaped portion 224. The shaped portion 224 of the impeller blade 222 can be shaped, for example, to substantially match the shaped portion 218 of the housing 210. According to some embodiments, the utilization of both shaped portions 218, 224 may further facilitate preventing and/or reducing backflow. Reducing the size of the air gap between the impeller blades 222 and the housing 210 may, for example, reduce the likelihood (and/or return flow) that causes air to enter the inlets 212, 214 to cause backflow.Referring now to Figure 2B, Figure 2B illustrates a cross-sectional view of system 200 in accordance with some embodiments. In some embodiments, system 200 can be similar to system 200 described in connection with FIG. 2A. System 200 can be configured, for example, to provide a more efficient flow of air for cooling by reducing backflow effects. In some embodiments, system 200 can include a housing 210 that includes a first inlet 212, a second inlet 214, an outlet 216, and/or a shaped portion 218. In some embodiments, hub 220 and/or one or more impeller blades 222 may be disposed within housing 210. According to some embodiments, the impeller blades 222 may include a shaped portion 224. In some embodiments, components 210, 212, 214, 216, 218, 220, 222, 224 of system 200 may be similar in construction and/or functionality to similarly named and/or labeled components described in connection with FIG. 2A. . In some embodiments, fewer or more components than those shown in FIG. 2B may be included within system 200.According to some embodiments, the shaped portion 218 of the housing 210 can be or include a lip or ridge that extends into the housing 210 (eg, as shown in Figure 2B). The shaped portion 218 can be, for example, a portion of the housing 210 that is angled into the path of the impeller blades 222. In some embodiments, the shaped portion 218 can extend into the shaped portion 224 of the impeller blade 222. The shaped portion 224 of the impeller blade 222 can be, for example, or include a groove, a pawl, and/or other features configured to receive the shaped portion 218 of the housing 210. The shaped portion 224 of the impeller blades 222, for example, allows the shaped portion 218 of the housing 210 to extend into the path of the impeller blades 222 without inhibiting the rotation of the impeller blades 222. According to some embodiments, the shaped portion 224 of the impeller blades 222 forms grooves and/or cuts within the impeller blades 222 through which the shaped portions 218 of the housing 210 may extend. The impeller blades 222 may then rotate, for example, within the housing and/or about the shaped portion 218 of the housing 210.In some embodiments, the shaped portions 218, 224 may substantially prevent backflow from disrupting the flow of air into the inlets 212, 214. The shaped portions 218, 224, for example, may generally isolate the higher pressure zone within the housing 210 from the inlets 212, 214 (eg, by creating a pressure wall). The shaped portions 218, 224 may also or alternatively substantially restrict and/or prevent air within the housing 210 from exiting through the inlets 212, 214. 
According to some embodiments, the extension of the shaped portion 218 of the housing 210 into the shaped portion 224 of the impeller vane 222 may block the flow of air within the housing 210 to the inlets 212, 214.According to some embodiments, the shaped portion 218 of the housing 210 (eg, in FIGS. 2A and/or 2B) may extend around the circumference of the inlets 212, 214 (eg, assuming circular shaped inlets 212, 214). In some embodiments, such as the embodiment illustrated in FIG. 2B, the shaped portion 218 can extend along portions of the inlets 212, 214. The shaped portion 218 may, for example, not extend along a region adjacent the outlet 216 to prevent damage and/or obstruct air exiting the housing 210 via the outlet 216. In some embodiments, the shaped portion 218 can include a plurality of lips, edges, and/or other surfaces along the one or more portions of the inlets 212, 214 and/or adjacent one or more portions of the inlets 212, 214. Formed portion 218 can include, for example, a series of lips that extend along the circumference of inlets 212, 214 and are spaced apart. In some embodiments, the shaped portion 218 can be or include a portion of the device that is coupled, attached, and/or adjacent to the housing 210 (eg, a portion of a laptop housing). According to some embodiments, various shapes and/or configurations of the shaped portion 218 can be used to limit and/or substantially reduce or prevent backflow.Turning now to Figure 3A, Figure 3A shows a cross-sectional view of a system 300 in accordance with some embodiments. In some embodiments, system 300 can be similar to system 200 described in connection with any of Figures 2A and/or 2B. System 300 can include, for example, a housing 310 that includes a first inlet 312, a second inlet 314, and/or an outlet 316. In some embodiments, hub 320 and/or one or more impeller blades 322 may be disposed within housing 310. According to some embodiments, system 300 may cause return air 324. System 300 can also or alternatively include device 340 that defines one or more channels 334. In some embodiments, components 310, 312, 314, 316, 320, 322 of system 300 can be similarly constructed and/or functionally similar to those described in connection with any of Figures 2A and/or 2B. Or labeled parts. In some embodiments, fewer or more components than those shown in FIG. 3A may be included within system 300.According to some embodiments, as shown in FIG. 3A, device 340 may be or include an inlet lane to facilitate directing air into inlets 312, 314. In some embodiments, device 340 can be coupled to housing 310 (eg, using fasteners, adhesives, and/or other methods or devices). Device 340 may be or include a portion of housing 310, in accordance with some embodiments. Device 340 may be, for example, a protrusion and/or other feature integrated into housing 310. In some embodiments, device 340 may simply be the portion of housing 310 that extends from inlets 312, 314. According to some embodiments, device 340 may be part of an object that is detached from housing 310, such as a portion of a laptop casing.According to some embodiments, the passage 342 may allow the return air 334 to exit the housing 310 without substantially interfering with the flow of air into the inlets 312, 314. Channel 342, for example, may provide an outlet for return air 334 that is positioned toward the periphery of the inlet air flow, thereby reducing the amount of inlet air that may be affected by return air 334. 
In some embodiments, the channel 342 can simply be or include a gap between the device 340 and the housing 310. The gap may be, for example, a gap that extends at least partially around the base of the inlet channel defined by device 340. According to some embodiments, the channel 342 may also or alternatively include one or more apertures. Device 340, for example, can include one or more apertures defining a channel 342. Other configurations of device 340 and/or channel 342 may also or alternatively be used to limit backflow effects within system 300.Turning to FIG. 3B, for example, FIG. 3B illustrates a cross-sectional view of system 300 in accordance with some embodiments. In some embodiments, system 300 can be similar to system 300 described in connection with FIG. 3A. System 300 can include, for example, a housing 310 that includes a first inlet 312, a second inlet 314, and/or an outlet 316. In some embodiments, hub 320 and/or one or more impeller blades 322 may be disposed within housing 310. According to some embodiments, system 300 may cause return air 324. System 300 can also or alternatively include device 340 that defines one or more channels 334. In some embodiments, components 310, 312, 314, 316, 320, 322, 334, 340, 342 of system 300 can be similar in construction and/or functionality to those of FIGS. 2A, 2B, and/or 3A. A similarly named and/or labeled component described in any of the figures. In some embodiments, fewer or more components than those shown in FIG. 3B may be included within system 300.As shown in FIG. 3B, the channel 342 can be defined by the positioning of the device 340 relative to the housing 310. In some embodiments, device 340 can be part of housing 310. Device 340 may be, for example, an outer wall with a double walled housing 310. The outer wall device 340 can be spaced apart from the inner wall of the housing 310 to define a zone between the two walls. The zone may be, for example, or include a path that extends from one or more of the inlets 312, 314 to one or more locations outside of the housing 310. As shown in FIG. 3B, the path can be used as channel 342 to allow return air 334 to be directed out of housing 310 via a different path than inlets 312, 314. In other words, the passage 342 can promote and/or cause the return air 334 to exit the housing 310 without substantially interfering with the flow of air to the inlets 312, 314.In some embodiments, the edges of the housing 310 and/or device 340 can be angled and/or tapered to direct return air 334 into the passage 342 and/or otherwise promote return air 334 and inlet air flow. Separation. The tapered portion of the housing 310 and/or device 340 can be similar in construction and/or function to the shaped portion 218 of the housing 210 described in connection with Figures 2A and/or 2B. In some embodiments, other methods and/or apparatus may be utilized to form channel 342 and/or direct return air 334 into channel 342. According to some embodiments, device 340 may be separate from and/or attached to housing 310. Device 340, for example, can include one or more pieces and/or portions that define one or more discrete channels 342 along the upper surface (and/or lower surface) of housing 310. In some embodiments, device 340 may also or alternatively include one or more tubes, tubing, conduits, and/or other components that define passage 342 and/or otherwise associated with passage 342.Referring now to Figure 4A, Figure 4A shows a cross-sectional view of a system 400 in accordance with some embodiments. 
In some embodiments, system 400 can be similar to systems 200, 300 described in connection with any of FIGS. 2A, 2B, 3A, and/or 3B. System 400 can include, for example, a housing 410 that includes an inlet 412 and/or an outlet 416. In some embodiments, the hub 420 and/or one or more impeller blades 422 can be disposed within the housing 410. According to some embodiments, inlet air 430 may enter housing 410 via inlet 412, and/or outlet air 432 may exit housing 410 via outlet 416. System 400 can also or alternatively include one or more vanes 450. In some embodiments, components 410, 412, 416, 420, 422, 430, 432 of system 400 can be similar in construction and/or functionality to the similarly named and/or labeled components described in connection with any of FIGS. 2A, 2B, 3A, and/or 3B. In some embodiments, fewer or more components than those shown in FIG. 4A may be included within system 400. System 400 can be, for example, a blower fan with a single inlet 412 and/or a single outlet 416. In some embodiments, more inlets 412 and/or outlets 416 may be defined by system 400 and/or included within system 400. According to some embodiments, the impeller blades 422 may be rotated within the housing 410 to draw inlet air 430 into the inlet 412. Inlet air 430 may, for example, be drawn into inlet 412 and into housing 410, and may then be discharged as outlet air 432 via outlet 416. In some embodiments, the backflow effect may limit the efficiency and/or performance of system 400. Backflow may also or alternatively increase the level of sound associated with operation of system 400. According to some embodiments, vanes 450 may be included within system 400 to direct inlet air 430 into inlet 412. The vanes 450 can be shaped, for example, to direct the inlet air 430 into the inlet 412 in a generally smooth and/or uninterrupted manner. The directing of the inlet air 430 can, for example, reduce the backflow effect. In some embodiments, such as where system 400 is disposed between two objects and/or otherwise exposed to low headspace conditions (e.g., in a notebook and/or a portable computer), for example, the vanes 450 can manage the inlet air flow 430 to reduce turbulence, vortices, and/or other flows that would otherwise impede the inlet air flow 430. In some embodiments, the backflow from the housing 410 can be similarly reduced and/or eliminated by breaking up any swirling components of the backflow. In other words, the vanes 450 can block air flow along the edges of the inlet 412, where the flow may have significant backflow components. The vanes 450 can be constructed in any manner that is or becomes known or practicable. The one or more vanes 450 may be coupled to the housing 410, for example, in any configuration that directs the inlet air 430 toward the inlet 412 and/or otherwise reduces the backflow from the housing 410. In some embodiments, the vanes 450 can be or include one or more separate pieces or components that are attached to the housing 410. According to some embodiments, the vanes 450 may be part of the housing 410 and/or otherwise integrated with the housing 410. The vanes 450 can be or include, for example, one or more protrusions, ridges, lips, and/or other features of the housing 410. In some embodiments, the vanes 450 can simply be adjacent to and/or near the inlet 412. For example, where system 400 is a blower fan within a laptop computer (not shown), vane 450 may be a feature of the laptop computer located adjacent inlet 412. 
For example, when system 400 is installed in a laptop computer, vane 450 can be a feature of the laptop casing and/or another component within the laptop that directs inlet air flow 430 toward inlet 412. Turning to FIG. 4B, FIG. 4B illustrates a perspective view of system 400 in accordance with some embodiments. In some embodiments, system 400 can be similar to systems 200, 300, 400 described in connection with any of FIGS. 2A, 2B, 3A, 3B, and/or 4A. System 400 can include, for example, a housing 410 that includes an inlet 412 and/or an outlet 416. In some embodiments, the hub 420 and/or one or more impeller blades 422 can be disposed within the housing 410. System 400 can also or alternatively include one or more vanes 450. In some embodiments, components 410, 412, 416, 420, 422, 450 of system 400 can be similar in construction and/or functionality to the similarly named and/or labeled components described in connection with any of FIGS. 2A, 2B, 3A, 3B, and/or 4A. In some embodiments, fewer or more components than those shown in FIG. 4B may be included within system 400. In some embodiments, such as shown in FIG. 4B, the vanes 450 can be arranged in a generally circular pattern along the circumference of the inlet 412. The vanes 450 can extend, for example, from a region of the housing 410 near the periphery of the inlet 412 to a region near the center of the inlet 412. According to some embodiments, the vanes 450 may direct air from outside and/or above the housing 410 in a radial manner toward the center of the inlet 412. In some embodiments, the vanes 450 may reduce backflow effects by reducing turbulence within and/or around the region of the inlet 412. The vanes 450, for example, may substantially block any swirling components of the air flow emanating from within the housing 410 (e.g., due to the rotation of the impeller blades 422). Recirculation directed around the circumference of the inlet 412 may, for example, encounter the sides of the vanes 450, thereby substantially preventing the backflow from establishing a turbulent flow pattern in the region surrounding the inlet 412 and/or along the inlet 412. Other configurations of the vanes 450 can also be used without departing from some embodiments. For example, in accordance with some embodiments, multiple layers and/or configurations of vanes 450 may be used to direct air into the inlet 412. The vanes 450 may extend further from the inlet 412 than shown in FIG. 4B, and/or the vanes 450 may be shaped to capture and/or direct air as desired. In some embodiments, the vanes 450 can be generally conical to more smoothly direct air from around the housing 410 into the inlet 412 and/or to better reduce and/or capture swirl components of the backflow. Referring now to FIGS. 5A and 5B, FIGS. 5A and 5B show perspective views of a system 500 in accordance with some embodiments. In some embodiments, system 500 can be similar to systems 200, 300, 400 described in connection with any of FIGS. 2A, 2B, 3A, 3B, 4A, and/or 4B. System 500 can include, for example, a housing 510 that includes an inlet opening 512 and/or an outlet 516. In some embodiments, hub 520 and/or one or more impeller blades 522 can be disposed within housing 510. In some embodiments, components 510, 512, 516, 520, 522 of system 500 can be similar in construction and/or functionality to the similarly named and/or labeled components described in connection with any of FIGS. 2A, 2B, 3A, 3B, 4A, and/or 4B. 
In some embodiments, fewer or more components than those shown in FIGS. 5A and 5B may be included within system 500. In some embodiments, the shape of the inlet opening 512 can be modified to reduce backflow effects. While, for example, a typical blower fan opening may be generally circular and centered on the axis about which hub 520 and/or impeller blades 522 rotate, inlet opening 512 of system 500 may have a different shape and/or configuration to reduce the backflow effect. The centered and circular shaped inlets of a typical blower fan, for example, can work well in free-flow (e.g., test or laboratory) conditions, but under many circumstances and/or conditions, such as when installed in a computing device (for example, in a laptop computer), there may be a backflow problem. In some embodiments, the inlet opening 512 can be shaped to reduce the occurrence of backflow effects and/or backflow in various applications of the system 500. In the case of, for example, a mobile computing device in which system 500 is mounted with low headspace above inlet opening 512, inlet opening 512 can be at least partially non-circular to reduce backflow effects. According to some embodiments, a partially non-circular inlet opening 512 is provided to block the flow of return air around the inlet opening 512. In some embodiments, the non-circular portion and/or the plurality of non-circular portions of the inlet opening 512 can be positioned over a portion and/or portions of the housing that are known and/or expected to create a relatively large amount of backflow and/or recirculation. According to some embodiments, the shaping of the inlet opening 512 can be accomplished in a variety of ways. As shown in FIG. 5A, for example, the inlet opening 512 itself may be cut out (and/or molded or otherwise formed) in the housing 510 to form an at least partially non-circular shape. In some embodiments, a device can be mounted and/or coupled to the housing 510 to change the shape of the inlet opening 512. Any number and/or configuration of objects may be attached, fastened, and/or otherwise coupled to the housing 510 to cover one or more portions of the inlet opening 512. For example, where system 500 is installed in a particular environment, a zone and/or multiple zones of the housing that produce a substantial amount and/or the bulk of the backflow and/or recirculation may be determined and then covered to limit and/or substantially eliminate the backflow effect. In some embodiments, as shown in FIG. 5B, the inlet opening 512 can be generally circular in shape, but can be offset and/or eccentric relative to the axis about which the hub 520 and/or the impeller blades 522 rotate. The offset of the circular inlet opening 512, for example, can reduce the backflow effect. According to some embodiments, the inlet opening 512 can be offset from a region within the housing that is known and/or suspected to produce backflow. The inlet opening 512 can be offset, for example, from a region within the housing associated with a higher pressure zone within the housing 510 to reduce the likelihood that the higher pressure will result in backflow that may interfere with the inlet air flow. Turning now to FIG. 6, FIG. 6 shows a graph 600 illustrating an improvement of a system in accordance with some embodiments. In some embodiments, graph 600 may illustrate an improvement of one of the systems 200, 300, 400, 500 described in connection with any of FIGS. 2A, 2B, 3A, 3B, 4A, 4B, 5A, and/or 5B herein. 
For example, graph 600 may illustrate the difference between a typical blower fan and a blower fan with one or more modified inlets in accordance with embodiments described herein. According to some embodiments, a blower fan with a modified inlet can operate more efficiently and/or otherwise better than a typical blower fan. Reducing and/or eliminating backflow may also or alternatively reduce the level of noise associated with operation of the blower fan with the modified inlet. For example, in the case of a typical blower fan and a modified-inlet blower fan each mounted in a mobile computing device (and/or in similar mobile computing devices), graph 600 may illustrate the improved performance of the blower fan with the modified inlet. For example, at a particular static pressure ("P"), the modified blower fan can pass a higher flow ("Q") of air than the typical blower fan. A reduction in the recirculation effect caused by modifying the inlet can, for example, increase the performance and/or efficiency of the blower fan (and/or reduce the noise level). In some embodiments, the increased efficiency of the blower fan may allow more heat to be removed from the electronic components and/or from the mobile computing device itself. According to some embodiments, the inlet modification may provide a higher efficiency and/or performance benefit at a certain static pressure level. The inlet modification can, for example, be configured to have a greater effect within the range of pressures typically experienced during operation of the blower fan within the portable computing device. Graph 600 depicts a typical improvement over a typical blower fan obtained by varying the inlet geometry (e.g., with a partially non-circular inlet opening) in a simulated installation environment. Graph 600 is depicted for purposes of explanation and not limitation of the described embodiments. Other types, quantities, and/or sizes of improvements may be obtained with different fans, different inlet modifications, and/or different environments. According to some embodiments, utilizing combinations of the inlet modification techniques described herein, for example, may increase performance improvements over typical blower fans. Reference is now made to FIG. 7, which shows a block diagram of a system 700 in accordance with some embodiments. In some embodiments, system 700 can be similar to systems 200, 300, 400, 500 described in connection with any of FIGS. 2A, 2B, 3A, 3B, 4A, 4B, 5A, and/or 5B. System 700 can include, for example, a processor 702, a memory 704, a blower fan 706 with a backflow restriction inlet 708, and/or a battery 710. In some embodiments, components 702, 704, 706, 708, 710 of system 700 can be housed within an electronic device 712 (e.g., a personal digital assistant (PDA), laptop, and/or personal computer (PC)) and/or otherwise associated with electronic device 712. According to some embodiments, components 706, 708 of system 700 may be similar in construction and/or functionality to the similarly named components described in connection with any of FIGS. 2A, 2B, 3A, 3B, 4A, 4B, 5A, and/or 5B. In some embodiments, fewer or more components than those shown in FIG. 7 may be included within system 700. Processor 702 can be or include any number of processors, which can be any type or configuration of processors, microprocessors, and/or microengines that are or become known or available. In some embodiments, other electronic and/or electrical devices may be utilized in place of or in addition to processor 702. 
Processor 702 can be or include, for example, any device, object, and/or component that generates, stores, and/or requires removal of heat. In some embodiments, processor 702 can include one or more components of a cooling solution to cool processor 702. Components may include, for example, heat dissipation devices (e.g., integrated heat spreaders (IHS) and/or heat sinks), heat pipes, and/or other cooling components. According to some embodiments, the processor 702 may be an XScale(R) processor, such as an Intel(R) PXA270 XScale(R) processor. According to some embodiments, memory 704 may be or include one or more magnetic storage devices such as a hard disk, one or more optical storage devices, and/or solid state storage. Memory 704, for example, can store applications, programs, operations, and/or modules that store instructions to be executed by processor 702. Memory 704 may include any type of memory for storing data, such as single data rate random access memory (SDR-RAM), double data rate random access memory (DDR-RAM), or programmable read only memory (PROM), in accordance with some embodiments. In some embodiments, the blower fan 706 can be used to direct air (and/or other fluids) to the processor 702 and/or other components associated with the processor, such as cooling solution components. The blower fan 706 can, for example, direct air to the processor 702 to facilitate cooling of the processor 702. According to some embodiments, the blower fan 706 may also or alternatively direct air out of the electronic device 712 (not shown). Processor 702 and/or blower fan 706 may be driven by battery 710, in accordance with some embodiments. Battery 710 can be, for example, any type or configuration of battery capable of driving electronic device 712. According to some embodiments, the backflow restriction inlet 708 may be a system and/or device configured to prevent and/or reduce backflow in accordance with one or more of the embodiments described herein. The backflow restriction inlet 708 may, for example, include one or more vanes to direct air into the inlet of the blower fan 706 (e.g., according to system 400). The backflow restriction inlet 708 may also or alternatively include and/or define one or more passages through which the return flow may escape the blower fan 706 without substantially interfering with the inlet air flow (e.g., according to system 300). According to some embodiments, the backflow restriction inlet 708 may include a portion of the housing of the blower fan 706 that is configured to reduce backflow. Portions of the housing may, for example, be curved to form a lip and/or taper toward the impeller blades to reduce the likelihood of backflow and/or recirculation occurring (e.g., according to system 200). The backflow restriction inlet 708 may also or alternatively include an inlet opening to the blower fan 706 that is at least partially non-circular in shape and/or offset from an axis about which the impeller of the blower fan 706 rotates (e.g., according to system 500). In some embodiments, the backflow restriction inlet 708 can be a portion and/or a part of the blower fan 706. According to some embodiments, the backflow restriction inlet 708 may be a device that is coupled and/or attached to the blower fan 706. The backflow restriction inlet 708 may also or alternatively include a device and/or device portions in the vicinity of the blower fan 706. 
The backflow restriction inlet 708 can, for example, include a portion of the electronic device 712, such as a portion of the housing of the electronic device 712. In some embodiments, such as where the blower fan 706 includes more than one inlet, the backflow restriction inlet 708 can be associated with each inlet included with the blower fan. In some embodiments, multiple backflow restriction inlets 708 may be utilized. In the case of utilizing multiple backflow restriction inlets 708, the backflow restriction inlets 708 may function according to the same or different backflow limiting strategies (such as those described herein). According to some embodiments, the backflow restriction inlet 708 may increase the efficiency and/or performance of the blower fan 706, and/or may reduce the level of noise within the electronic device 712. The several embodiments described herein are for illustrative purposes only. A person skilled in the art will recognize that many alternatives to the embodiments described herein can be made without departing from the spirit and scope of the invention. It will be apparent to those skilled in the art from this disclosure that other embodiments can be practiced with modifications and variations that are limited only by the appended claims. |
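For the blower-fan comparison illustrated by graph 600 above, the improvement can be read as a horizontal shift of the fan's pressure-flow (P-Q) curve: at a fixed static pressure, the modified-inlet fan delivers more flow. The following Python sketch interpolates two hypothetical P-Q curves at one operating pressure; all curve points and the operating pressure are invented for illustration and are not data taken from graph 600 or from any measured fan.

```python
# Illustrative only: hypothetical fan curves, not measured data from graph 600.
# Each curve lists (static_pressure_Pa, flow_cfm) points for a blower fan,
# with flow falling as back-pressure rises.

def flow_at_pressure(curve, pressure):
    """Linearly interpolate the flow delivered at a given static pressure."""
    pts = sorted(curve)  # walk the curve in order of increasing pressure
    for (p_lo, q_lo), (p_hi, q_hi) in zip(pts, pts[1:]):
        if p_lo <= pressure <= p_hi:
            frac = (pressure - p_lo) / (p_hi - p_lo)
            return q_lo + frac * (q_hi - q_lo)
    raise ValueError("pressure outside the curve")

typical_fan  = [(0.0, 5.0), (20.0, 4.2), (40.0, 3.0), (60.0, 1.2)]   # (Pa, CFM)
modified_fan = [(0.0, 5.3), (20.0, 4.8), (40.0, 3.7), (60.0, 2.0)]   # (Pa, CFM)

p_operating = 30.0  # Pa, a hypothetical installed back-pressure
q_typ = flow_at_pressure(typical_fan, p_operating)
q_mod = flow_at_pressure(modified_fan, p_operating)
print(f"At P = {p_operating} Pa: typical {q_typ:.2f} CFM, modified {q_mod:.2f} CFM "
      f"({100 * (q_mod - q_typ) / q_typ:.0f}% more flow)")
```

Read this way, the modified-inlet curve simply sits to the right of the typical curve over the pressure range of interest, which is the kind of improvement graph 600 is meant to convey.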
A position detection system for use in association with computing applications is disclosed. The position detection system comprises at least one positional element for attaining a position and a positioning device operative to determine a position of the positional element. The positional element comprises at least one first emitter for emitting a substantially continuously modulated acoustic waveform decodable to fix the position, and a second emitter for emitting a synchronization signal. The positioning device comprises an arrangement of at least one of a first detector operative to detect the continuously modulated acoustic waveform in a manner permitting fixing of the position and outputting the waveform for computation, in a manner retentive of the position fixing ability and a second detector operative to detect the synchronization signal. The synchronization signal is transmitted within a time frame having a fixed duration and is continuously repeated. The time frame is known to the positioning element. The synchronization signal is a sequence of at least two synchronization sub-signals. Each synchronization sub-signal bears timing data for the continuously modulated acoustic waveform, thereby to improve accuracy of the fixing of the position. The at least two synchronization sub-signals allow the at least one positional element to derive clock synchronization data by correlating the timing data and the known time frame duration. |
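To make the role of the timing data in the abstract above concrete, the following Python sketch shows the basic range computation such a system could perform, under the common assumption that the synchronization signal (for example infrared or radio) arrives essentially instantaneously compared with sound; the speed-of-sound constant, timestamps and function names are illustrative assumptions, not values taken from this disclosure.

```python
# Minimal sketch: one synchronization sub-signal plus one acoustic arrival gives a range.
SPEED_OF_SOUND_M_PER_S = 343.0  # dry air at roughly 20 degrees C (assumed)

def range_from_arrivals(t_sync_rx: float, t_acoustic_rx: float,
                        timing_offset: float) -> float:
    """Distance (m) from detector timestamps, both taken on the positioning device's clock.

    t_sync_rx     -- reception time of the synchronization sub-signal (s)
    t_acoustic_rx -- reception time of the identifiable acoustic component (s)
    timing_offset -- offset, carried by the sub-signal, between emission of the
                     acoustic component and emission of the sub-signal (s)
    """
    time_of_flight = (t_acoustic_rx - t_sync_rx) + timing_offset
    return SPEED_OF_SOUND_M_PER_S * time_of_flight

# Example: acoustic component heard 2.05 ms after a sync sub-signal that was
# emitted 0.50 ms after that component left the positional element.
print(range_from_arrivals(0.0, 2.05e-3, 0.50e-3))  # ~0.87 m
```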
WHAT IS CLAIMED IS:1. A position detection system for use in association with computing applications, said position detection system comprising:at least one positional element for attaining a position, said positional element comprising:at least one first emitter for emitting a substantially continuously modulated acoustic waveform decodable to fix said position, and a second emitter for emitting a synchronization signal;a positioning device operative to determine a position of said positional element, said positioning device comprising:an arrangement of at least one of a first detector operative to detect said continuously modulated acoustic waveform in a manner permitting fixing of said position and outputting said waveform for computation, in a manner retentive of said position fixing ability; and a second detector operative to detect said synchronization signal;said synchronization signal being transmitted within a time frame having a fixed duration and being continuously repeated, said time frame being known to said positioning element, said synchronization signal being a sequence of at least two synchronization sub-signals, each synchronization sub-signal bearing timing data for said continuously modulated acoustic waveform, thereby to improve accuracy of said fixing of said position, said at least two synchronization sub-signals allow said at least one positional element to derive clock synchronization data by correlating said timing data and said known time frame duration.2. A position detection system according to claim 1 wherein said acoustic waveform is an ultrasonic waveform.3. A position detection system according to claim 1 wherein said synchronization signal is an electromagnetic signal.4. A position detection system according to claim 1 wherein said synchronization signal is an infrared signal.5. A position detection system according to claim 1 wherein said synchronization signal is a radio signal.6. A position detection system according to claim 1 wherein said timing data comprises a measure of time elapsed between an identifiable component of said acoustic waveform and time of transmission of said synchronization signal.7. A position detection system according to claim 6 wherein:said positional element additionally comprises a first clock;said positioning device additionally comprises a second clock; and said synchronization signal comprises clock synchronization data useful to synchronize between said first clock and said second clock.8. A position detection system according to claim 1 wherein said synchronization signal additionally comprises identification data of said positional element.9. A position detection system according to claim 1 wherein said synchronization signal is transmitted within at least one time slot, said one time slot being randomly selected from a fixed number of time slots provided within said time-frame.10. A position detection system according to claim 9 wherein said synchronization signal additionally comprises identification data of said time-frame and identification data of said time slot within said time-frame bearing said synchronization signal.11. A position detection system according to claim 10 wherein said time-frame identification data is a counter of said time-frames and said time slot identification data is a location numeral of said time slot within said time-frame bearing said synchronization signal.12. 
A position detection system according to claim 10 wherein said time-frame has a duration, said duration being known to said positioning device and wherein data of said clock synchronization is derived by said positioning device by correlating a received time-frame duration and said known time-frame duration.13. A position detection system according to claim 12 wherein said clock synchronization data is derived by linearly interpolating a sequence of respective received time-frame durations and said known time-frame duration.14. A position detection system according to claim 12 wherein said clock synchronization data is derived by using a phase lock loop between a sequence of respective received time-frame durations and said known time-frame duration.15. A position detection system according to claim 1 wherein said acoustic waveform is selected from a predefined set of acoustic waveforms wherein said synchronization signal additionally comprises an identification data of said selected acoustic waveform.16. A position detection system according to claim 1 wherein said modulation is an amplitude modulation.17. A position detection system according to claim 1 wherein said modulation is a frequency modulation.18. A position detection system according to claim 1 wherein said modulation is a phase modulation.19. A position detection system according to claim 1 wherein said synchronization signal comprises an error correction code.20. A position detection system according to claim 19 wherein said error correction code comprises at least one cyclic redundancy character.21. A position detection system according to claim 1 wherein said synchronization signal additionally comprises identification data of a change of a status of at least one discrete input.22. A position detection system according to claim 21 wherein said discrete input is a state of a switch.23. A position detection system according to claim 21 wherein said synchronization signal additionally comprises a measure of time elapsed between said change of status of said discrete input and transmission of said synchronization signal.24. A position detection system according to claim 23 wherein said measure of elapsed time comprises a count of said synchronization signals transmitted between said change of status of said discrete input and said transmission of said synchronization signal.25. A position detection system according to claim 24 wherein said count of said synchronization signals is limited and when said limit is reached said count remains at said limit until a next occurrence of a change of status of a switch.26. A position detection system according to claim 1 wherein said synchronization signal additionally comprises at least one measurement data of at least one of an analog input and a digital input.27. A position detection system according to claim 1 wherein said first detector arrangement comprises a single detector.28. A position detection system according to claim 1 wherein said first detector arrangement comprises at least two detectors and is operative to determine said position in two dimensions.29. A position detection system according to claim 1 wherein said first detector arrangement comprises at least three detectors and is operative to determine said position in three dimensions.30. 
A position detection system according to claim 1 wherein said positional element is associated with at least one of a computer pointing device and a writing device.31. A position detection system according to claim 1 wherein said positional element is associated with at least one of a mobile device and a portable device.32. A position detection system according to any one of the preceding claims and wherein said positional element is a plurality of positional elements.33. A position detection method for measuring a position of a positional element by a positioning device, said method comprising the steps of:providing a first clock at the positional element;emitting a substantially continuously modulated acoustic waveform at said position of said positional element, said waveform synchronized with said first clock and decodable to fix said position,emitting a synchronization signal at said position of said positional element, said synchronization signal being a sequence of at least two synchronization sub-signals, each synchronization sub-signal bearing timing data for said continuously modulated acoustic waveform, said synchronization signal being transmitted within a time frame having a fixed duration and being continuously repeated, said time frame being known to said positioning element, said timing data synchronized with said first clock;providing a second clock at said positioning device;receiving said acoustic waveform by said positioning device, via an arrangement of at least one of a first detector operative to detect said continuously modulated acoustic waveform in a manner permitting fixing of said position and outputting said waveform for computation, in a manner retentive of said position fixing ability;receiving said synchronization signal by said positioning device;deriving clock synchronization data from said synchronization signal by correlating said timing data and said time frame being known to said positioning element;synchronizing said second clock with said first clock by said positioning device according to said clock synchronization data; and computing said position of said positional device using said timing data and acoustic waveform.34. A position detection method according to claim 33 wherein said acoustic waveform is an ultrasonic waveform.35. A position detection method according to claim 33 wherein said synchronization signal is an electromagnetic signal.36. A position detection method according to claim 33 wherein said synchronization signal is an infrared signal.37. A position detection method according to claim 33 wherein said synchronization signal is a radio signal.38. A position detection method according to claim 33 wherein said timing data comprises a measure of time elapsed between an identifiable component of said acoustic waveform and time of transmission of said synchronization signal.39. A position detection method according to claim 38 wherein:said positional element additionally comprises a first clock;said positioning device additionally comprises a second clock; and said synchronization signal comprises a clock synchronization data useful to synchronize between said first clock and said second clock.40. A position detection method according to claim 33 wherein said synchronization signal additionally comprises identification data of said positional element.41. A position detection method according to claim 33 wherein said emitting of said synchronization signal comprises:
providing a time-frame;providing a fixed number of time slots within each said time-frame; randomly selecting one of said time slots within each said time-frame; and emitting said synchronization signal within said selected time slot.42. A position detection method according to claim 41 wherein said synchronization signal additionally comprises identification data of said time-frame and identification data of said time slot within said time-frame bearing said synchronization signal.43. A position detection method according to claim 42 wherein said time-frame identification data is a counter of said time-frames and said time slot identification data is a location numeral of said time slot within said time-frame bearing said synchronization signal.44. A position detection method according to claim 42 and additionally comprising:providing said time-frame duration to said positioning device in advance;deriving data of said clock synchronization by said positioning device by correlating said received time-frame duration and a known time-frame duration.45. A position detection method according to claim 44 wherein said step of deriving said clock synchronization data is performed by linearly interpolating a sequence of received time-frame durations and said known time-frame duration.46. A position detection method according to claim 44 wherein said step of deriving clock synchronization data is performed by using a phase lock loop between a sequence of received time-frame durations and said known time-frame duration.47. A position detection method according to claim 33 wherein said step of emitting said acoustic waveform additionally comprises randomly selecting said acoustic waveform from a predefined set of acoustic waveforms; and wherein said step of emitting a synchronization signal additionally comprises emitting identification data of said selected acoustic waveform.48. A position detection method according to claim 47 wherein said acoustic waveform is a continuously modulated acoustic waveform.49. A position detection method according to claim 48 wherein said modulation is a frequency modulation.50. A position detection method according to claim 48 wherein said modulation is a phase modulation.51. A position detection method according to claim 33 wherein said synchronization signal comprises an error correction code.52. A position detection method according to claim 51 wherein said error correction code comprises at least one cyclic redundancy character.53. A position detection method according to claim 33 wherein said step of emitting said synchronization signal additionally comprises emitting identification data of a change of a status of at least one discrete input.54. A position detection method according to claim 53 wherein said discrete input is a state of a switch.55. A position detection method according to claim 53 wherein said step of emitting said synchronization signal additionally comprises emitting a measure of time elapsed between said change of status of said discrete input and a transmission of said synchronization signal.56. 
A position detection method according to claim 55 wherein said measure of elapsed time comprises a count of a number of said synchronization signals transmitted between said change of status of said discrete input and said transmission of said synchronization signal.57. A position detection method according to claim 56 wherein said count of said synchronization signals is limited and when said limit is reached said count remains at said limit until a next occurrence of a change of status of a switch.58. A position detection method according to claim 33 wherein said step of emitting said synchronization signal additionally comprises emitting at least one measurement data of at least one of an analog input and a digital input.59. A position detection method according to claim 33 wherein said step of receiving said acoustic waveform at said first detector arrangement comprises receiving said acoustic waveform via at least three first detectors.60. A position detection method according to claim 33 wherein said step of receiving said acoustic waveform at said first detector arrangement comprises receiving said acoustic waveform via at least two first detectors and wherein said step of computing said position of said positional device comprises fixing said position in two dimensions.61. A position detection method according to claim 33 wherein said step of receiving said acoustic waveform at said first detector arrangement comprises receiving said acoustic waveform via at least three first detectors and wherein said step of computing said position of said positional device comprises fixing said position in three dimensions.62. A position detection method according to any one of claims 33-61 and wherein said positional element comprises a plurality of positional sub-elements.63. A position detection method according to claim 39 wherein said step of emitting a sequence of synchronization signals starts at a predefined delay after emitting said identifiable component of said acoustic waveform, wherein said predefined delay is known to said positioning device, and wherein said step of synchronizing said second clock with said first clock uses said predefined delay to synchronize said second clock and said first clock.64. 
A position detection system for use in association with computing applications, the system comprising: a positional element for attaining a position and comprising a first emitter and a second emitter each for emitting a continuously modulated acoustic waveform decodable to fix said position, the emitters being a predetermined distance apart, said two emitters sending orthogonal codes; and a detector arrangement for detecting said waveforms in a manner permitting fixing of said position and permitting determination of an attitude of said positional element, the detector arrangement further being operable to output said waveforms for computation, in a manner retentive of said position fixing ability;said positional element further comprising a third emitter for emitting a synchronization signal;said detector arrangement further comprising an additional detector operative to detect said synchronization signal, said synchronization signal being transmitted within a time frame having a fixed duration and being continuously repeated, said time frame being known to said positioning element;said synchronization signal being a sequence of at least two synchronization sub-signals, each synchronization sub-signal bearing timing data for said continuously modulated acoustic waveform and respective pressure data; and said detector arrangement being operative to estimate a virtual straight line connecting said first emitter, said second emitter and a virtual point on a screen associated with said computing application wherein said at least two synchronization sub-signals allow said positional element to derive clock synchronization data by correlating said timing data and said known time frame duration.65. A position detection system substantially as herein described or exemplified, with reference to the accompanying drawings.66. A position detection method according to claim 33 substantially as herein described or exemplified. |
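For orientation only, the dependent claims above (element identification in claim 8, frame and slot identification in claims 10-11, the elapsed-time measure in claim 6, waveform-set identification in claim 15 and the cyclic-redundancy check of claims 19-20) suggest a small fixed-format payload for each synchronization sub-signal. The Python sketch below packs and checks such a payload; every field name, width and ordering is an illustrative assumption rather than an encoding defined by this document.

```python
# Hypothetical sub-signal payload: element ID, frame counter, slot numeral,
# timing measure (microseconds) and waveform index, protected by a truncated CRC.
import struct
import zlib

def pack_sub_signal(element_id: int, frame_counter: int, slot_numeral: int,
                    timing_us: int, waveform_index: int) -> bytes:
    body = struct.pack(">BHBIB", element_id, frame_counter, slot_numeral,
                       timing_us, waveform_index)
    crc = zlib.crc32(body) & 0xFFFF  # two CRC characters appended at the end
    return body + struct.pack(">H", crc)

def unpack_sub_signal(frame: bytes):
    body, (crc,) = frame[:-2], struct.unpack(">H", frame[-2:])
    if zlib.crc32(body) & 0xFFFF != crc:
        raise ValueError("CRC mismatch - discard this sub-signal")
    return struct.unpack(">BHBIB", body)

msg = pack_sub_signal(element_id=3, frame_counter=1021, slot_numeral=5,
                      timing_us=480, waveform_index=2)
print(unpack_sub_signal(msg))  # (3, 1021, 5, 480, 2)
```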
WO 2005/111653 PCT/IL2005/000509. ACOUSTIC ROBUST SYNCHRONIZATION SIGNALING FOR ACOUSTIC POSITIONING SYSTEM. FIELD AND BACKGROUND OF THE INVENTION. The present invention relates to an acoustic positioning method and system and, more particularly, but not exclusively, to a method and system for synchronization of transmissions between a positional element and a positioning device. The application of positioning, or location awareness, is commonly divided according to the size of the space in which the positional element should be located. The space size ranges from the personal area, whose range is typically up to 1 meter, the room area, whose range is typically up to 10 meters, the local area, such as a warehouse, whose range is up to 100 meters, and the wide area, which is typically an open space. Some applications require positioning in three dimensions. Other applications, typically when the object is known to be located close enough to a known surface, such as the floor, require positioning in two dimensions only, and some applications require only the measurement of the distance between the positional element and the positioning device. There are several methods for locating elements and most of them are based on measuring the time of arrival of a signal transmitted or reflected from the positional element. There are numerous applications for small space positioning, that is, positioning within personal, room and local areas. The main applications involve pointing devices for computer interaction, robotics and machine control, locating portable home appliances and especially toys, locating inventory in warehouses, hospital wards, etc. 1. Personal Area Positioning - Computer pointing devices, digital pens and touch screens. 3-D mouse: A 3D mouse uses electromagnetic or ultrasonic positioning techniques to indicate its position in 3-D space to a monitoring device. The cordless mice in use today use Bluetooth and similar radio and IR transmitters for wireless connectivity. The radio or IR only takes care of the wireless connectivity, that is, the signaling issue. Positioning generally involves a movement tracker in the mouse itself, which may be optically based. Simple movement tracking gives a 2D solution. 3D solutions can be produced, for example, using either of the following. Acoustic: A mouse emits ultrasonic and IR pulses that are received by a desktop receiver. By measuring the time of flight, triangulation can be performed. IR sensors: A mouse emits IR pulses whose angles are measured by a desktop receiver. Several angle sensors allow three-dimensional triangulation, thus obtaining the spatial position. PC tablets and styluses: A PC tablet uses a digital pen or stylus. The stylus enables interactions including writing directly on a graphic tablet, PC tablet, PC screen, PDA screen, cellphone screen and on any other computer enabled surface, screen or tablet. Available solutions work with passive or active electromagnetic or acoustic technologies. Digital pens: Digital pens are pointing devices used for electronic detection of handwriting or hand drawing, or for general pointing. The digital pens generally use technologies such as acoustics, IR and light. Other versions use accelerometers that sense accelerations and transmit the data to a positioning assembly. Another version is a camera that analyzes small printing codes on special paper to determine its position. 
Other pens use electromagnetic (including passive and active) and other technologies for their operation. Some of the digital pens are an autonomous unit, meaning the pen works independently, providing its own fully processed co-ordinates as an output, and such is typical of optical and digital camera based units. Others, especially acoustic and electromagnetic devices, require a receiving or sensing unit. Digital pens are widely used with PCs, laptops, PDAs, cellular telephones, electronic books, and the like. Touch screens: Touch screens generally comprise sensors embedded within or near a computer screen in order to receive input from the screen. Some technologies include coating the screen with special material that can sense physical contact, the material featuring electrical resistance, electrical capacitance or a surface acoustic wave (SAW) material. Other technologies include embedding of sensors around the screen. The sensors may be IR, acoustic, SAW and others. 2. Room Area Positioning - Interactive whiteboards and toys. Interactive whiteboards: The interactive whiteboard is a whiteboard that captures written data from the board into an associated computer. One of the common technologies in this field is acoustic positioning: a marker is placed in a sleeve that transmits beacon signals which are picked up and analyzed by a dedicated device also placed near the whiteboard. In some cases an IR or electromagnetic signal is transmitted along with the acoustic beacon for better accuracy and for simplicity. Another common technology is electromagnetic: the above mentioned marker sleeve transmits an electromagnetic field, which is picked up by special loops on the back of the whiteboard. Technology using electrical resistance is also used. In such a case the surface of the whiteboard is coated with resistive material. Pressure is applied to the coating, and the pressure causes a local change in the resistive properties of the board. From the changes, the controller is able to obtain an x, y position from the applied pressure. Technology using electrical capacitance, which is similar to the resistive technology, can also be used. Again, pressure is used, this time to change the capacitance properties of the board. The controller is then able to obtain the x, y position.
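The acoustic approaches above (the ultrasonic 3-D mouse and the acoustic whiteboard sleeve) reduce to the same geometry: each detector converts a time of flight into a range, and the position is fixed where the range circles intersect. The following Python sketch shows such a two-receiver, two-dimensional fix; the receiver spacing, the ranges and the choice of the y >= 0 intersection are illustrative assumptions, not parameters taken from this disclosure.

```python
# Minimal two-circle intersection ("trilateration") on the writing plane.
import math

def fix_2d(receiver_a, receiver_b, range_a, range_b):
    """Return the (x, y) intersection of the two range circles lying on the y >= 0 side."""
    ax, ay = receiver_a
    bx, by = receiver_b
    base = math.dist(receiver_a, receiver_b)      # distance between the two receivers
    # Solve in a frame where A is the origin and B lies on the positive x axis.
    x = (range_a**2 - range_b**2 + base**2) / (2 * base)
    y = math.sqrt(max(range_a**2 - x**2, 0.0))    # keep the y >= 0 branch
    # Rotate/translate back into the original frame.
    ux, uy = (bx - ax) / base, (by - ay) / base   # unit vector from A to B
    return (ax + x * ux - y * uy, ay + x * uy + y * ux)

# Example: receivers 20 cm apart along the x axis, ranges obtained from time of flight.
print(fix_2d((0.0, 0.0), (0.20, 0.0), 0.25, 0.15))   # ~(0.20, 0.15)
```

A third receiver resolves the remaining mirror ambiguity and extends the same computation to three dimensions.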
At a higher level it can show a general direction.Accelerometers — the disadvantages of accelerometers are discussed above in 20 the section on pointing devices.Acoustic - Acoustic devices are relatively expensive. Only a single unit can be used in the same environment, energy use is relatively high, and the devices are difficult to miniaturize.25 Local Area Positioning - Robotics and Machine ControlIn recent years several new robotics products have reached the prototype stage and beyond. The robotics products include freely moving robots for different applications. The applications include lawn mowers, pool cleaners, spy and bomb disposal robots with cameras and remote control and many more. Such robots 30 typically use their own sensing together with pre-programming to find their way around in their surrounding environment.Possible new applications include an autonomous vacuum cleaner. One or more vacuum cleaners may roam automatically around the premises, vacuuming dirtWO 2005/1116535PCT/IL2005/000509and transferring the dirt to either fixed location units or roaming units. The unit that vacuums may autonomously locate the receiving unit to which it delivers the dirt and dock therewith in order to deliver the dirt.5 DrawbacksAll the technologies mentioned above, except the acoustic, need sensors on the positioning plane: the electromagnetic solution needs antenna loops on the back of the board, the pen with the camera needs special digitized paper and the touch-screens need special coatings. The need for sensors adds both to the cost of the final product, 10 and furthermore provides an unnatural restriction on use in that it does not allow the user to use arbitrary planes, such as a cluttered desk surface, as a working platform.Some of the technologies are limited to two-dimensional locations. But even those that can manage a third dimension do not currently provide accurate information of the third dimension. For example a stylus based on electromagnetic detection can 15 be detected when hovering above a screen, but it is not possible to tell accurately how high it is. The detector simply determines that it is present.There are other drawbacks specific.to certain of the technologies. For instance, IR positioning has difficulties working with direct sun. 
Existing acoustic solutions have serious limitations in acoustically noisy environments, in particular in the all-20 important industrial environment, where ultrasound noise is most common.Solutions that use wireless protocols as Bluetooth may suffer from protocol collisions, and from interference with other wireless equipment, such as WLAN equipment.All the technologies that are based on measuring the time of flight of a signal 25 transmitted by the positional element and received by the positioning device require accurate synchronization between the transmitter and the receiver to compensate for their clocks inaccuracy and drift.Acoustic positioning methods and devices are known in the art, including, but not limited to, the following US patents: 6,876,356 ; 6,875,933 ; 6,841,742 ; 30 6,822,641 ; 6,731,270 ; 6,724,371 ; 6,717,073 ; 6,654,008 ; 6,633,280 ; 6,628,270 ; 6,556,694 ; 6,539,363 ; 6,535,206 ; 6,529,189 ; 6,517,266 ; 6,501,461 ; 6,456,567 ; 6,456,280 ; 6,424,340 ; 6,414,673 ; 6,404,416 ; 6,373,003 ; 6,335,723 ; 6,326,565 ; 6,313,825 ; 6,310,615 ; 6,300,580 ; 6,292,180 ; 6,292,177 ; 6,266,051 ; 6,265,676 ;66,229,526 ; 6,211,863 ; 6,195,446 ; 6,191,778 ; 6,177,927 ; 6,153,836 ; 6,147,681 ; 6,144,367 ; 6,124,847 ; 6,111,565 ; 6,108,271 ; 6,104,387 ; 6,100,877 ; 6,067,080 ; 5,977,958 ; 5,907,130 ; 5,883,338 ; 5,872,743 ; 5,866,856 ; 5,818,421 ; 5,798,755 ; 5,793,361 ; 5,768,616 ; 5,750,941 ; 5,717,168 ; 5,657,054 ; 5,657,053 ; 5,635,951 ; 5,581,269 ; 5,557,301 ; 5,548,092 ; 5,539,159 ; 5,525,764 ; 5,517,579 ; 5,515,051 ; 5,500,492 ; 5,478,976 ; 5,308,936 ; 5,144,594 ; 5,128,660 ; 5,111,005 ; 5,054,005 ; 5,007,085 ; 4,991,148 ; 4,965,635 ; 4,814,552.The reader is also referred to applicant's prior application No. PCT/IL03/00309 filed April 14, 2003, the contents of which are hereby incorporated by reference.All the problems discussed above are further enhanced in the multi user environment, where one or more positioning devices have to locate several positional elements, and even more so, when the positional elements may roam between positioning devices.There is thus a widely recognized need for, and it would be highly advantageous to have an infrared communications system and method devoid of the above limitations.SUMMARY OF THE INVENTIONAccording to one aspect of the present invention, there is provided a position detection system for use in association with computing applications, said position detection system comprising: at least one positional element for attaining a position, said positional element comprising: at least one first emitter for emitting a substantially continuously modulated acoustic waveform decodable to fix said position, and a second emitter for emitting a synchronization signal; a positioning device operative to determine a position of said positional element, said positioning device comprising: an arrangement of at least one of a first detector operative to detect said continuously modulated acoustic waveform in a manner permitting fixing of said position and outputting said waveform for computation, in a manner retentive of said position fixing ability; and a second detector operative to detect said synchronization signal; said synchronization signal being transmitted within a time frame having a fixed duration and being continuously repeated,said time frame being known to said positioning element, said synchronization signal being a sequence of at least two synchronization sub-signals, each synchronization sub-signal bearing timing data for said 
continuously modulated acoustic waveform, thereby to improve accuracy of said fixing of said position, said at least two synchronization sub-signals allow said at least one positional element to derive clock synchronization data by correlating said timing data and said known time frame duration.intellectual property office of n.z- 6 JUN 2008RECEIVED7According to an embodiment of the present invention, there is provided a position detection system wherein the acoustic waveform is an ultrasonic waveform.According to yet another embodiment of the present invention, there is provided a position detection system wherein the synchronization signal is an electromagnetic signal.According to still another embodiment of the present invention, there is provided a position detection system wherein the synchronization signal is an infrared signal.Further, according to another embodiment of the present invention, there is provided a position detection system wherein the synchronization signal is a radio signal.Still further, according to another embodiment of the present invention, there is provided a position detection system wherein the timing data contains a measure of time elapsed between an identifiable component of the acoustic waveform and time of transmission of the synchronization signal.Even further, according to another embodiment of the present invention, there is provided a position detection system wherein the positional element additionally contains a first clock and the positioning device additionally containing a second clock, and the synchronization signal contains a clock synchronization data useful to synchronize between the first clock and the second clock.Additionally, according to another embodiment of the present invention, there is provided a position' detection system wherein the synchronization signal additionally contains identification data of the positional element.Additionally, according to yet another embodiment of the present invention, there is provided a position detection system wherein the synchronization signal is transmitted within at least one time slot, the one time slot being randomly selected from a fixed number of time slots provided within a time-frame.Additionally, according to still another embodiment of the present invention, there is provided a position detection system wherein the synchronization signal additionally contains identification data of the time-frame and identification data of the time slot within the time-frame bearing the synchronization signal.According to another embodiment of the present invention, there is provided a position detection system wherein the time-frame identification data is a counter of the time-frames and the time slot identification data is a location numeral of the time slot within the time-frame bearing the synchronization signal.According to yet another embodiment of the present invention, there is provided a position detection system wherein the time-frame duration is known to the positioning device and the clock synchronization data is derived by the positioning device by correlating the received time-frame duration and the known time-frame ratonv-tual property office of n £- 6 JUN 2008 Rcr>cii/-8According to still another embodiment of the present invention, there is provided a position detection system wherein the clock synchronization data is derived by linearly interpolating a sequence of the received time-frame durations and the known time-frame duration.Further, according to another embodiment of the present 
invention, there is provided a position detection system wherein the clock synchronization data is derived by using a phase lock loop between a sequence of the received time-frame durations and the known time-frame duration.Still further, according to another embodiment of the present invention, there is provided a position detection system wherein the acoustic waveform is selected from a predefined set of acoustic waveforms wherein the synchronization signal additionally contains an identification data of the selected acoustic waveform.Even further, according to another embodiment of the present invention, there is provided a position detection system wherein the modulation is an amplitude modulation, a frequency modulation or a phase modulation.Additionally, according to yet another embodiment of the present invention, there is provided a position detection system wherein the synchronization signal contains an error correction code.Additionally, according to still another embodiment of the present invention, there is provided a position detection system wherein the error correction code contains at least one cyclic redundancy character.According to another embodiment of the present invention, there is provided a position detection system wherein the synchronization signal additionally contains at least one identification data of a change of a status of at least one discrete input.According to yet another embodiment of the present invention, there is provided a position detection system wherein the discrete input is a state of a switch.According to still another embodiment of the present invention, there is provided a position detection system wherein the synchronization signal additionally contains a measure of time elapsed between the change of status of the discrete input and the transmission of the synchronization signal.Further, according to another embodiment of the present invention, there is provided a position detection system wherein the measure of elapsed time contains a count of the synchronization signals transmitted between the change of status of the discrete input and the transmission of the synchronization signal.intellectual property office of n.z.- 6 JUN 20089Still further, according to another embodiment of the present invention, there is provided a position detection system wherein the count of the synchronization signals is limited and when the limit is reached the count remains at the limit until a next occurrence of a change of status of a switch.Even further, according to another embodiment of the present invention, there is provided a position detection system wherein the synchronization signal additionally contains at least one measurement data of at least one of an analog input and a digital input.Additionally, according to another embodiment of the present invention, there is provided a position detection system wherein the first detector arrangement contains a single detector.Additionally, according to yet another embodiment of the present invention, there is provided a position detection system wherein the first detector arrangement contains at least two detectors and is operative to determine the position in two dimensions.Additionally, according to still another embodiment of the present invention, there is provided a position detection system wherein the first detector arrangement contains at least three detectors and is operative to determine the position in three dimensions.According to another embodiment of the present invention, there is provided a 
position detection system wherein the positional element is associated with at least one of a computer pointing device and a writing device.According to yet another embodiment of the present invention, there is provided a position detection system wherein the positional element is associated with at least one of a mobile device and a portable device.According to still another embodiment of the present invention, there is provided a position detection system as described above and wherein the positional element is a plurality of positional elements.According to another aspect of the present invention, there is provided a position detection method for measuring a position of a positional element by a positioning device, said method comprising the steps of: providing a first clock at the positional element; emitting a substantially continuously modulated acoustic waveform at said position of said positional element, said waveform synchronized with said first clock and decodable to fix said position, emitting a synchronization signal at said position of said positional element, said synchronization signal being a sequence of at least two synchronization sub-signals, each synchronization sub-signal bearing timing data for said continuously modulated acoustic waveform, said synchronization signal being transmitted within a tijne frame having a fixed duration and being continuously repeated, said time fh me being known to inftllectual property office of n.z- 6 JUN 200810said positioning element, said timing data synchronized with said first clock; providing a second clock at said positioning device; receiving said acoustic waveform by said positioning device, via an arrangement of at least one of a first detector operative to detect said continuously modulated acoustic waveform in a manner permitting fixing of said position and outputting said waveform for computation, in a manner retentive of said position fixing ability; receiving said synchronization signal by said positioning device, deriving clock synchronization data from said synchronization signal by correlating said timing data and said time frame being known to said positioning element; synchronizing said second clock with said first clock by said positioning device according to said clock synchronization data; and computing said position of said positional device using said timing data and acoustic waveform.According to an embodiment of the present invention, there is provided a position detection method wherein the step of emitting the synchronization signal contains the steps of: providing a time-frame, providing a fixed number of time slots within each the time-frame, randomly selecting one the time slot within each the timeframe, emitting the synchronization signal within at least one time slot.Further, according to another embodiment of the present invention, there is provided a position detection method additionally containing the steps of: providing the time-frame duration to the positioning device in advance, and deriving the clock synchronization data by the positioning device by correlating the received time-frame duration and the known time-frame duration.Additionally, according to another embodiment of the present invention, there is provided a position detection method wherein the step of deriving the clock synchronization data is performed by linearly interpolating a sequence of the received time-frame durations and the known time-frame duration.Additionally, according to yet another embodiment of the present invention, 
there is provided a position detection method wherein the step of deriving clock synchronization data is performed by using a phase lock loop between a sequence of the received time-frame durations and the known time-frame duration.Additionally, according to still another embodiment of the present invention,there is provided a position detection method wherein the step of emitting the acoustic waveform additionally contains randomly selecting the acoustic waveform from a predefined set of acoustic waveforms; and wherein the step of emitting synchronization signal additionally contains emitting an identification data of the selected acoustic waveform.[intellectual property i office of n.z- 6 JUN 2008RECEIVED11According to another embodiment of the present invention, there is provided a position detection method wherein the step of emitting the synchronization signal additionally contains emitting at least one identification data of a change of a status of at least one discrete input.According to yet another embodiment of the present invention, there is provided a position detection method wherein the step of emitting the synchronization signal additionally contains emitting a measure of time elapsed between the change of status of the discrete input and the transmission of the synchronization signal.According to still another embodiment of the present invention, there is provided a position detection method wherein the step of emitting the synchronization signal additionally contains emitting at least one measurement data of at least one of an analog input and a digital input.Further, according to another embodiment of the present invention, there is provided a position detection method wherein the step of receiving the acoustic waveform at the first detector arrangement contains receiving the acoustic waveform at least three first detectors.Still further, according to another embodiment of the present invention, there is provided a position detection method wherein the step of receiving the acoustic waveform at the first detector arrangement contains receiving the acoustic waveform via at least two first detectors and wherein the step of computing the position of the positional device contains fixing the position in two dimensions.Even further, according to another embodiment of the present invention, there is provided a position detection method wherein the step of receiving the acoustic waveform at the first detector arrangement contains receiving the acoustic waveform via at least three first detectors and wherein the step of computing the position of the positional device contains fixing the position in three dimensions.Additionally, according to another embodiment of the present invention, there is provided a position detection method wherein the step of emitting a sequence of synchronization signals starts at a predefined delay after emitting the identifiable component of the acoustic waveform, wherein the predefined delay is known to the positioning device, and wherein the step of the synchronizing second clock with the first clock uses the predefined delay to synchronize the second clock and the first clock.According to a further aspect of the present invention, there is provided a position detection system for use in association with computing applications, the system comprising: a positional element for attaining a position and comprising a first emitter and a second emitter each for emitting a continuously modulated acoustic waveform decodable to fix said position, the emitters 
being a predetermined distance apart, said two emitters12sending orthogonal codes; and a detector arrangement for detecting said waveforms in a manner permitting fixing of said position and permitting determination of an attitude of said positional element, the detector arrangement further being operable to output said waveforms for computation, in a manner retentive of said position fixing ability; said positional element further comprising a third emitter for emitting a synchronization signal; said detector arrangement further comprising an additional detector operative to detect said synchronization signal, said synchronization signal, being transmitted within a time frame having a fixed duration and being continuously repeated, said time frame being known to said positioning element, said synchronization signal; said synchronization signal being a sequence of at least two synchronization sub-signals, each synchronization sub-signal bearing timing data for said continuously modulated acoustic waveform and respective pressure data; and said detector arrangement being operative to estimate a virtual straight line connecting said first emitter, said second emitter and a virtual point on a screen associated with said computing application wherein said at least two synchronization sub-signals allow said positional element to derive clock synchronization data by correlating said timing data and said known time frame duration.Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The materials, methods, and examples provided herein are illustrative only and not intended to be limiting.Implementation of the method and system of the present invention involves performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of preferred embodiments of the method and system of the present invention, several selected steps could be implemented by hardware or by software on any operating system of any firmware or a combination thereof. For example, as hardware, selected steps of the invention could be implemented as a chip or a circuit. As software, selected steps of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In any case, selected steps of the method and system of the invention could be described as being performed by a data processor, such as a computing platform for executing a plurality of instructions.BRIEF DESCRIPTION OF THE DRAWINGSThe invention is herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of intellectual property office of n.2- 6 JUN 2008WO 2005/11165313PCT/IL2005/000509illustrative discussion of the preferred embodiments of the present invention only, and are presented in order to provide what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. 
In this regard, no attempt is made to show structural details of the invention in more 5 detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.In the drawings:Fig. 1 is a simplified illustration of an acoustic positioning system according 10 to a preferred embodiment of the present invention;Fig. 2 is a simplified illustration of a preferred embodiment of the positional device part of the acoustic positioning system of Fig. 1;Fig. 3 is a simplified illustrations of another preferred embodiment of the positional device part of the acoustic positioning system of Fig. 1 enabling detection 15 of the orientation in space of the positional device;Fig. 4 is a simplified block diagram of a preferred embodiment of the positioning assembly part of the acoustic positioning system of Fig. 1 configured to interface with a computing facility;Fig. 5 is a simplified block diagram of another preferred embodiment of the 20 positioning assembly part of the acoustic positioning system of Fig. 1 configured to include a computing facility;Fig. 6 is a simplified block diagram of a mathematical model of the acoustic channel between the positional element part and the positioning assembly part of the acoustic positioning system of Fig. 1;25 Fig. 7 is a two-part graph showing a typical correlation function associated with the channel model of the acoustic channel between the positional element part and the positioning assembly part of the acoustic positioning system of Fig. 1;Fig. 8 is a simplified block diagram showing a decoding unit for carrying out decoding of the correlation function of Fig. 7 according to the channel model of Fig. 30 6.Fig. 9 is a simplified illustration of a timing diagram of the transmission of the synchronization signal by the positional element past and the reception the synchronization signal by the positioning assembly part.WO 2005/11165314PCT/IL2005/000509DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The principles and operation of a positioning system and method according to the present invention may be better understood with reference to the drawings and 5 accompanying description.Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other 10 embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.Reference is now made to Fig. 1, which is a simplified illustration of a positioning system 10 according to a preferred embodiment of the present invention. 15 Positioning system 10 comprises two main parts: a positional assembly 11 mounted on a positional device 12, which in the example of Fig. 1 is a pen, and a positioning assembly 13.The positional assembly 11 comprises two emitters: an acoustic emitter 14, preferably operative to emit continuously modulated ultrasound signal 15, and a 20 synchronization emitter 16 operative to emit a synchronization signal 17. 
The synchronization signal is preferably much faster than the acoustic signal, preferably the synchronization signal is an electromagnetic signal, preferably the synchronization signal is an infrared signal. Alternatively the synchronization signal is a radio signal. The positioning assembly 13 comprises three acoustic signal receivers 18 and 25 a synchronization signal receiver 19 connected to a positioning controller 20. The positioning controller may be a computing device such as a PC, a tablet, a PDA, etc., or an interfacing device to a computing device 21 as shown in Fig. 1. It is appreciated that the positioning assembly may comprise any number of acoustic receivers according to the positioning requirements. Typically, the positioning assembly 30 comprises one acoustic receiver for one-dimensional positioning, or two receivers for . two-dimensional positioning or three receivers for three-dimensional positioning. However, to obtain increased accuracy, increased coverage and to avoid obstruction of the signal path, the number of the acoustic receivers can be larger than the numberWO 2005/11165315PCT/IL2005/000509of the positioning dimensions. Similarly, the number of synchronization receivers can be larger than one.The continuously modulated ultrasound signal 15 and the synchronization signal 17 comprise the following features:5 a. The continuously modulated ultrasound signal 15 comprises a preferably continuous and contiguous sequence of modulation frames.b. Each modulation frame is distinguished by a time mark, typically but not exclusively the associated with the beginning of the frame. The time mark is typically a component of the modulation scheme of the acoustic signal.10 c. The synchronization signal 17 comprises a sequence of preferably non-continuous data elements. The rate of data elements is higher than the rate of modulation frames. Thus, a plurality of data elements are transmitted between each consecutive time marks.d. Each data element comprises information locating the time of15 transmission of the time mark according to a clock of the positional element 11. All the data elements following each time mark bears timing information for the same preceding time mark. Thus, assuring the reception of at least one correct timing information for each modulation frame at the positioning assembly 13. The timing information is typically, but not exclusively, the elapsed time between the20 transmission of the time mark and the transmission of each following data element.e. The positioning assembly 13 calculates the position of the positional element 11 by measuring the elapsed time between the time of transmission of the time mark as provided by the following data element and the time of arrival of the time mark at each of the acoustic receivers 18. It is assumed that the time of flight of25 the synchronization signal is effectively zero. The term "time-of-flight" refers hereinbelow to the elapsed time between the transmission and arrival of the acoustic time mark.f. The measurement of the time of flight of the acoustic signal is performed by the positioning assembly 13 based on its own clock and the timing information30 provided by the data elements and calculated by the positional element 11 based on the positional element's clock. Since the clocks suffer a certain inaccuracy and continuous unequal drift there is a requirement to synchronize the clock. Therefore the data elements additionally comprise clock synchronization information.WO 2005/11165316PCT/IL2005/000509g. 
The positioning system 10 preferably supports multi-user functionality, preferably both in the aspect of a single positioning assembly 13 being able to concurrently determine the positions of a plurality of positional elements 11, and the aspect of a plurality of positioning assembly 13 being able to concurrently determine5 the positions of a plurality of positional elements 11.h. To support multi-user functionality the positional element 11 preferably comprises a plurality of modulation schemes. The positional element 11 preferably, from time to time, randomly selects a modulation scheme. Alternatively, the modulation scheme is pre-selected, preferably by a manufacturer. The data elements10 additionally and preferably comprise an identification of the modulation scheme of the current modulation frame.i. Additionally to support multi-user functionality the data elements perform time-hopping to resolve collisions. The synchronization signal is transmitted within a continuous sequence of contiguous synchronization frames. Each synchronization15 frame is made of a fixed number of time slots, typically but not exclusively the number of time slots in a synchronization frame is 16. Each data element is transmitted within a sequence of time slots, wherein each such time slot is selected from a different synchronization frame, one slot per frame. The data element is therefore divided into packets wherein each packet is transmitted in one time slot. The 20 time slot that carries the packet is randomly selected for each synchronization frame. Preferably, the data element is short and the transmission bit rate is high so that the entire data element fits into a single packet and hence into a single time slot.j. The time length of all the time slots and all the synchronization frames for all of positional elements 11 is preferably identical except for the differences in their 25 clocks due to inaccuracy, drift, etc. Each data element packet comprises the number of its time slot within the synchronization frame. Thus the positioning assembly 13 is able to measure the time length of the current frame and assess the difference between the its own clock and the clock of the positional element 11. Thus achieving the synchronization of the clocks of the positioning assembly 13 and the positional 30 element 11.k. The data elements additionally comprise information about other elements of the positional element 11 as is necessary for the specific application.The abovementioned features will be further described below.WO 2005/11165317PCT/IL2005/000509Reference is now made to Fig. 2 and Fig. 3, which are simplified illustrations of two preferred embodiments of the positional device 12, differing by a second acoustic emitter 14 in the positional device 12 of Fig. 3. The second acoustic emitter enables the positioning assembly 13 (not shown in Figs. 2 and 3) to determine the 5 orientation of the positional device 12.The positional device 12 of Figs. 2 and 3 comprises the acoustic signal transmitter 14, the synchronization signal transmitter 16, and three push button intermittent switches 22, connected via interfacing electronic circuitry 23 to a microcontroller 24. Battery 25 provides power via power supply 26 and clock 10 circuitry 27 provides timing signals.Reference is now made to Fig. 4, which is a simplified block diagram of a preferred embodiment of the positioning assembly 13 configured to connect to a computing device (not shown). In the preferred embodiment of Fig. 
4 the positioning assembly 13 is connected to the computing device via an analog input, preferably a 15 microphone input or an audio line-in input, such as audio inputs of a PC. It is appreciated that other types of inputs, preferably digital inputs such as MIDI, USB and wireless inputs such as Bluetooth may also be used to connect the positioning assembly 13 to the computing device.The positional assembly of Fig. 4 preferably comprises an array of acoustic 20 receivers 18, preferably acoustic transducers such as microphones, typically at least two microphones to convert the acoustic signals transmitted by the acoustic emitters (not shown) back to electrical signals. A synchronization signal receiver 19, preferably an IR photodiode, detects IR synchronization signals transmitted by an IR synchronization signal emitter (not shown). Alternatively, an antenna may replace the 25 IR photodiode to receive radio synchronization signal.Pre-amp and filtering circuitry 28 is preferably provided for each of the acoustic receivers 18 and the synchronization signal receiver 19. Time or frequency multiplexing functionality 29 allows the signals to be multiplexed onto a single channel. Frequency down-conversion, using local oscillator circuitry 30 and mixer 30 functionality 31 allows the signals as received to be converted downwards to frequencies compatible with an analog input of the computing device.WO 2005/11165318PCT/IL2005/000509A microprocessor 32 or other controlling logic is used to control and coordinate the positioning assembly. The synchronization signal enables the microprocessor to synchronize the signaling components.A cable and jack 33 are provided for connection to the computing device's 5 microphone socket, or any other input having an A/D converter. Data into the analog input is preferably buffered and filtered by buffer and filter circuitry 34. Buffering may be different depending on whether a microphone socket or some other input is used.Power supply circuitry 35 permits usage of the microphone jack 10 simultaneously as a power source for the positioning assembly and for data output.When using a host CPU to decode the positioning data transferred from the analog input, there is an inherent problem of synchronization. The clock of the positional element is not synchronized with the positioning assembly, which in turn is not synchronized with the computing device A/D converter. The synchronization of 15 the positional element and the positioning assembly can be achieved with the synchronization signal as described herein. Synchronization further on down the line with the host time base is in many cases impossible. Even with a relatively high sampling rate such as 50KHz, the mismatch between the synchronization signal and the A/D sample may be in the order of 20uSec, which corresponds to a few 20 centimeters in the measured location. Such imprecision is not suitable for most applications. Furthermore, even if good synchronization is achieved at a certain instance, the clocks of the two systems, namely the host and the positioning assembly, tend to drift over time due to limited accuracy of existing crystal technologies.To overcome the above-described host synchronization issue, positioning 25 assembly preferably uses a certain time or frequency slot to transmit to the host a synchronization pattern, which is at the Nyquist rate of the host A/D converter. 
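To illustrate why timing mismatches of this order matter, the following minimal sketch converts a timing uncertainty into a distance uncertainty, assuming a nominal speed of sound of 343 m/s at room temperature; the helper name and the example figure are illustrative only and are not taken from the embodiments.

```python
# Illustrative only: converting a timing uncertainty into a distance uncertainty
# for an acoustic time-of-flight measurement. 343 m/s is an assumed nominal
# speed of sound at room temperature.
SPEED_OF_SOUND = 343.0  # metres per second

def timing_to_distance_error(timing_error_s: float) -> float:
    return timing_error_s * SPEED_OF_SOUND

# e.g. a 1 microsecond timing error corresponds to roughly 0.34 mm of position
# error, which is why sub-sample synchronization with the host A/D converter
# is worthwhile.
error_mm = timing_to_distance_error(1e-6) * 1000.0
```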
The host can use the pattern to determine the phase difference between its own clock and the positioning time base clock.The synchronization pattern can be transmitted at regularity sufficient to 30 compensate for clock drift, and there is no need to send such a signal at every loop cycle.In a further preferred embodiment, the positioning assembly sends commands to the positional element, whether by acoustic, light, infrared, RF or any other form ofWO 2005/11165319PCT/IL2005/000509signal that the positional element is capable of responding to. In such an embodiment, the positional element 11 has RF or light receivers. Upon reception of a command, the positional element 11 may emit a signal such as the acoustic signal discussed above. The time of emission of the instruction from the positioning assembly is known, and 5 can be used to start timing a delay in receipt of the acoustic signal. The respective delays of the acoustic signals at the different microphones can again be used to arrive at position co-ordinates.Reference is now made to Fig. 5, which is a simplified block diagram of a preferred embodiment of the positioning assembly 13 configured to include a 10 computing facility. Elements that are the same as in Fig. 4 are given the same reference numerals and are not described again except to the extent necessary for an understanding of the present figure. In Fig. 5, an A/D converter 36 takes the output of the down conversion 31 and provides it to microprocessor 37. Microprocessor 37 is connected to a memory 38 and a digital data port 39. The Microprocessor 37 carries 15 out decoding of the acoustic signal waveform to determine the position of the positional element 11 and may additionally run applications using the positional information thus determined. The features are preferably provided within a positioning assembly chipset. The solution leads to a more complex and therefore more costly positioning assembly than that of Fig. 4. However, the circuitry can be 20 dedicated for use with the signal to coordinate decoding algorithm to be described below, and thus is still relatively simple in comparison with currently available solutions.A decoding algorithm is preferably provided to convert digitized versions of the positional element signals into position coordinates for passing to a local operating 25 system or directly to an application or the like. The algorithm is preferably provided as part of client software for the computing device, either as a driver for the positioning assembly or built in to the local operating system or exceptionally as part of a specific application. In the embodiments of Fig. 5 the algorithm may be incorporated into the positioning assembly electronics.30 The algorithm preferably takes into account the relatively low sampling frequency capabilities likely to be available by carrying out frequency down conversion. The conversion reduces the data frequency from the relatively high frequencies needed for transmission from the positional element to the relatively lowWO 2005/11165320PCT/IL2005/000509frequencies that installed sound hardware is likely to be able to sample and digitize. In addition the algorithm preferably includes an ability to handle noise and is preferably adapted for specific issues in the handling of low frequency signals in general.As mentioned above, the known art in the position location field concentrates 5 on the use of very short and energetic acoustic signals as the location signal. 
In order to achieve good resolution, the known solutions dictate high sampling frequencies, typically higher than 400KHz, in order to be able to find such short location signals and not miss them entirely. The present embodiments by contrast preferably do not use sampling rates higher than 44.1 KHz, since such frequencies are incompatible with 10 the installed base of sound processing equipment. Furthermore, it is recommended to keep the beacon signal sound frequency higher than 20KHz, that is within the ultrasonic range, so that users do not hear it. These two demands require a solution in which data is modulated over an ultrasonic carrier signal or waveform. The data can be frequency modulated (FM) or phase modulated (PM) onto the carrier comprising 15 the ultrasonic signal, or any other known method may be used. The algorithm preferably operates to decode the modulated signal and to reconstruct the original position-information bearing signal from the results of sampling thereof. In the present embodiment it is preferred to use band-limited signals in order to achieve a desired resolution level.20 Preferably, continuous wave (CW) modulations such as spread spectrum and frequency hopping are used, in acoustic position finding, to overcome reverberation and multipath effects.Typically, more than one detector is used, and the signals from the detectors are multiplexed for a single input. In certain cases, the need for multiplexing may be 25 avoided. For example, in the case of a stereo input sound blaster ® or similar stereo sound card, one can feed two signals into the microphone input, and another two signals to the "Line-In" input, making a total of four signals that do not need to be multiplexed together. Thus, the positioning assembly does not require a time division multiplexer for input access purposes. Rather, up to four sensors may be fed directly 30 to the sound card, and the sound blaster's ® internal circuitry is then able to take care, using an appropriate software driver, of the received signals. It is noted, however, that even stereo input sound blasters have a maximum of two A/D converters, so that timeWO 2005/11165321PCT/IL2005/000509division multiplexing is still needed to enable the sound card to carry out sampling over more than two channels simultaneously.In order to enable the stereo input sound card to sample four separate channels over two A/D converters, the transmitted signals may thus be synchronized with each 5 other by the positioning assembly. Such synchronization may be achieved in a number of ways. One way is to send synchronization data from or to the positioning assembly alongside the signals themselves. Another method requires cyclic transmission, that is to say the signals are sent in a coordinated manner so that a signal period or phasing between the channels that is known to both sides is used. The 10 methods hereinbefore described thus provide data synchronization, both with and without an internal time division mechanism.It is pointed out that the use of the separate stereo inputs, as described above, has certain drawbacks in comparison to other embodiments described hereinbefore. Thus for example there may be a phase difference between sampling carried out at 15 each of the two A/D converters, and thus a calibration stage has to be performed before using the system. 
Otherwise, the phase difference itself may confuse the distance determinations, leading to reduced accuracy.
Another drawback is that relatively complex software driving functionality is required to keep switching timing between the microphone input and the "Line In" input as accurate as possible. A jitter of a mere 1 µsec between the switching timings can result in 0.3mm of measurement inaccuracy at room temperature.
In addition, much of the installed sound card base only allows for mono input. Very few sound cards are equipped for stereo microphone input.
Additional cost may be added because, in order to use the additional inputs, an additional connector and wiring have to be provided on the positioning assembly, which most users will not be able to utilize.
A preferred embodiment of the present invention uses a maximum likelihood detector for decoding the signals received from the sensors to determine the distances to the individual sensors. At the maximum likelihood detector, the signals received from the sensors, via the positioning assembly, are compared to reference signals. The comparison indicates a most likely signal, and from the most likely signal a distance is determined as the distance from which the signal was most likely transmitted.
The maximum likelihood detector preferably uses a full mathematical model of the channel to construct a look-up table of reference signals against which to compare received signals so that a best-match distance can be found. As an alternative, the expected waveform can be sampled at the Nyquist rate, and any timing mismatch between the sampling points can be overcome by extrapolation functions, to reveal the distance.
Reference is now made to Fig. 6, which is a simplified block diagram indicating typical components of a mathematical channel model 40 for incorporating into a maximum likelihood detector of the kind considered above. The channel model 40 comprises an initial signal sequence S(t), referenced by numeral 41, which is fed into the transfer function H1(s), referenced by numeral 42, of the acoustic emitter within the positional element 11, followed by air gap 43, which is modeled simply as a delay. The air gap is altered for different distances. The result is then fed to the reception path in the positioning assembly 13, which includes transfer function H2(s), referenced by numeral 44, for the acoustic receiver, equalization H3(s), referenced by numeral 45, and low pass filtering H4(s), referenced by numeral 46, as well as mixing and any other features of the path. The full modeling of the channel is useful in the design of the maximum likelihood detector in that it allows accurate expected signals to be constructed against which the received signals ideally differ only in phase. The detector is then relatively easily able to distinguish the most likely signal, which in turn corresponds to the most likely distance.
The synchronization signal is used in the maximum likelihood based scheme both to set the start of the delay and also to synchronize clocks between the positional element and the positioning assembly. Synchronization path 47 is indicated on the model. Specifically, path 47 provides a synchronization signal to a local oscillator 48.
The skilled person will appreciate that acoustic signals have differing angular transfer functions. 
An equalizer can be added to the positioning assembly in order to compensate for this fact.The synchronization signal preferably also points, via a second path 49, to a 30 start time equivalent to a zero distance in a distance look up table 50. The most likely signal obtained by the maximum likelihood detector is then used to identify a most likely non-zero distance from the look up table. The skilled person will appreciate that, instead of a look-up table, it is possible to use an array generated on the fly.WO 2005/111653 PCT/IL2005/00050923Furthermore, other detectors may be used; and there are several known decoders of FM signals, such as PLLs, I/Q demodulation, phase multiplication etc. The maximum likelihood distance may then be tested by means of correlation.Alternatively and preferably the mixer 51 is replaced by Pass Band sampling 5 having a sampling frequency that is smaller than half the maximum frequency of interest, preferably using analog anti-aliasing filters.Also alternatively and preferably the mixer 52 is replaced by high frequency sampling having sampling frequency that is equal or greater than half the maximum frequency of interest, preferably using digital filtering. This embodiment eases the 10 requirements on the analog filtering and enables the use of a decimation filter with frequency down conversion to provide overall data throughput similar to the previous alternative embodiment.Reference is now made to Fig. 7, which is a two-part graph showing a typical correlation function that may be used. The top part 53 of the graph shows the 15 function, and the lower part 54 of the graph is an enlarged or zoomed view of the upper central part of the graph.Reference is now made to Fig. 8, which is a simplified block diagram showing a decoding unit for carrying out decoding as described above. The decoding unit comprises a maximum likelihood detector 55 that uses a channel model 40 as 20 described with reference to Fig. 6 above, and look-up table 50. The maximum likelihood detector 55 is followed by correlator 56, which uses correlation function 57 to carry out correlation using the distance detected as most likely by the maximum likelihood detector 55, to confirm that the detected distance is correct.Reference is now made to Fig. 9, which is a simplified illustration of a timing 25 diagram of transmitting and receiving of the synchronization signal 17 of Fig. 1. The synchronization signal 17 is transmitted by the positional element 11 preferably as a sequence of data elements 58. Each data element is preferably transmitted as a single packet 59. Each packet 59 is preferably transmitted within a slot 60 of a synchronization frame 61. Each synchronization frame 61 preferably comprises a 30 fixed number of slots 60, typically 16 slots per frame.It is appreciated that a packet 59 may be larger than can be fitted into one slot 60. In this case the packet can be subdivided and transmitted within several slots as necessary. However, preferably, the data element is short, and the transmission bitWO 2005/11165324PCT/IL2005/000509rate is high, so that the entire data element fits into a single packet and the packet fits into a single time slot, as is shown in Fig. 9.The synchronization frames 61 are of equal time length and follow each other immediately. The positional element selects one slot from each subsequent frame to 5 transmit the data element until the entire data element is transmitted. The slot is selected randomly within each frame. 
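A minimal sketch of this transmitter-side time-hopping, assuming sixteen slots per frame and purely illustrative frame and slot durations, is given below; the scheduling helper and its names are not part of the described embodiments.

```python
# Illustrative sketch only: transmitter-side scheduling of synchronization
# packets, one randomly chosen slot per fixed-length frame. The frame duration
# used here is an arbitrary example value, not a figure taken from the text.
import random

SLOTS_PER_FRAME = 16          # typical value given above
FRAME_DURATION = 0.016        # seconds; example value only
SLOT_DURATION = FRAME_DURATION / SLOTS_PER_FRAME

def schedule_packets(num_frames, start_time=0.0):
    """Return (frame_index, slot_index, transmit_time) for each synchronization frame."""
    schedule = []
    for frame in range(num_frames):
        slot = random.randrange(SLOTS_PER_FRAME)       # the time-hopping step
        t = start_time + frame * FRAME_DURATION + slot * SLOT_DURATION
        schedule.append((frame, slot, t))
    return schedule
```

Because each positional element draws its slot independently frame by frame, two elements operating nearby rarely collide in the same slot for more than one frame.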
This time-hopping mechanism is useful to resolve collisions between two or more positional elements operating close to each other.
The procedure for estimating the actual rate of the clock of the positional element in terms of the clock of the positioning assembly is as follows. In the description below the data element fits into a single packet that fits into a single slot.
1) The positioning assembly preferably comprises a free-running timer. Upon receiving a valid packet header (0x55), the value of this timer is sampled and is referred to hereinbelow as the "Packet Time Stamp" (PTS).
2) The PTS is preferably delivered to the application layer together with the received packet data.
3) The packet also preferably includes a "Time Slot" field, which indicates the position of the time slot relative to the beginning of its frame. The time slots are preferably changed for every frame, preferably by using a CRC-8 as a pseudo-random generator. The purpose of this randomization is to minimize the effect of periodic interferers in the synchronization channel.
The algorithm for synchronizing the clocks is as follows: the clock estimator calculates the differences between the transmitter clock and the receiver clock. Since the differences are mainly due to crystal inaccuracy, the estimation is basically a fit of linear data. The linear fit slope is measured in parts per million (ppm). The synchronization algorithm is also implemented to adjust to changes that occur due to temperature effects. The performance of the estimator is better than 30 nsec, which corresponds to approximately 10 µm.
In the example of Fig. 9 the positional element uses 16 time slots per synchronization frame. In the frame shown in Fig. 9 the positional element transmits the packet in the seventh slot. The positioning assembly receives the stream 62 of frames 63, which are the same as the frames 61, except that their time measurements are different due to clock rate differences between the positional element and the positioning assembly. The positioning assembly receives packet 59, and samples its internal timer, creating a PTS, upon receiving a correct packet header. The packet and the corresponding PTS value are then passed to the software layer for clock recovery.
In the example provided in Fig. 9, the clock of the positioning assembly is faster than the clock of the positional element and thus the length of the frame 63 is longer than expected. Therefore the positioning assembly is able to estimate the difference between the clocks and accurately calculate the time of transmission of the time mark of the modulation frame from the contents of the packet.
It is appreciated that there is no forced synchronization between the positioning assembly and the positional element and each performs its own state machine independently. Also, other positional elements can transmit their data at random times. It is the algorithm that keeps track of the different positional elements as they enter into the range of the positioning assembly.
As can be understood from the discussion above, the synchronization signal, particularly the data element, is preferably a digital signal.
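By way of illustration only, the linear-fit clock estimation described above may be sketched as follows; the function names, the least-squares formulation, and the reconstruction of transmitter-side emission times from the frame count, the known frame duration and the Time Slot field are assumptions made for the purpose of the example and are not mandated by the embodiments.

```python
# Illustrative sketch only: least-squares estimate of the transmitter clock rate
# relative to the receiver clock, expressed in ppm. The fitting method and the
# field names are assumptions for illustration.

def fit_clock(tx_times, rx_pts):
    """tx_times: packet emission times on the positional element's clock (seconds),
    e.g. reconstructed as frame_index * frame_duration + slot_index * slot_duration.
    rx_pts: the corresponding Packet Time Stamps on the positioning assembly's clock.
    Returns (slope, offset) such that rx is approximately slope * tx + offset."""
    n = len(tx_times)
    mean_tx = sum(tx_times) / n
    mean_rx = sum(rx_pts) / n
    num = sum((t - mean_tx) * (r - mean_rx) for t, r in zip(tx_times, rx_pts))
    den = sum((t - mean_tx) ** 2 for t in tx_times)
    slope = num / den
    offset = mean_rx - slope * mean_tx
    return slope, offset

def drift_ppm(slope):
    # A slope of exactly 1.0 means the two crystals run at the same rate.
    return (slope - 1.0) * 1e6

def to_receiver_time(tx_time, slope, offset):
    # Convert a transmitter-side timestamp (e.g. the reported time mark of a
    # modulation frame) into the receiver time base for time-of-flight measurement.
    return slope * tx_time + offset
```

The slope of the fit gives the ratio between the two clock rates; its deviation from unity, expressed in ppm, is the crystal mismatch that the synchronization algorithm tracks and, if desired, re-estimates as temperature changes.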
The synchronization signal preferably supports the following features and considerations:
1) Power consumption is a consideration, especially for the positional element. To provide minimum power consumption the bit rate should be as high as possible, preferably at data rates above 1.25 MBit/s.
2) To further conserve power the amount of data transmitted should be minimal.
3) To further conserve power and reduce cost the entire communication scheme is simplex: the positional element typically does not comprise a receiver, and the positioning assembly does not send requests to re-transmit lost information. The acoustic positioning system should endure data loss in excess of 80%.
The following table presents a preferred packet structure featuring small size and high endurance. In the example presented in the table below the synchronization data element fits into the packet.

Field                   Value   Size [Bits]   Description
Header                  0x55    8             Allows receiver synchronization
Packet Structure        0       2             Minimal size packet ID
Pen ID                          6             Defines also the acoustic signal. Value not equal to 0x3F
Time Slot                       4             Pseudo-random synchronization, calculated by taking the 4 LSBs of a cycle counter over CRC-8
IR Packet Number                4             Counts modulo 15 IR packets
Switch 3 status                 1             The current switch status
Switch 2 status                 1             The current switch status
Switch 1 status                 1             The current switch status
Switch 0 status                 1             The current switch status
Switch change counter           4             The number of consecutive packets with the same Switch 3-0 value. The counter does not roll over, but saturates at 6 (to avoid 0xFF)
CRC                             8             Redundancy check, CRC-8 algorithm
Total number of bits            40
Including start / stop          50

The positional element preferably additionally transmits, typically and preferably within the synchronization data elements, information regarding other peripheral components of the positional element, such as the status of switches, as seen in the above table.
Preferably the positional element transmits the status of the peripheral components within each data element, preferably within each packet. Preferably, the statuses are accompanied by a switch change counter, which preferably counts the number of packets transmitted since the last change of a switch. In the example presented in the above table, the counter increments by one for each packet. Once the counter reaches a predefined maximum value, which is in this example the value 6, the counter remains at this value until a change in any one of the monitored switches occurs. At this time the counter is reset to 0. Thus, the positioning assembly can assess the status of the switches at any time, at the accuracy of the rate of transmission of the synchronization packets, even when some of the packets are lost.
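Purely as an illustrative sketch, the 40-bit packet of the table above can be packed and parsed as follows; the bit ordering within each byte and the CRC-8 polynomial (0x07) are assumptions chosen for the example, since the embodiments do not fix them.

```python
# Illustrative sketch of packing/parsing the 40-bit synchronization packet laid
# out in the table above. Bit ordering (MSB first) and the CRC-8 polynomial are
# assumptions for illustration only.

HEADER = 0x55

def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def pack_packet(pen_id, time_slot, ir_packet_number, switches, change_counter,
                structure=0):
    body = bytes([
        HEADER,
        ((structure & 0x3) << 6) | (pen_id & 0x3F),
        ((time_slot & 0xF) << 4) | (ir_packet_number & 0xF),
        ((switches & 0xF) << 4) | (change_counter & 0xF),
    ])
    return body + bytes([crc8(body)])          # 5 bytes = 40 bits

def parse_packet(packet: bytes):
    if len(packet) != 5 or packet[0] != HEADER:
        return None                            # no valid header: ignore
    if crc8(packet[:4]) != packet[4]:
        return None                            # inconsistent with the CRC: dump the data
    return {
        "structure": packet[1] >> 6,
        "pen_id": packet[1] & 0x3F,
        "time_slot": packet[2] >> 4,
        "ir_packet_number": packet[2] & 0xF,
        "switches": packet[3] >> 4,
        "switch_change_counter": packet[3] & 0xF,
    }
```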
For example, if the recovered timing of a packet is too far from the expected, the data of this particular packet is dumped.10 Additional coding of the acoustic signal can be used for greater signal robustness and also to minimize interference with neighboring users. The latter has several benefits. It allows multiple users to use the same positioning assembly, or it may allow a single user to use several pointing devices, for example in a game such as chess. If each playing piece is a different pointing device and signal decoding allows 15 the different pointing devices to be distinguished then the system is able to incorporate multiple playing piece games. Minimizing interference with neighboring users may further allow the co-existence of multiple users in the same room.One of the preferred ways of minimizing interference between different pointing devices is by using pseudo-random frequency hopping algorithms. Each 20 mobile unit preferably has a pseudo-random frequency hopping sequence incorporated within an encoding unit (connecting between elements 23 and 24 of Figs 2 and 3 but not shown or preferably within microcontroller 24). The positioning assembly, or a decoding unit as preferred, has a corresponding de-hopping unit which is able to synchronize on the same hopping sequence. A preferred embodiment 25 provides synchronization by using the IR, or other electromagnetic signal, to transferWO 2005/11165328PCT/IL2005/000509the hopping sequence to the positioning assembly. Another preferred embodiment uses factory calibration to provide a sequence.One of the applications that can be realized with a position detection system based on frequency hopping is integration of the positioning assembly and WLAN 5 (wireless local area network). The result is a WLAN access point with positioning capabilities, able to support multi users and able to manage each of the users data separately. The users are able, for example, to write on paper or on their own electronic pads using pointing devices belonging to, or compatible with, the WLAN. Unseen, the WLAN traces the movements of each of the users separately and 10 produces networked electronic versions of each of their handwritten documents. For the purposes of writing on paper, the pointing device is a combination of the pointing device and a standard pen.Customer and application needs vary, and individual applications may require maximization of particular variables in relation to others. For instance, in certain 15 applications, accuracy may be of less importance than power consumption, and thus possible accuracy levels or the number of detectors in operation may be reduced in favor of reduced power consumption. In order to allow such system-specific optimization without manufacturing a range of similar devices, a flexible programmable scheme is preferred, both for the positioning assembly and for the 20 mobile unit.Flexible programming may be performed by burning fuses or by use of nonvolatile memory (as ROM or EEPROM). Typical data for setting in this way includes sampling rate per second, transmission power, two-dimensional or three-dimensional application, and the like.25 The positional element may additionally be supplied with a pressure sensor,whose output can be used by appropriate applications to allow graphical or security features. For example a line may be drawn differently depending on the pressure applied. 
A suitable pressure sensor for incorporation into a pointing device may comprise a digitizer (10 bits or less), a strain gauge and a driving circuit 30 Yet another feature may include the ability to measure the angle of the mobile unit (useful for instance in digital stylus applications). A suitable angle sensor for incorporation into the positional element may comprise a tilt gauge, digitizer and driving circuit. In a further embodiment, two position indicators such as ultrasonicWO 2005/11165329PCT/IL2005/000509loudspeakers may be placed at either end of the pointing device, each transmitting in a manner that renders the signals distinguishable. The angle of the pointing device may then be derived by calculating each of the positions and performing simple geometry between them.5Stand Alone Positioning assemblyAs mentioned above, in the embodiment of Fig. 4, the positioning assembly includes the ability to decode signals without the support of the host computing device.10 The decoding algorithm described hereinabove does not require especially powerful processing power and it is thus feasible to include a limited resource CPU into the positioning assembly without increasing the overall cost. In a preferred embodiment, .a computation power of-1MIPS is used to decode the signals. Such low computation power can in fact be integrated into a single customized positioning 15 assembly chip, or as a low cost add-on. The use of such a CPU allows a more conventional connection to hosts, such as: UART, USB, Serial and others since the signal that is transferred is the processed result of the positioning and not the raw signals. Such an output is also suitable for direct use within WLAN and Bluetooth. Such a stand-alone positioning assembly preferably includes a digitizing element, 20 (A/D converter), a CPU, a memory and interface circuitry.Reference is now made back to Fig. 3 in which two acoustic emitters are mounted preferably at two sides of a positional device to enable detection of the orientation of the device. Each acoustic emitter issues a separate waveform that is separately detected and the orientation of the positional device is determined by 25 drawing a straight line between the two positions. Preferably, the two acoustic emitters are able to identify themselves to the positioning assembly and to operate simultaneously. The respective signals of the two acoustic repeaters may be time or frequency multiplexed to work together and in one preferred embodiment the two acoustic repeaters use frequency hopping, each using a different pseudo-random 30 sequence. The positional element can use a single synchronization emitter to provide synchronization for both modulation frames.WO 2005/11165330PCT/IL2005/000509Electromagnetic PositioningAnother method that can be used with the microphone input is electromagnetic positioning. A board with orthogonally arranged magnetic loops (conductors) serves as a writing pad, A pointing device emits electromagnetic signals, 5 which are picked up by the pad's magnetic loops. By analyzing the signals, the pointing device's position can be calculated. The loops can be printed onto a PCB and can be made small enough to give any desired level of precision.The pointing device is the same as described above except that the synchronization signal emitter is an electromagnetic transmitter including an emitting 10 antenna and associated modulating circuitry. 
The synchronization signal receivers of the positioning assembly comprises built in loops as sensors with RF demodulating circuitry but otherwise is the same as the positioning assembly described above. The decoding algorithm again has to deal with a different kind of information part of the signal but otherwise covers the same issues as those discussed above. 15 The positioning system of the present embodiments has a wide range of applications, a few of which are listed below. Preferably a single electronic device is manufactured, and is set up in different ways for the chosen application, possibly by the use of jumper or dip-switches. The switches may allow configuration of the system for the most appropriate trade-offs for the given application. In some 20 applications low power consumption is important. In others accuracy of positioning is critical. In yet others, accuracy is less important than rapid updating and the number of samples per second. In others range is important, and in yet others the ability to accommodate large numbers of users may be critical.In the following, a number of applications of the above-described technology 25 are considered.Multi-User Positioning SystemA multi-user positioning system embodiment of the present invention preferably comprises a WLAN system with an embedded positioning assembly 30 according. A plurality of users in the conference room has a positional element each. Each positional element has its own unique identity as described above. The various positional elements transmit continuously modulated waveforms accompanied by synchronization signals. The waveforms are detected by the multi-user positioningWO 2005/11165331PCT/IL2005/000509system. The waveforms may additionally be tracked by tracking systems local to each user, preferably within their cellular telephones. In addition the conference table itself may have a master positioning assembly combined with the conference room telephone facility.5Toy applicationsToys with positioning can be divided into three sub-categories, to be explained below: front of screen games, front of computer games, and computer free environments.10 Front of Screen Games are games in which user interaction is directly with the computer screen, for example:Toy Finger: - a toy pointing devices for toddlers or children to point at computer screens in order to interact with the website or program. Touching the screen with the pointing device launches a cartoon website inside the member zone of 15 the toddler. The pointing device also enables the user to interact with objects appearing on the screen. The pointing device, preferably in the form of a pointing finger or cartoon character, and technologically a digital pen, has its unique identity, according to any of the above embodiments.Toy Bird:- A game is provided in which the user flies a bird to a nest 20 located in upper right hand side of the screen in order to receive points or applause. The implementation is as for the pointing finger above.Wireless Joysticks - A possible application of the technology is a wireless joystick for computer games. Joysticks have applications across the computer game industry.25 Front of Computer Games - Front of computer games are games where interaction happens in the vicinity of the computer, or for that matter the PDA, cellular telephone, or an element attached to the computer as can be understood from the following example.Battlefield Game - A board preferably representing a battlefield in 30 which two opponents join battle. 
Playing pieces, each comprising a positional element, represent soldiers and weapons, which advance towards each other and fight. Certain aspects of the game occur only on the screen. For example if one of the players advances his soldier to a specific location containing a mine, the resultingWO 2005/11165332PCT/IL2005/000509explosion occurs on the screen. A positioning assembly embedded within the computer or an element attached to the computer receives the unique positioning coordinates of each and every soldier, vehicle, etc. and coordinates it using a war plan application on the computer.5 Computer Free Environments - Computer free environment games are games that do not require a PC because they themselves carry a sufficiently powerful CPU.Battlefield Games — as above but standalone, without the computer.Positioning enabled toy cars - A car follows or otherwise interacts10 with another car. A first car has a positional element while a second car has a positioning assembly. The second car is thus able to follow the first one or otherwise interact therewith.Independent Robots15 Independent robots keep track of each other's position and the position of a ball and transfer the ball between them. Each robot has a positional element for the robot as a whole and additional positional elements for each limb whose position is needed for the kind of maneuvers intended. In one embodiment each robot includes its own standalone positioning assembly and makes its decisions based on incoming20 positional data from itself and from the surrounding robots. However in a second preferred embodiment each robot has only positional elements and control circuitry. Tracking is carried out by an external positioning assembly, which then instructs the robots on how to move. Thus only a single intelligent device need be provided and the robots can be relatively unsophisticated.25 In one preferred embodiment, one robot transfers a ball to a second robot. The second robot takes the ball and transfers it to a third robot.In another preferred embodiment a joystick controls the movement of a robot while the other robots automatically try to catch him based on his positioning. The application can make use of two-way communication, as explained elsewhere herein.30WO 2005/11165333PCT/IL2005/000509Positioning enabled building blocksBuilding blocks are each equipped with a uniquely identifiable positional element. A user can build various constructions interactively, receiving computer guidance during the course of building.5Command & control glovesCommand and control gloves for virtual reality or like games. Each limb of the glove is provided with position location ability according to the above embodiments. In accordance with the present embodiments such positioning ability 10 can be provided simply by attaching a sensor to the end of each finger of a regular glove. Thus each finger is provided with separate positioning ability to be read as desired by the game application. 
Alternatively or additionally, rings on the fingers may provide the wireless terminals, or straps may be applied to any part of the user's body or to items or accessories used in the game.

Inventory application

An inventory system according to a preferred embodiment of the present invention comprises positional elements embedded in items of stock and a positioning assembly to track the movement of the stock items.

Manufacturing application

A manufacturing line employing robots according to a preferred embodiment of the present invention comprises positional elements embedded in each robot and a positioning assembly that keeps global control of the robots. Each robot may have a positional element for the robot as a whole and positional elements for each limb whose position is needed for the kind of maneuvers intended. In one embodiment, where robots need to interact with each other, each robot includes its own standalone positioning assembly and makes its decisions based on incoming positional data from itself and from the surrounding robots. However, in a second preferred embodiment each robot has only positional elements and control circuitry. Tracking is carried out by the external positioning assembly, which then instructs the robots on how to move. Thus only a minimal number of intelligent devices need be provided, and relatively unsophisticated robots can provide group behavior.

Higher precision can be achieved by placing additional wireless terminals in the detection space at predetermined locations. Measuring these units calibrates the absolute measurement of the moving terminals so that greater precision can be achieved.

Security Application

A pointing device with a positioning assembly according to a preferred embodiment of the present invention can be incorporated into an electronic identification scheme. Personal written signatures are often used for identification, but a skilled forger is able to copy other persons' signatures. A forger, however, copies the outward appearance of the signature and not the way in which the user applies pressure to the pen or holds the pen, say at a given angle on a given part of the signature. A pointing device that the user can use as a pen to write on paper, and which can supply not only movement information but also pressure and attitude information, provides an enhanced-security personal signature. Systems for obtaining signature information which incorporate pressure as well as the outward appearance are in use; however, use of preferred embodiments of the present invention makes such a system cheaper and more flexible. In addition, attitude information of the pen allows for greater verification. The orientation of the pen can be measured by adding an additional angle sensor to the pen. The angle sensor may comprise an accelerometer, or may use an additional location signal transmitter on the other side of the stylus, as described above. In the latter case, the positioning assembly determines the XYZ locations of the two transducers, from which the angle of the stylus can be calculated. The angle is then used as an additional factor and results in an electronic version of the signature, which is a triplet of three vector values (XY location, pressure and angle), as sketched below.

The following embodiments describe an enhanced identification apparatus, which integrates positioning with other security methods, beginning with the use of a pointing device in the form of a stylus as an authentication means.
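Before turning to the stylus-based authentication embodiments, the attitude calculation mentioned above can be illustrated with a minimal Python sketch: the positioning assembly is assumed to report the XYZ locations of the two transducers on the stylus, the tilt of the stylus relative to the writing surface is derived from them, and the result is combined with the pen position and pressure into one signature sample. All names, units and values below are illustrative assumptions, not part of the claimed apparatus.

import math
from dataclasses import dataclass
from typing import Tuple

Position = Tuple[float, float, float]  # XYZ location reported by the positioning assembly

@dataclass
class SignatureSample:
    x: float          # XY location of the pen tip on the writing surface
    y: float
    pressure: float   # applied pen pressure (arbitrary units)
    angle_deg: float  # stylus tilt relative to the writing surface

def stylus_tilt_deg(tip: Position, tail: Position) -> float:
    """Angle between the stylus axis (tip -> second transducer) and the XY plane."""
    dx, dy, dz = tail[0] - tip[0], tail[1] - tip[1], tail[2] - tip[2]
    horizontal = math.hypot(dx, dy)          # projection of the axis onto the surface
    return math.degrees(math.atan2(dz, horizontal))  # 90 deg when vertical, 0 deg when flat

def make_sample(tip: Position, tail: Position, pressure: float) -> SignatureSample:
    """Assemble one (XY location, pressure, angle) sample of the electronic signature."""
    return SignatureSample(tip[0], tip[1], pressure, stylus_tilt_deg(tip, tail))

# Example: tip on the paper, second transducer a few centimetres up the barrel.
sample = make_sample((0.10, 0.20, 0.00), (0.14, 0.22, 0.11), pressure=0.8)

A stream of such samples taken over the course of a signature gives the triplet-valued electronic signature described above.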
A group of styluses is provided as part of the system. One of these styluses is provided to each member of an identified group of users, and each stylus is provided with its own electronic identity.

By identifying the stylus, the user presently interacting with the system is identified, and this allows verifiable usage of the system in security-sensitive applications. The user may also be required to provide his usual signature, which may be electronically verified based on movement and applied pressure or the like.

For greater security, a stylus can also be provided with a feature to enable a digital signature, for example based on the Public Key Infrastructure (PKI). The user may sign with his usual handwritten signature. Once the hand signature is verified, the system uses the stylus to provide a digital signature to the document using a PKI algorithm. Such a feature requires two-way communication between the pointing device and the positioning assembly, which can be provided using available IR or RF channels. The electronic signature thus provides a guarantee both that the personalized stylus was used and that the authorized user was verified.

As an alternative or in addition to the above, a keypad may be added to allow the user to enter a personal identification number (PIN).

As a further alternative or in addition to the above, the system may incorporate a biometric sensor into the stylus or the positioning assembly to increase the security level. The biometric sensor may be for fingerprint recognition, retinal signature recognition, voice signature recognition and the like.

Additional Stylus Applications

A stylus or digital pen may additionally be used for:

Remote control - the position of the stylus may be tracked and used to exert control over a system. Thus pointing at a device may appear to make it operate, and twisting the stylus whilst pointing may affect the operation of the device.

Wristwatch phones may be supplied with a miniature stylus to write on the face of the phone or on an adjacent small pad attached thereto. Alternatively, writing may be carried out on regular paper with the watch located nearby to track the stylus movement.

The stylus may be used to provide SMS messages instead of having to type them in through the keyboard, and/or may provide the ability to sketch and send the sketch as a file. Likewise the stylus may be used to input a telephone number which is then dialed. The same idea may be applied to conventional telephones.
If using the embodiment in which transmission back to the pen is possible, then the pen itself can speak the written notes.Combined digital pen and translator - the pen writes and translates the output into other languages.15 Any combinations of the above.A standalone device serving as the Positioning assembly, has its own screen and preferably is networked, via Bluetooth, Wireless LAN, regular LAN or the like to printers and other devices. The arrangement provides a full range of coverage from hand input to final printed or any other form of output.20Miscellaneous applicationsGun aiming device - by mounting two positional elements on a game device in . the form of a gun or a similar device. Preferably, one positional element is mounted on the end of the device and the other is mounted as far as possible on the a virtual 25 line parallel to the nozzle of the gun. The two positional element send orthogonal codes (or codes having low cross correlation). The positioning assembly is associated with a screen, preferably on one of the corners or right above the screen and has at least three microphones. The positioning assembly estimates the virtual line from the two positioning elements on the gun, to the screen. The status of buttons pushed on 30 the gun are transferred via the IR link, together with the synchronization data.3D Stereo - by placing the wireless transmitter on a person the stereo can choose how to direct different volume or sound from different speakers to give the person wherever he is in the room a complete and real surround experience. StereoWO 2005/11165337PCT/IL2005/000509direction as such is known but can be greatly simplified by using tracking according to the present invention.Video Tracking - Based on the same principle as stereo tracking, tracking may be used in association with a PC video cam to automatically follow a person who is 5 being filmed. The embodiments are of course extendable to any video system and can be especially useful for video conferencing, etc.Exterior and interior positioning system for cars - for example, having elements inside the car controlled or known about by keeping track of their position.Tracking device — a standalone positioning assembly device with a screen 10 directing the user to the location of an object in its vicinity. The system may indicate the identity and location of these objects on the screen. The system may be useful in a room for finding keys and other personal items.Two-way network system - The system comprises a series of device having both a transmitter and receiver. Each device locates and registers each other device it 15 is aware of and between them they build a virtual network. The network may be built amongst themselves or may additionally use a smart hub. The result is a radio-based network whose range is far greater than the range of any of the individual objects. Each object has the exact co-ordinates of neighboring objects and thus can use directional transmission to improve range or spectral efficiency and the network can 20 be used to deliver data to any point or to obtain from any participant object the whereabouts of unrelated network objects and so forth. The network can be connected to other like networks or can have a point of access to a wider network. 
A scaled-down version of the inventory system may provide an out-of-range alert. A positional element may be provided on loose items temporarily provided to customers, for example earphone headsets provided to airline passengers. If the customer takes the item away, an out-of-range alarm is set, allowing the errant device to be found.

A user may have a personal locator that activates doors, lights and appliances. Likewise, communications equipment can be directed, by tracking of the personal locator, to divert calls to the nearest fax machine, etc. Both tracking and management of the communication transfer are preferably handled over a LAN or WLAN. The personal locator can itself tell the user about incoming calls and other communications and give the options for receiving the communication. In the WLAN version, the positioning assembly is preferably part of the WLAN infrastructure.

It is expected that during the life of this patent many relevant positioning devices and systems will be developed, and the scope of the terms herein is intended to include all such new technologies a priori.

It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination.

Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention.
Some novel features pertain to an integrated device package (e.g., a die package) that includes a package substrate, a die, an encapsulation layer and a first set of metal layers. The package substrate includes a first surface and a second surface. The die is coupled to the first surface of the package substrate. The encapsulation layer encapsulates the die. The first set of metal layers is coupled to a first exterior surface of the encapsulation layer. In some implementations, the first set of metal layers is configured to operate as a die-to-cable connector of the integrated device package. In some implementations, the integrated device package includes a second set of metal layers coupled to the second surface of the package substrate. In some implementations, the integrated device package includes a second set of metal layers coupled to a second exterior surface of the encapsulation layer.
1.An integrated device package that includes:a package substrate, the package substrate including a first surface and a second surface;a die, the die being coupled to a first surface of the package substrate;An encapsulation layer that encapsulates the die;A first set of metal layers, the first set of metal layers being coupled to a first outer surface of the encapsulation layer.2.The integrated device package of claim 1 further comprising a second set of metal layers coupled to the second surface of the package substrate.3.The integrated device package of claim 2 further comprising a set of solder balls coupled to said second set of metal layers.4.The integrated device package of claim 1 wherein the first set of metal layers are configured to operate as a die-to-cable connector of the integrated device package.5.The integrated device package of claim 1 further comprising a second set of metal layers coupled to the second outer surface of the encapsulation layer, wherein the first outer surface is a top surface of the encapsulation layer, and the second outer surface is a side surface of the encapsulation layer.6.The integrated device package of claim 5 further comprising a third set of metal layers coupled to the second surface of the package substrate, wherein the third set of metal layers are coupled To the second set of metal layers.7.The integrated device package of claim 5 wherein said integrated device package is coupled to a carrier, said second set of metal layers being configured to be directly coupled to said carrier.8.The integrated device package of claim 1 wherein the first set of metal layers are configured to provide an electrical path to the integrated device package by at least one of a power signal and/or a data signal.9.The integrated device package of claim 1 wherein the integrated device package is at least one of a die package and/or a chip package.10.The integrated device package of claim 1 wherein said integrated device package is incorporated into at least one of: a music player, a video player, an entertainment unit, a navigation device, a communication device, a mobile device, a mobile Telephone, smart phone, personal digital assistant, fixed location terminal, tablet computer, and/or laptop computer.11.An equipment that includes:a package substrate, the package substrate including a first surface and a second surface;a die, the die being coupled to a first surface of the package substrate;An encapsulation layer that encapsulates the die;A first device configured to provide an electrical connection of a die to a cable, the first device being coupled to a first outer surface of the encapsulation layer.12.The apparatus of claim 11 wherein said first device is further coupled to a second surface of said package substrate.13.The apparatus of claim 11 wherein said first device is further coupled to a second outer surface of said encapsulation layer, wherein said first outer surface is a top surface of said encapsulation layer and said The second outer surface is a side surface of the encapsulation layer.14.The apparatus of claim 11 wherein said equipment is coupled to a carrier, said first device being configured to be directly coupled to said carrier.15.The apparatus of claim 11 wherein said first device is configured to provide an electrical path to said equipment by at least one of a power signal and/or a data signal.16.The equipment according to claim 11, wherein said equipment is incorporated into at least one of: a music player, a video player, an entertainment 
unit, a navigation device, a communication device, a mobile device, a mobile phone, a smart phone , personal digital assistant, fixed location terminal, tablet computer, and/or laptop computer.17.A cable to die device that includes:a housing, the housing including a cavity, wherein the cavity is configured to be coupled to an integrated device package;a cable sleeve coupled to the outer casing;a set of cables in the cable sleeve;a first set of metal layers, the first set of metal layers being coupled to the set of cables, the first set of metal layers being located in the outer casing, wherein the first set of metal layers are configured to be coupled to the A second set of metal layers in the integrated device package.18.The cable-to-die device of claim 17 wherein said first set of metal layers and said set of cables are configured to provide at least one of a power signal and/or a data signal to The electrical path of the integrated device package.19.The cable to die device of claim 17 further comprising a shielding layer within said outer casing.20.The cable-to-die device of claim 17 wherein said cavity defines a first inner surface and a second inner surface in said outer casing, said first set of metal layers being coupled to said outer casing First inner surface.21.The cable-to-die device of claim 20, further comprising a third set of metal layers coupled to said set of cables, wherein said third set of metal layers Coupled to a second inner surface of the outer casing.22.The cable-to-die device of claim 17 further comprising:a first set of interconnects, the first set of interconnects being coupled to the first set of metal layers;A first set of interfaces, the first set of interfaces coupled to the set of cables and the first set of interconnects.23.The cable-to-die device of claim 17 wherein said cable-to-die device is incorporated into at least one of: a music player, a video player, an entertainment unit, a navigation device, a communication Devices, mobile devices, mobile phones, smart phones, personal digital assistants, fixed location terminals, tablet computers, and/or laptop computers.24.An equipment that includes:a housing, the housing including a cavity, wherein the cavity is configured to be coupled to an integrated device package;a cable sleeve coupled to the outer casing;a set of cables in the cable sleeve;a first device, the first device configured to provide an electrical connection, the first device coupled to the set of cables, the first device being located in the housing, wherein the first device is configured to A second set of metal layers coupled to the integrated device package.25.The apparatus of claim 24 wherein said first device and said set of cables are configured to provide an electrical path to said integrated device package by at least one of a power signal and/or a data signal .26.The apparatus of claim 24 further comprising a shield layer within said outer casing.27.The apparatus of claim 24 wherein said cavity defines a first inner surface and a second inner surface in said outer casing, said first means being coupled to a first inner surface of said outer casing.28.The apparatus of claim 27 wherein said first device is further coupled to a second inner surface of said outer casing.29.The apparatus of claim 24, further comprising:a first set of interconnects, the first set of interconnects being coupled to the first device;A first interface device, the first interface device configured to be coupled to the first set of interconnects and the set of 
cables.30.The apparatus according to claim 24, wherein said equipment is incorporated into at least one of: a music player, a video player, an entertainment unit, a navigation device, a communication device, a mobile device, a mobile phone, a smart phone , personal digital assistant, fixed location terminal, tablet computer, and/or laptop computer. |
a die package including a die to cable connector and a cable to die connector configured to be coupled to the die packageCross-reference to related applicationsThe present application claims priority to and the benefit of U.S. Patent Application Serial No. 14/254,764, filed on Apr.backgroundfieldEach feature relates to a die package including a connector and a connector configured to be coupled to the die package.Background techniqueFIG. 1 illustrates a plan view of a conventional integrated device assembly 100. As shown in FIG. 1, the integrated device assembly 100 includes a printed circuit board (PCB) 102, a first die package 104, a second die package 106, a third die package 108, a first capacitor 110, and a second capacitor 112. The third capacitor 114 and the fourth capacitor 116. First die package 104, second die package 106, third die package 108, first capacitor 110, second capacitor 112, third capacitor 114, and fourth capacitor 116 are coupled to the surface of PCB 102.PCB 102 includes a connection region 120. The connection area 120 of the PCB 102 includes a first connector 122, a second connector 124, and a third connector 126. The first connector 122, the second connector 124, and the third connector 126 are cable to board connectors. The first connector 122, the second connector 124, and the third connector 126 are configured to be coupled to a set of cable connectors. As shown in FIG. 1, connectors 122, 124, and 126 occupy a lot of space on PCB 102.2 illustrates a cross-sectional view of a cross section AA of the integrated device assembly 100 of FIG. 2 illustrates a PCB 102, a first die package 104, a third die package 108, a first capacitor 110, a second capacitor 112, a third capacitor 114, and a fourth capacitor 116. FIG. 2 also illustrates a second connector 124 coupled to PCB 102. The second connector 124 is configured to be coupled to the connector head 200. The connector head 200 is coupled to a power source (eg, a battery). The connector head 200 provides a power signal that passes through the second connector 124, through the PCB 102, through at least one capacitor (eg, the second capacitor 112), and then to the second die package 106.This configuration of the assembly 100 has several disadvantages. First, as noted above, connectors 122, 124, and 126 occupy a number of valuable spaces on PCB 102. This limits how small the assembly 100 can be. Second, the increase in connectors 122, 124, and 126 increases the distance that the power signal travels through to the die package, which can result in signal degradation, especially at low voltages. Furthermore, signal degradation can result in poor performance of integrated circuits in the die package. Third, the additional connectors 122, 124, and 126 may add undesirable cost and weight to the assembly 100.Therefore, there is a need for cost effective integrated device assemblies that have a low profile but also occupy as small a mesa space as possible. Ideally, such integrated device assemblies.OverviewThe various features, devices, and methods described herein provide a die package including a connector and a connector configured to be coupled to the die package.A first example provides an integrated device package that includes a package substrate, a die, an encapsulation layer, and a first set of metal layers. The package substrate includes a first surface and a second surface. The die is coupled to the first surface of the package substrate. The encapsulation layer encapsulates the die. 
A first set of metal layers is coupled to the first outer surface of the encapsulation layer.According to an aspect, the integrated device package further includes a second set of metal layers coupled to the second surface of the package substrate. In some implementations, the integrated device package further includes a set of solder balls coupled to the second set of metal layers.According to an aspect, the first set of metal layers are configured to operate as a die-to-cable connector of the integrated device package.According to an aspect, the integrated device package further includes a second set of metal layers coupled to the second outer surface of the encapsulation layer, wherein the first outer surface is a top surface of the encapsulation layer and the second outer surface is a side surface of the encapsulation layer. In some implementations, the integrated device package further includes a third set of metal layers coupled to the second surface of the package substrate, wherein the third set of metal layers is coupled to the second set of metal layers. In some implementations, the integrated device package is coupled to a carrier and the second set of metal layers are configured to be directly coupled to the carrier.According to an aspect, the first set of metal layers are configured to provide an electrical path to the integrated device package by at least one of a power signal and/or a data signal.According to one aspect, the integrated device package is at least one of a die package and/or a chip package.According to one aspect, the integrated device package is incorporated into at least one of: a music player, a video player, an entertainment unit, a navigation device, a communication device, a mobile device, a mobile phone, a smart phone, a personal digital assistant, a fixed location terminal, In a tablet computer, and/or a laptop computer.A second example provides an apparatus comprising a package substrate, a die, an encapsulation layer, and a first device. The package substrate includes a first surface and a second surface. The die is coupled to the first surface of the package substrate. The encapsulation layer encapsulates the die. The first device is configured to provide an electrical connection of the die to the cable. The first device is coupled to the first outer surface of the encapsulation layer.According to an aspect, the first device is further coupled to the second surface of the package substrate.According to an aspect, the first device is further coupled to the second outer surface of the encapsulation layer, wherein the first outer surface is a top surface of the encapsulation layer and the second outer surface is a side surface of the encapsulation layer.According to an aspect, the apparatus is coupled to a carrier, wherein the first device is configured to be directly coupled to the carrier.According to an aspect, the first device is configured to provide an electrical path to the equipment by at least one of a power signal and/or a data signal.According to one aspect, the equipment is incorporated into at least one of: a music player, a video player, an entertainment unit, a navigation device, a communication device, a mobile device, a mobile phone, a smart phone, a personal digital assistant, a fixed location terminal, a tablet Computer, and/or laptop.A third example provides a cable to die device that includes a housing, a cable, a cable sleeve, a set of cables, and a first set of metal layers. 
The housing includes a cavity, wherein the cavity is configured to be coupled to an integrated device package. A cable sleeve is coupled to the outer casing. The set of cables is in the cable jacket. A first set of metal layers is coupled to the set of cables. A first set of metal layers is located in the housing, wherein the first set of metal layers are configured to be coupled to a second set of metal layers of the integrated device package.According to an aspect, the first set of metal layers and the set of cables are configured to provide an electrical path to the integrated device package by at least one of a power signal and/or a data signal.According to one aspect, the cable to die device further includes a shielding layer located within the outer casing.According to an aspect, the cavity forms a first inner surface and a second inner surface in the outer casing. A first set of metal layers is coupled to the first inner surface of the outer casing. In some implementations, the cable to die device further includes a third set of metal layers coupled to the set of cables, wherein the third set of metal layers is coupled to the second inner surface of the outer casing.According to an aspect, the cable to die device further includes a first set of interconnects coupled to the first set of metal layers, and a first set of interfaces coupled to the set of cables and the first set of interconnects.According to one aspect, the cable to die device is incorporated into at least one of: a music player, a video player, an entertainment unit, a navigation device, a communication device, a mobile device, a mobile phone, a smart phone, a personal digital assistant, a fixed In a location terminal, a tablet computer, and/or a laptop computer.A fourth example provides an apparatus that includes a housing, a cable sleeve, a set of cables, and a first device. The housing includes a cavity, wherein the cavity is configured to be coupled to an integrated device package. A cable sleeve is coupled to the outer casing. The set of cables is in the cable jacket. The first device is configured to provide an electrical connection. A first device is coupled to the set of cables. The first device is located in the housing, wherein the first device is configured to be coupled to the second set of metal layers of the integrated device package.According to an aspect, the first device and the set of cables are configured to provide an electrical path to the integrated device package by at least one of a power signal and/or a data signal.According to an aspect, the apparatus includes a shielding layer located within the outer casing.According to an aspect, the cavity forms a first inner surface and a second inner surface in the outer casing, the first device being coupled to the first inner surface of the outer casing. 
In some implementations, the first device is further coupled to the second inner surface of the outer casing.According to an aspect, the apparatus includes a first set of interconnects coupled to the first device, and a first interface device configured to couple to the first set of interconnects and the set of cables.According to one aspect, the equipment is incorporated into at least one of: a music player, a video player, an entertainment unit, a navigation device, a communication device, a mobile device, a mobile phone, a smart phone, a personal digital assistant, a fixed location terminal, a tablet Computer, and/or laptop.DrawingThe various features, aspects, and advantages of the invention will be apparent from the accompanying drawings.Figure 1 illustrates a plan view of a conventional integrated device assembly.Figure 2 illustrates a cross-sectional view of a conventional integrated device assembly.3 illustrates a cross-sectional view of an integrated device assembly including a die package including a connector.4 illustrates an oblique view of an integrated device assembly including a die package including a connector.Figure 5 illustrates a plan view of a die package including a connector.6 illustrates a plan view of an internal cross section of a header connector configured to be coupled to a die package including a connector.Figure 7 illustrates a view of a die package including a connector coupled to a header connector.8 illustrates a plan view of an internal cross section of a header connector configured to be coupled to a die package including a connector.9 illustrates a plan view of an internal cross section of another head connector configured to be coupled to a die package including a connector.10 illustrates a cross-sectional view of an integrated device assembly including a die package including a connector.11 illustrates an oblique view of an integrated device assembly including a die package including a connector.Figure 12 illustrates a plan view of a die package including a connector.13 illustrates a plan view of an internal cross section of a header connector configured to be coupled to a die package including a connector.14 illustrates a view of a die package including a connector coupled to a header connector.15 illustrates a plan view of an internal cross section of a header connector configured to be coupled to a die package including a connector.16 illustrates a plan view of an internal cross section of another head connector configured to be coupled to a die package including a connector.17 illustrates a cross-sectional view of an integrated device assembly including a die package including a connector.18 illustrates an oblique view of an integrated device assembly including a die package including a connector.Figure 19 illustrates a cross-sectional view of a connector head including a shield.Figure 20 illustrates a cross-sectional view of another connector head including a shield.21 illustrates a cross-sectional view of a die package including a die-to-cable connector.22 illustrates a cross-sectional view of another die package including a die-to-cable connector.Figure 23 illustrates a sequence for providing a die package including a die to cable connector.24 illustrates a flow chart of a method for providing a die package including a die to cable connector.Figure 25 illustrates a cross-sectional view of an integrated device assembly including a die package including a connector.Figure 26 illustrates a cross-sectional view of another die package including a 
die to cable connector.27 illustrates various electronic devices that can integrate the integrated devices, connectors, semiconductor devices, dies, package substrates, integrated circuits, and/or PCBs described herein.A detailed descriptionIn the following description, specific details are set forth to provide a thorough understanding of the various aspects of the disclosure. However, those of ordinary skill in the art will understand that these aspects can be practiced without these specific details. For example, the circuits may be shown in block diagrams to avoid obscuring these aspects in unnecessary detail. In other instances, well-known circuits, structures, and techniques may not be shown in detail to avoid obscuring the aspects of the disclosure.OverviewSome novel features relate to an integrated device package (eg, a die package) that includes a package substrate, a die (eg, a wafer level die), an encapsulation layer, and a first set of metal layers. The package substrate includes a first surface and a second surface. A die (eg, a wafer level die) is coupled to the first surface of the package substrate. The encapsulation layer encapsulates the die. A first set of metal layers is coupled to the first outer surface of the encapsulation layer. In some implementations, the first set of metal layers are configured to operate as a die-to-cable connector of the integrated device package. In some implementations, the integrated device package further includes a second set of metal layers coupled to the second surface of the package substrate. In some implementations, the integrated device package further includes a second set of metal layers coupled to the second outer surface of the encapsulation layer, wherein the first outer surface is a top surface of the encapsulation layer and the second outer surface is a side surface of the encapsulation layer . In some implementations, the integrated device package is coupled to a carrier (eg, a printed circuit board) and the second set of metal layers are configured to be directly coupled to the carrier. In some implementations, the first set of metal layers are configured to provide at least one of a power signal and/or a data signal to an electrical path to/from the integrated device package.Some novel features relate to a cable to die device that includes a housing, a cable sleeve, a set of cables, and a first set of metal layers. The housing includes a cavity configured to be coupled to the integrated device package. A cable sleeve is coupled to the outer casing. The set of cables is located in the cable jacket. A first set of metal layers is coupled to the set of cables. A first set of metal layers is located in the housing, wherein the first set of metal layers are configured to be coupled to a second set of metal layers of the integrated device package. In some implementations, the first set of metal layers and the set of cables are configured to provide at least one of a power signal and/or a data signal an electrical path to/from the integrated device package. In some implementations, the cable to die device further includes a shielding layer located within the outer casing. In some implementations, the cavity forms a first inner surface and a second inner surface in the outer casing. A first set of metal layers is coupled to the first inner surface of the outer casing. 
In some implementations, the cable to die device further includes a third set of metal layers coupled to the set of cables, wherein the third set of metal layers is coupled to the second inner surface of the outer casing. In some implementations, the cable-to-die device further includes a first set of interconnects coupled to the first set of metal layers, and a first set of interfaces coupled to the set of cables and the first set of interconnects.Exemplary integrated device assembly including a die package including a die-to-cable connectorFIG. 3 conceptually illustrates an example of an integrated device assembly 300 that includes a die package 302, a carrier 304, and a connector head 306. The die package 302 is coupled to the carrier 304 by a set of solder balls 308 (eg, a solder ball grid array). However, the die package 302 can be coupled to the carrier 304 by other forms of interconnects (eg, a ground pad grid array). Different materials can be used for the carrier 304 for different implementations. In some implementations, the carrier 304 is a printed circuit board (PCB). In some implementations, the carrier 304 is a substrate (eg, a composite substrate).FIG. 3 illustrates a die package 302 including a first set of connectors 312. Different implementations can use different die packages. Examples of die packages are described in further detail in Figures 21-22 and 26. In some implementations, the first set of connectors 312 is a die-to-cable connector. The first set of connectors 312 are located on the sides of the die package 302 (eg, on the package portion of the die package). However, the first set of connectors 312 can be located on other portions of the die package 302. In some implementations, the first set of connectors 312 are metal layers that are coupled to the die package 302. The first set of connectors 312 are configured to provide an electrical path for a signal (eg, a power signal, a data signal, or a ground signal). In some implementations, the first set of connectors 312 are coupled to one or more solder balls 308. In the case where a ground pad array is used instead of a solder ball, the first set of connectors 312 can be coupled to one or more ground pads.Figure 3 further illustrates the connector head 306. In some implementations, the connector head 306 is a cable to die connector device. The connector head 306 includes a second set of connectors 316. The second set of connectors 316 can include a metal layer. The connector head 306 also includes a set of cables (not visible) that are coupled to the second set of connectors 316. In some implementations, the set of cables includes a set of cables. An example of a set of cables is further described in Figures 8 and 9. In some implementations, the connector head 306 is configured to be coupled to a die package (eg, the die package 302). For example, the second set of connectors 316 of the connector head 306 can be configured to be coupled to the first set of connectors 312 of the die package 302. In some implementations, the connector head 306 is coupled to a power source (eg, a battery). In some implementations, the connector head 306 is configured to provide an electrical path from the power source to the die package 302. In some implementations, the connector header 306 is configured to provide an electrical path to/from the die package 302 for the data signals. 
In some implementations, the connector head 306 is configured to provide an electrical path to/from the die package 302 as a ground signal.Different implementations may provide different electrical paths to/from the die package 302 for power signals (from power supplies), data signals, and/or ground signals.In some implementations, power signals and/or data signals from the power source can pass through the cable of the connector head 306, the second set of connectors 316, the first set of connectors 312, at least the first set of solder balls 308 a solder ball (or ground pad grid array), a first set of traces in the carrier 304, at least one capacitor coupled to the carrier 304, a second set of traces in the carrier 304, and/or from a set of solder balls At least a second solder ball of 308 arrives at the die package 302. In some implementations, the ground signal can pass through the same path, a similar path, or a different path.In some implementations, power signals and/or data signals from the power source can pass through the cable of the connector head 306, the second set of connectors 316, the first set of connectors 312, and/or from a set of solder balls 308. At least a first solder ball (or a grid of ground pad grids) arrives at the die package 302. In some implementations, the ground signal can pass through the same path, a similar path, or a different path.The integrated device assembly 300 of Figure 3 provides several technical advantages over conventional integrated device assemblies. First, providing a cable directly coupled to the die package to the die connector head 306 saves valuable space on the carrier 304 because the connector is implemented and/or integrated on the die package rather than the carrier 304 on. Second, providing a connector on the die package 302 reduces the distance required for signals (eg, power signals, data signals) to pass through to the die package, which can result in improved signal to the die package and/or to the tube Better signal for core package, especially at low voltages. For example, in some implementations, the signal can bypass interconnects (eg, traces, pads, vias) in a carrier (eg, a PCB). In some implementations, the signal can still pass through some of the interconnects of the carrier, but the distance will be much shorter. Third, the absence of a separate connector coupled to the connector head reduces the cost and weight of the assembly 300. Fourth, the absence of a separate connector on the carrier 304 simplifies the design of interconnections (e.g., traces) in the carrier 304, as the interconnections on the carrier 304 would not have to be designed around the connector.4 conceptually illustrates an oblique view of an example of an integrated device assembly 400 that includes a die package 402, a carrier 404, a connector header 406, a first capacitor 408, and a second capacitor 410. The die package 402 is coupled to the carrier 404 by a set of solder balls (not visible). However, the die package 402 can be coupled to the carrier 404 by other forms of interconnects (eg, a ground pad grid array). Different materials can be used for the carrier 404 for different implementations. In some implementations, the carrier 404 is a printed circuit board (PCB). In some implementations, the carrier 404 is a substrate (eg, a composite substrate).FIG. 4 illustrates that the die package 402 includes a first set of connectors 412. Different implementations can use different die packages. 
Examples of die packages are described in further detail in Figures 21-22 and 26. In some implementations, the first set of connectors 412 is a die-to-cable connector. The first set of connectors 412 are located on the sides of the die package 402 (eg, on the package portion of the die package). However, the first set of connectors 412 can be located on other portions of the die package 402. In some implementations, the first set of connectors 412 is a metal layer that is coupled to the die package 402. The first set of connectors 412 are configured to provide electrical paths for signals (eg, power signals, data signals). In some implementations, the first set of connectors 412 are coupled to one or more solder balls (eg, solder balls 308). In the case where a ground pad array is used instead of a solder ball, the first set of connectors 412 can be coupled to one or more ground pads.Figure 4 further illustrates the connector head 406. In some implementations, the connector head 406 is a cable to die connector device. The connector head 406 includes a second set of connectors 416. The second set of connectors 416 can include a metal layer. The connector head 406 also includes a set of cables (not visible) coupled to the second set of connectors 416. In some implementations, the set of cables includes a set of cables. An example of a set of cables is further described in Figures 8 and 9. In some implementations, the connector head 406 is configured to be coupled to a die package (eg, the die package 402). For example, the second set of connectors 416 of the connector head 406 can be configured to be coupled to the first set of connectors 412 of the die package 402. In some implementations, the connector head 406 is coupled to a power source (eg, a battery). In some implementations, the connector head 406 is configured to provide an electrical path from the power source to the die package 402. In some implementations, the connector header 406 is configured to provide an electrical path for data signals to the die package 402. In some implementations, the connector head 406 is configured to provide an electrical path from the die package 402 as a ground signal.Different implementations may provide different electrical paths for the power signal (from the power supply), the data signal to the die package 402, and/or the ground signal from the die package 402.In some implementations, the power signal and/or data signal from the power source can pass through the cable of the connector head 406, the second set of connectors 416, the first set of connectors 412, at least the first from a set of solder balls a solder ball (or ground pad grid array), a first set of traces in the carrier 404, at least one capacitor coupled to the carrier 404 (eg, the first capacitor 408), a second set of traces in the carrier 404, and / or at least a second solder ball from a set of solder balls (eg, solder balls 308) to reach the die package 402. In some implementations, the ground signal can pass through the same path, a similar path, or a different path.In some implementations, power signals and/or data signals from the power source can pass through the cable of the connector head 406, the second set of connectors 416, the first set of connectors 412, and/or from a set of solder balls. At least a first solder ball (or a grid of ground pad grids) arrives at the die package 402. In some implementations, the ground signal can pass through the same path, a similar path, or a different path.FIG. 
5 illustrates a plan view of a die package 500 including a first set of connectors 502. In some implementations, the die package 500 is a die package that includes a die-to-cable connector. In some implementations, the first set of connectors 502 is a metal layer that is coupled to an encapsulation layer (eg, a mold) of the die package 500. In some implementations, the first set of connectors 502 is part of the die-to-cable connector of the die package 500.FIG. 5 illustrates that the first set of connectors 502 are located on the sidewalls of the die package 500. FIG. 5 further illustrates that the first set of connectors 502 are embedded in the die package 500 (eg, embedded in the encapsulation layer). In some implementations, the first set of interconnects 502 can be on the surface of the die package 500 (eg, on the surface of the package layer). Examples of die packages are described in further detail in Figures 21-22 and 26.FIG. 6 illustrates a plan view of the connector head 600. In particular, FIG. 6 illustrates an internal cross-sectional plan view of the connector head 600. In some implementations, the connector head 600 is a cable to die connector. As shown in FIG. 6, the connector head 600 includes a housing 602, a cable sleeve 604, and a second set of connectors 606. The outer casing 602 includes a cavity 608. The outer casing 602, the second set of connectors 606, and the cavity 608 are configured to be coupled to a die package (eg, the die package 500) that includes a set of connectors. The second set of connectors 606 are coupled to a set of cables in the cable sleeve 604 (not shown for purposes of clarity). In some implementations, the set of cables is configured to be coupled to a power source (eg, a battery) and/or a data signal source. An example of a cable in a cable sleeve is depicted in Figures 8-9.Figure 7 illustrates a plan view of a die package coupled to a connector head. As shown in FIG. 7, die package 500 is coupled to connector head 600. The die package 500 is coupled to the connector head 600 such that the first set of connectors 502 of the die package 500 are electrically coupled to the second set of connectors 606 of the connector head 600. FIG. 7 also illustrates that the die package 500 and the first set of connectors 502 are located within the cavity 608 of the outer casing 602 of the connector head 600. In some implementations, the housing 602 encapsulates the die package 500 when the connector head 600 is coupled to the die package 500.FIG. 8 illustrates a plan view of the connector head 800. In particular, FIG. 8 illustrates an internal cross-sectional plan view of the connector head 800. In some implementations, the connector head 800 is a cable to die connector device. As shown in FIG. 8, the connector head 800 includes a housing 802, a cable sleeve 804, a second set of connectors 806, and a set of cables 808. The outer casing 802 and the second set of connectors 806 are configured to be coupled to a die package (eg, die package 500) that includes a set of connectors. A second set of connectors 806 are coupled to a set of cables 808 in the cable sleeve 804. In some implementations, the set of cables 808 are configured to be coupled to a power source (eg, a battery) and/or a data signal source.FIG. 9 illustrates a plan view of connector head 900. In particular, FIG. 9 illustrates an internal cross-sectional plan view of the connector head 900. In some implementations, the connector head 900 is a cable to die connector device. As shown in FIG. 
9, connector head 900 includes a housing 902, a cable sleeve 904, a second set of connectors 906, a set of interconnects (eg, traces) 908, and a set of cables 910. The outer casing 902 and the second set of connectors 906 are configured to be coupled to a die package (eg, the die package 500) that includes a set of connectors. A second set of connectors 906 are coupled to a set of interconnects 908 in the housing 902. The set of interconnects 908 are coupled to a set of cables 910 in the cable sleeve 904. In some implementations, the set of cables 910 has a larger size than the set of interconnects 908. In some implementations, the set of cables 910 is coupled to the set of interconnects 908 by a set of interfaces 912. Different implementations can use different interfaces. In some implementations, the set of interfaces 912 can include one of at least a metallic material and/or a bonding material capable of conducting electrical signals. In some implementations, the set of interfaces 912 can include a structure that couples the set of interconnects 908 and the set of cables 910. The structure can include a conductive material (eg, a metal), a non-conductive material (eg, a plastic), and/or a bonding material. In some implementations, the set of cables 910 is configured to be coupled to a power source (eg, a battery) and/or a data signal source.Exemplary integrated device assembly including a die package including a die-to-cable connectorFIG. 10 conceptually illustrates an example of an integrated device assembly 1000 including a die package 1002, a carrier 1004, and a connector head 1006. The die package 1002 is coupled to the carrier 1004 by a set of solder balls 1008 (eg, a solder ball grid array). However, the die package 1002 can be coupled to the carrier 1004 by other forms of interconnects (eg, a ground pad grid array). Different materials can be used for the carrier 1004 in different implementations. In some implementations, the carrier 1004 is a printed circuit board (PCB). In some implementations, the carrier 1004 is a substrate (eg, a composite substrate).FIG. 10 illustrates that the die package 1002 includes a first set of connectors 1012 and a second set of connectors 1022. Different implementations can use different die packages. Examples of die packages are described in further detail in Figures 21-22 and 26. In some implementations, the first set of connectors 1012 and the second set of connectors 1022 are die-to-cable connectors. The first set of connectors 1012 are located on the sides of the die package 1002 (eg, on the package portion of the die package). However, the first set of connectors 1012 can be located on other portions of the die package 1002. The second set of connectors 1022 are located on a first surface (eg, a top surface) of the die package 1002 (eg, on a top surface of the package portion of the die package 1002).In some implementations, the first set of connectors 1012 and the second set of connectors 1022 are metal layers that are coupled to the die package 1002. The first set of connectors 1012 and the second set of connectors 1022 are configured to provide electrical paths for signals (eg, power signals, data signals). In some implementations, the first set of connectors 1012 are coupled to one or more solder balls 1008. In the case where a ground pad array is used instead of a solder ball, the first set of connectors 1012 can be coupled to one or more ground pads. A second set of connectors 1022 is coupled to the first set of connectors 1012.FIG. 
10 further illustrates the connector head 1006. In some implementations, the connector head 1006 is a cable to die connector device. The connector head 1006 includes a third set of connectors 1016 and a fourth set of connectors 1026. The third set of connectors 1016 and the fourth set of connectors 1026 can comprise a metal layer. The connector head 1006 also includes a set of cables (not shown for purposes of clarity) that are coupled to the third set of connectors 1016 and/or the fourth set of connectors 1026. In some implementations, the set of cables includes a set of cables. An example of a set of cables is further described in Figures 15 and 16. In some implementations, the connector head 1006 is configured to be coupled to a die package (eg, the die package 1002). For example, the third set of connectors 1016 of the connector head 1006 can be configured to be coupled to the first set of connectors 1012 of the die package 1002. Similarly, the fourth set of connectors 1026 of the connector head 1006 can be configured to be coupled to the second set of connectors 1022 of the die package 1002. In some implementations, the connector head 1006 is coupled to a power source (eg, a battery). In some implementations, the connector head 1006 is configured to provide an electrical path from the power source to the die package 1002. In some implementations, the connector header 1006 is configured to provide an electrical path for data signals to the die package 1002. In some implementations, the connector head 1006 is configured to provide an electrical path from the die package 1002 as a ground signal.Different implementations may provide different electrical paths for the power signal (from the power supply), the data signal to the die package 1002, and/or the ground signal from the die package 1002.In some implementations, power signals and/or data signals from the power source can pass through the cable of the connector head 1006, the third set of connectors 1016, the fourth set of connectors 1026, the second set of connectors 1022, the first a set of connectors 1012, at least a first solder ball (or ground pad grid array) from a set of solder balls 1008, a first set of traces in the carrier 1004, at least one capacitor coupled to the carrier 1004, in the carrier 1004 A second set of traces, and/or at least a second solder ball from a set of solder balls 1008, reaches the die package 1002. In some implementations, the ground signal can pass through the same path, a similar path, or a different path.In some implementations, power signals and/or data signals from the power source can pass through the cable of the connector head 1006, the third set of connectors 1016, the second set of connectors 1026, the second set of connectors 1022, the first The set of connectors 1012 and/or at least a first solder ball (or ground pad grid array) from a set of solder balls 1008 arrive at the die package 1002. In some implementations, the ground signal can pass through the same path, a similar path, or a different path.The integrated device assembly 1000 of Figure 10 provides several technical advantages over conventional integrated device assemblies. First, providing a cable directly coupled to the die package to the die connector head 1006 saves valuable space on the carrier 1004 because the connector is implemented and/or integrated on the die package rather than the carrier 1004 on. 
Second, providing a connector on the die package 1002 reduces the distance that signals (e.g., power signals, data signals) must travel to reach the die package, which can result in an improved signal to the die package and/or a better signal to the die package, especially at low voltages. For example, in some implementations, the signal can bypass interconnects (e.g., traces, pads, vias) in a carrier (e.g., a PCB). In some implementations, the signal can still pass through some of the interconnects of the carrier, but the distance will be much shorter. Third, the absence of a separate connector coupled to the connector head reduces the cost and weight of the assembly 1000. Fourth, the absence of a separate connector on the carrier 1004 simplifies the design of interconnects (e.g., traces) in the carrier 1004 because the interconnects in the carrier 1004 will not have to be designed around the connector.
FIG. 11 conceptually illustrates an oblique view of an example of an integrated device assembly 1100 that includes a die package 1102, a carrier 1104, a connector head 1106, a first capacitor 1108, and a second capacitor 1110. The die package 1102 is coupled to the carrier 1104 by a set of solder balls (not visible). However, the die package 1102 can be coupled to the carrier 1104 by other forms of interconnects (e.g., a land grid array). Different materials can be used for the carrier 1104 in different implementations. In some implementations, the carrier 1104 is a printed circuit board (PCB). In some implementations, the carrier 1104 is a substrate (e.g., a composite substrate).
FIG. 11 illustrates that the die package 1102 includes a first set of connectors 1112 and a second set of connectors 1122. Different implementations can use different die packages. Examples of die packages are described in further detail in Figures 21-22 and 26. In some implementations, the first set of connectors 1112 and the second set of connectors 1122 are die-to-cable connectors. The first set of connectors 1112 are located on the sides of the die package 1102 (e.g., on the package portion of the die package). However, the first set of connectors 1112 can be located on other portions of the die package 1102. The second set of connectors 1122 are located on a surface (e.g., a top surface) of the die package 1102 (e.g., on a top surface of the package portion of the die package). In some implementations, the first set of connectors 1112 and the second set of connectors 1122 are metal layers that are coupled to the die package 1102.
The first set of connectors 1112 and the second set of connectors 1122 are configured to provide electrical paths for signals (e.g., power signals, data signals). In some implementations, the first set of connectors 1112 are coupled to one or more solder balls (e.g., the solder balls 1008). In the case where a land grid array is used instead of solder balls, the first set of connectors 1112 can be coupled to one or more land pads. In some implementations, the second set of connectors 1122 are coupled to the first set of connectors 1112.
FIG. 11 further illustrates the connector head 1106. In some implementations, the connector head 1106 is a cable-to-die connector device. The connector head 1106 includes a third set of connectors 1116 and a fourth set of connectors 1126. The third set of connectors 1116 and the fourth set of connectors 1126 can include a metal layer.
The connector head 1106 also includes a set of cables (not visible) that are coupled to the third set of connectors 1116 and/or the fourth set of connectors 1126. In some implementations, the set of cables includes one or more cables. An example of a set of cables is further described in Figures 15 and 16. In some implementations, the connector head 1106 is configured to be coupled to a die package (e.g., the die package 1102). For example, the third set of connectors 1116 of the connector head 1106 can be configured to be coupled to the first set of connectors 1112 of the die package 1102. Similarly, the fourth set of connectors 1126 of the connector head 1106 can be configured to be coupled to the second set of connectors 1122 of the die package 1102. In some implementations, the connector head 1106 is coupled to a power source (e.g., a battery). In some implementations, the connector head 1106 is configured to provide an electrical path from the power source to the die package 1102. In some implementations, the connector head 1106 is configured to provide an electrical path for data signals to the die package 1102. In some implementations, the connector head 1106 is configured to provide an electrical path for a ground signal from the die package 1102.
Different implementations may provide different electrical paths for the power signal (from the power source), the data signal to the die package 1102, and/or the ground signal from the die package 1102. In some implementations, power signals and/or data signals from the power source can pass through the cables of the connector head 1106, the third set of connectors 1116, the fourth set of connectors 1126, the second set of connectors 1122, the first set of connectors 1112, at least a first solder ball (or land pad), a first set of traces in the carrier 1104, at least one capacitor coupled to the carrier 1104, a second set of traces in the carrier 1104, and/or at least a second solder ball to reach the die package 1102. In some implementations, power signals and/or data signals from the power source can pass through the cables of the connector head 1106, the third set of connectors 1116, the fourth set of connectors 1126, the second set of connectors 1122, the first set of connectors 1112, and/or at least a first solder ball (or land pad) to reach the die package 1102.
FIG. 12 illustrates a plan view of a die package 1200 including a first set of connectors 1202 and a second set of connectors 1204. In some implementations, the die package 1200 is a die package that includes a die-to-cable connector. In some implementations, the first set of connectors 1202 and the second set of connectors 1204 are metal layers that are coupled to an encapsulation layer (e.g., a mold) of the die package 1200. In some implementations, the first set of connectors 1202 and/or the second set of connectors 1204 are part of a die-to-cable connector of the die package 1200.
FIG. 12 illustrates that the first set of connectors 1202 are on the sidewalls of the die package 1200 and the second set of connectors 1204 are on a first surface (e.g., a top surface) of the die package 1200. FIG. 12 further illustrates that the first set of connectors 1202 are embedded in the die package 1200 (e.g., embedded in the encapsulation layer). In some implementations, the first set of connectors 1202 can be on the surface of the die package 1200 (e.g., on the surface of the encapsulation layer).
Similarly, FIG. 12 further illustrates that the second set of connectors 1204 are embedded in the die package 1200 (e.g., embedded in the encapsulation layer). In some implementations, the second set of connectors 1204 can be on the surface of the die package 1200 (e.g., on the surface of the encapsulation layer). Examples of die packages are described in further detail in Figures 21-22 and 26.
FIG. 13 illustrates a plan view of the connector head 1300. Specifically, FIG. 13 illustrates an internal cross-sectional plan view of the connector head 1300. In some implementations, the connector head 1300 is a cable-to-die connector device. As shown in FIG. 13, the connector head 1300 includes a housing 1302, a cable sleeve 1304, a third set of connectors 1306, and a fourth set of connectors 1308. The housing 1302 includes a cavity. The housing 1302, the third set of connectors 1306, the fourth set of connectors 1308, and the cavity are configured to be coupled to a die package (e.g., the die package 1200) that includes a set of connectors. The third set of connectors 1306 and the fourth set of connectors 1308 are coupled to a set of cables in the cable sleeve 1304 (not shown for purposes of clarity). In some implementations, the set of cables is configured to be coupled to a power source (e.g., a battery) and/or a data signal source.
FIG. 14 illustrates a plan view of a die package coupled to a connector head. As shown in FIG. 14, the die package 1200 is coupled to the connector head 1300. The die package 1200 is coupled to the connector head 1300 such that the first set of connectors 1202 of the die package 1200 are electrically coupled to the third set of connectors 1306 of the connector head 1300. Similarly, the die package 1200 is coupled to the connector head 1300 such that the second set of connectors 1204 of the die package 1200 are electrically coupled to the fourth set of connectors 1308 of the connector head 1300. FIG. 14 also illustrates that the die package 1200 and the first set of connectors 1202 are located within the cavity of the housing 1302 of the connector head 1300. In some implementations, the housing 1302 encapsulates the die package 1200 when the connector head 1300 is coupled to the die package 1200.
FIG. 15 illustrates a plan view of the connector head 1500. In particular, FIG. 15 illustrates an internal cross-sectional plan view of the connector head 1500. In some implementations, the connector head 1500 is a cable-to-die connector device. As shown, the connector head 1500 includes a housing 1502, a cable sleeve 1504, a first set of connectors 1506, a first set of cables 1508, a second set of connectors 1516, and a second set of cables 1518. The housing 1502, the first set of connectors 1506, and the second set of connectors 1516 are configured to be coupled to a die package (e.g., the die package 1200) that includes a set of connectors. The first set of connectors 1506 are coupled to the first set of cables 1508 in the cable sleeve 1504. The second set of connectors 1516 are coupled to the second set of cables 1518 in the cable sleeve 1504. In some implementations, the first set of cables 1508 and the second set of cables 1518 are configured to be coupled to a power source (e.g., a battery) and/or a data signal source.
FIG. 16 illustrates a plan view of the connector head 1600. In particular, FIG. 16 illustrates an internal cross-sectional plan view of the connector head 1600. In some implementations, the connector head 1600 is a cable-to-die connector device.
As shown in FIG. 16, the connector head 1600 includes a housing 1602, a cable sleeve 1604, a first set of connectors 1606, a first set of interconnects (e.g., traces) 1608, a first set of cables 1610, a second set of connectors 1616, a second set of interconnects (e.g., traces) 1618, and a second set of cables 1620. The housing 1602, the first set of connectors 1606, and the second set of connectors 1616 are configured to be coupled to a die package (e.g., the die package 1102) that includes a set of connectors. The first set of connectors 1606 are coupled to the first set of interconnects 1608 in the housing 1602. The second set of connectors 1616 are coupled to the second set of interconnects 1618 in the housing 1602. The first set of interconnects 1608 are coupled to the first set of cables 1610 in the cable sleeve 1604. The second set of interconnects 1618 are coupled to the second set of cables 1620 in the cable sleeve 1604. In some implementations, the first set of cables 1610 has a larger size than the first set of interconnects 1608. In some implementations, the second set of cables 1620 has a larger size than the second set of interconnects 1618. In some implementations, the first set of cables 1610 are coupled to the first set of interconnects 1608 by a set of interfaces 1622. In some implementations, the set of interfaces 1622 can comprise at least one of a metallic material and/or a bonding material capable of conducting electrical signals. In some implementations, the set of interfaces 1622 can include a structure that couples the first set of interconnects 1608 and the first set of cables 1610. The structure can include a conductive material (e.g., a metal), a non-conductive material (e.g., a plastic), and/or a bonding material. In some implementations, the second set of cables 1620 are coupled to the second set of interconnects 1618 by a set of interfaces 1622. In some implementations, the first set of cables 1610 and the second set of cables 1620 are configured to be coupled to a power source (e.g., a battery) and/or a data signal source.
Exemplary integrated device assembly including a die package including a die-to-cable connector
FIG. 17 conceptually illustrates an example of an integrated device assembly 1700 that includes a die package 1702, a carrier 1704, and a connector head 1706. The die package 1702 is coupled to the carrier 1704 by a set of solder balls 1708 (e.g., a solder ball grid array). However, the die package 1702 can be coupled to the carrier 1704 by other forms of interconnects (e.g., a land grid array). Different materials may be used for the carrier 1704 in different implementations. In some implementations, the carrier 1704 is a printed circuit board (PCB). In some implementations, the carrier 1704 is a substrate (e.g., a composite substrate).
FIG. 17 illustrates that the die package 1702 includes a first set of connectors 1712. Different implementations can use different die packages. An example of a die package is described in further detail in Figures 21-22. In some implementations, the first set of connectors 1712 is a die-to-cable connector. The first set of connectors 1712 are located on the sides of the die package 1702 (e.g., on the package portion of the die package). However, the first set of connectors 1712 can be located on other portions of the die package 1702. In some implementations, the first set of connectors 1712 is a metal layer that is coupled to the die package 1702. The first set of connectors 1712 are configured to provide electrical paths for signals (e.g., power signals, data signals).
In some implementations, the first set of connectors 1712 are coupled to one or more solder balls 1708. In the case where a land grid array is used instead of solder balls, the first set of connectors 1712 can be coupled to one or more land pads.
FIG. 17 further illustrates the connector head 1706. In some implementations, the connector head 1706 is a cable-to-die connector device. The connector head 1706 includes a cavity 1720 that passes through the connector head 1706. In some implementations, the cavity 1720 is configured to be coupled to the die package 1702. In some implementations, when the connector head 1706 is coupled to the die package 1702, a portion of the die package 1702 (e.g., the top surface of the encapsulation layer) is exposed. Thus, in some implementations, when the connector head 1706 is coupled to the die package 1702, the connector head 1706 is coupled to the sidewalls of the die package 1702. One advantage of this configuration of the connector head 1706 is that it provides a low profile connector, which provides a low profile integrated device assembly.
The connector head 1706 includes a second set of connectors 1716. The second set of connectors 1716 can include a metal layer. The connector head 1706 also includes a set of cables (not shown for purposes of clarity) that are coupled to the second set of connectors 1716. In some implementations, the set of cables includes one or more cables. In some implementations, the connector head 1706 is configured to be coupled to a die package (e.g., the die package 1702). For example, the second set of connectors 1716 of the connector head 1706 can be configured to be coupled to the first set of connectors 1712 of the die package 1702. In some implementations, the connector head 1706 is coupled to a power source (e.g., a battery). In some implementations, the connector head 1706 is configured to provide an electrical path from the power source to the die package 1702. In some implementations, the connector head 1706 is configured to provide an electrical path for data signals to the die package 1702. In some implementations, the connector head 1706 is configured to provide an electrical path for a ground signal from the die package 1702.
Different implementations may provide different electrical paths for the power signal (from the power source), the data signal to the die package 1702, and/or the ground signal from the die package 1702. In some implementations, the power signal and/or data signal from the power source can pass through the cables of the connector head 1706, the second set of connectors 1716, the first set of connectors 1712, at least a first solder ball (or land pad) from the set of solder balls 1708, a first set of traces in the carrier 1704, at least one capacitor coupled to the carrier 1704, a second set of traces in the carrier 1704, and/or at least a second solder ball from the set of solder balls 1708 to reach the die package 1702. In some implementations, the ground signal can pass through the same path, a similar path, or a different path. In some implementations, power signals and/or data signals from the power source can pass through the cables of the connector head 1706, the second set of connectors 1716, the first set of connectors 1712, and/or at least a first solder ball (or land pad) from the set of solder balls 1708 to reach the die package 1702.
In some implementations, the ground signal can pass through the same path, a similar path, or a different path.
The integrated device assembly 1700 of Figure 17 provides several technical advantages over conventional integrated device assemblies. First, providing a cable-to-die connector head 1706 that couples directly to the die package saves valuable space on the carrier 1704, because the connector is implemented and/or integrated on the die package rather than on the carrier 1704. Second, providing a connector on the die package 1702 reduces the distance that signals (e.g., power signals, data signals) must travel to reach the die package, which can result in an improved signal to the die package and/or a better signal to the die package, especially at low voltages. For example, in some implementations, the signal can bypass interconnects (e.g., traces, pads, vias) in a carrier (e.g., a PCB). In some implementations, the signal can still pass through some of the interconnects of the carrier, but the distance will be much shorter. Third, the absence of a separate connector coupled to the connector head reduces the cost and weight of the assembly 1700. Fourth, the absence of a separate connector on the carrier 1704 simplifies the design of interconnects (e.g., traces) in the carrier 1704 because the interconnects in the carrier 1704 would not have to be designed around the connector.
FIG. 18 conceptually illustrates an oblique view of an example of an integrated device assembly 1800 that includes a die package 1802, a carrier 1804, a connector head 1806, a first capacitor 1808, and a second capacitor 1810. The die package 1802 is coupled to the carrier 1804 by a set of solder balls (not visible). However, the die package 1802 can be coupled to the carrier 1804 by other forms of interconnects (e.g., a land grid array). Different materials may be used for the carrier 1804 in different implementations. In some implementations, the carrier 1804 is a printed circuit board (PCB). In some implementations, the carrier 1804 is a substrate (e.g., a composite substrate).
FIG. 18 illustrates that the die package 1802 includes a first set of connectors 1812. Different implementations can use different die packages. Examples of die packages are described in further detail in Figures 21-22 and 26. In some implementations, the first set of connectors 1812 is a die-to-cable connector. The first set of connectors 1812 are located on the sides of the die package 1802 (e.g., on the package portion of the die package). However, the first set of connectors 1812 can be located on other portions of the die package 1802. In some implementations, the first set of connectors 1812 are metal layers that are coupled to the die package 1802. The first set of connectors 1812 are configured to provide electrical paths for signals (e.g., power signals, data signals). In some implementations, the first set of connectors 1812 are coupled to one or more solder balls (e.g., the solder balls 1708). In the case where a land grid array is used instead of solder balls, the first set of connectors 1812 can be coupled to one or more land pads.
FIG. 18 further illustrates the connector head 1806. In some implementations, the connector head 1806 is a cable-to-die connector device. The connector head 1806 includes a cavity 1820 that passes through the connector head 1806. In some implementations, the cavity 1820 is configured to be coupled to the die package 1802.
In some implementations, when the connector head 1806 is coupled to the die package 1802, a portion of the die package 1802 (e.g., the top surface of the encapsulation layer) is exposed. Thus, in some implementations, when the connector head 1806 is coupled to the die package 1802, the connector head 1806 is coupled to the sidewalls of the die package 1802. One advantage of this configuration of the connector head 1806 is that it provides a low profile connector, which provides a low profile integrated device assembly.
The connector head 1806 includes a second set of connectors 1816. The second set of connectors 1816 can include a metal layer. The connector head 1806 also includes a set of cables (not visible) that are coupled to the second set of connectors 1816. In some implementations, the set of cables includes one or more cables. An example of a set of cables is depicted in Figures 8 and 9. In some implementations, the connector head 1806 is configured to be coupled to a die package (e.g., the die package 1802). For example, the second set of connectors 1816 of the connector head 1806 can be configured to be coupled to the first set of connectors 1812 of the die package 1802. In some implementations, the connector head 1806 is coupled to a power source (e.g., a battery). In some implementations, the connector head 1806 is configured to provide an electrical path from the power source to the die package 1802. In some implementations, the connector head 1806 is configured to provide an electrical path for data signals to the die package 1802. In some implementations, the connector head 1806 is configured to provide an electrical path for a ground signal from the die package 1802.
Different implementations may provide different electrical paths for the power signal (from the power source), the data signal to the die package 1802, and/or the ground signal from the die package 1802. In some implementations, the power signal and/or data signal from the power source can pass through the cables of the connector head 1806, the second set of connectors 1816, the first set of connectors 1812, at least a first solder ball (or land pad) from the set of solder balls, a first set of traces in the carrier 1804, at least one capacitor coupled to the carrier 1804 (e.g., the first capacitor 1808), a second set of traces in the carrier 1804, and/or at least a second solder ball from the set of solder balls (e.g., the solder balls 1708) to reach the die package 1802. In some implementations, the ground signal can pass through the same path, a similar path, or a different path. In some implementations, power signals and/or data signals from the power source can pass through the cables of the connector head 1806, the second set of connectors 1816, the first set of connectors 1812, and/or at least a first solder ball (or land pad) from the set of solder balls to reach the die package 1802. In some implementations, the ground signal can pass through the same path, a similar path, or a different path.
Exemplary cable-to-die connector including a shield
FIG. 19 illustrates a cross-sectional view of the connector head 1900. In particular, FIG. 19 illustrates an internal cross-sectional plan view of a connector head 1900 that includes a shield. In some implementations, the connector head 1900 is a cable-to-die connector device.
As shown in FIG. 19, the connector head 1900 includes a housing 1902, a cable sleeve 1904, a first set of connectors 1906, a second set of connectors 1908, a set of cables 1910, and a shield layer 1920. The housing 1902, the first set of connectors 1906, and the second set of connectors 1908 are configured to be coupled to a die package (e.g., the die package 1200) that includes a set of connectors. The first set of connectors 1906 and the second set of connectors 1908 are coupled to the set of cables 1910 in the cable sleeve 1904. In some implementations, the set of cables 1910 is configured to be coupled to a power source (e.g., a battery) and/or a data signal source. The housing 1902 includes the shield layer 1920. Different materials can be used for the shield layer 1920. In some implementations, the shield layer 1920 provides electrical, magnetic, and/or electromagnetic interference (EMI) shielding.
FIG. 20 illustrates a cross-sectional view of the connector head 2000. In particular, FIG. 20 illustrates an internal cross-sectional plan view of a connector head 2000 including a shield. In some implementations, the connector head 2000 is a cable-to-die connector device. As shown in FIG. 20, the connector head 2000 includes a housing 2002, a cable sleeve 2004, a first set of connectors 2006, a set of cables 2010, a shield layer 2020, and a cavity 2030. The cavity 2030 passes through the housing 2002. The housing 2002, the first set of connectors 2006, and the cavity 2030 are configured to be coupled to a die package (e.g., the die package 1200) that includes a set of connectors. The first set of connectors 2006 are coupled to the set of cables 2010 in the cable sleeve 2004. In some implementations, the set of cables 2010 is configured to be coupled to a power source (e.g., a battery) and/or a data signal source. The housing 2002 includes the shield layer 2020. Different materials can be used for the shield layer 2020.
Exemplary die package including die-to-cable connectors
FIG. 21 illustrates an example of a die package 2100 that includes a die-to-cable connector. As shown in FIG. 21, the die package 2100 includes a package substrate 2102, a wafer level die 2104, and an encapsulation layer 2106. The package substrate 2102 includes a first pad 2110, a via 2112, and a second pad 2114. The first pad 2110 is on a first surface of the package substrate 2102. The via 2112 passes through the package substrate 2102. The second pad 2114 is embedded in a second surface of the package substrate 2102. The first pad 2110 is coupled to the via 2112. The via 2112 is coupled to the second pad 2114. In some implementations, the second pad 2114 is part of a land grid array (LGA). It should be noted that the package substrate 2102 can include a plurality of metal layers (e.g., M1, M2) and a plurality of vias that couple the metal layers.
The wafer level die 2104 is coupled to the package substrate 2102. In particular, the wafer level die 2104 is coupled to the first pad 2110 of the package substrate 2102 by first solder balls 2116. In some implementations, the first solder balls 2116 can be replaced by other forms of interconnects, such as pillars (e.g., copper posts). Thus, the wafer level die 2104 can be coupled to the first pad 2110 by other forms of interconnects, and thus is not limited to the first solder balls 2116 shown in FIG. 21.
FIG. 21 also illustrates the encapsulation layer 2106 encapsulating the wafer level die 2104.
Different materials may be used for the encapsulation layer 2106. In some implementations, the encapsulation layer 2106 includes at least one of a mold, a polymer, and/or a filler.
FIG. 21 further illustrates a first set of interconnects 2120, a second set of interconnects 2122, and a third set of interconnects 2124. The first set of interconnects 2120, the second set of interconnects 2122, and the third set of interconnects 2124 are configured to operate as a die-to-cable connector of the die package 2100. The first set of interconnects 2120 are coupled to the second set of interconnects 2122. The second set of interconnects 2122 are coupled to the third set of interconnects 2124. The first set of interconnects 2120 are located on a first surface (e.g., a top surface) of the encapsulation layer 2106. In some implementations, the first set of interconnects 2120 are embedded in the first surface of the encapsulation layer 2106. The second set of interconnects 2122 are located on a second surface (e.g., a side surface) of the encapsulation layer 2106 and/or a side surface of the package substrate 2102. In some implementations, the second set of interconnects 2122 are embedded in the second surface of the encapsulation layer 2106 and/or the side surface of the package substrate 2102. The third set of interconnects 2124 are located on a second side (e.g., a bottom surface) of the package substrate 2102. In some implementations, the third set of interconnects 2124 are embedded in the second side of the package substrate 2102. In some implementations, the third set of interconnects 2124 are configured as pads of a land grid array (LGA).
Different implementations may provide different electrical paths between the first set of interconnects 2120 and the wafer level die 2104. In some implementations, the electrical path between the first set of interconnects 2120 and the wafer level die 2104 includes at least one of the second set of interconnects 2122, the third set of interconnects 2124, interconnects (e.g., traces) in the package substrate 2102, the first pad 2110, and/or the solder balls 2116.
FIG. 21 illustrates a die package 2100 having one wafer level die. However, in some implementations, the die package 2100 can include two or more wafer level dies.
FIG. 22 illustrates another example of a die package 2200 that includes a die-to-cable connector. As shown in FIG. 22, the die package 2200 includes a package substrate 2202, a wafer level die 2204, and an encapsulation layer 2206. The package substrate 2202 includes a first pad 2210, a via 2212, and a second pad 2214. The first pad 2210 is on a first surface of the package substrate 2202. The via 2212 passes through the package substrate 2202. The second pad 2214 is embedded in a second surface of the package substrate 2202. The first pad 2210 is coupled to the via 2212. The via 2212 is coupled to the second pad 2214. It should be noted that the package substrate 2202 can include a plurality of metal layers (e.g., M1, M2) and a plurality of vias that couple the metal layers.
The wafer level die 2204 is coupled to the package substrate 2202. In particular, the wafer level die 2204 is coupled to the first pad 2210 of the package substrate 2202 by first solder balls 2216. In some implementations, the first solder balls 2216 can be replaced by other forms of interconnects, such as pillars (e.g., copper posts).
Thus, the wafer level die 2204 can be coupled to the first pad 2210 by other forms of interconnects, and thus is not limited to the first solder balls 2216 shown in FIG. 22.
FIG. 22 also illustrates the encapsulation layer 2206 encapsulating the wafer level die 2204. Different materials may be used for the encapsulation layer 2206. In some implementations, the encapsulation layer 2206 includes at least one of a mold, a polymer, and/or a filler.
FIG. 22 further illustrates a first set of interconnects 2220, a second set of interconnects 2222, and a third set of interconnects 2224. The first set of interconnects 2220, the second set of interconnects 2222, and the third set of interconnects 2224 are configured to operate as a die-to-cable connector of the die package 2200. The first set of interconnects 2220 are coupled to the second set of interconnects 2222. The second set of interconnects 2222 are coupled to the third set of interconnects 2224. The first set of interconnects 2220 are located on a first surface (e.g., a top surface) of the encapsulation layer 2206. In some implementations, the first set of interconnects 2220 are embedded in the first surface of the encapsulation layer 2206. The second set of interconnects 2222 are located on a second surface (e.g., a side surface) of the encapsulation layer 2206 and/or a side surface of the package substrate 2202. In some implementations, the second set of interconnects 2222 are embedded in the second surface of the encapsulation layer 2206 and/or the side surface of the package substrate 2202. The third set of interconnects 2224 are located on a second side (e.g., a bottom surface) of the package substrate 2202. In some implementations, the third set of interconnects 2224 are embedded in the second side of the package substrate 2202. A second solder ball 2230 (which may be part of a solder ball grid array) is coupled to the third set of interconnects 2224.
Different implementations may provide different electrical paths between the first set of interconnects 2220 and the wafer level die 2204. In some implementations, the electrical path between the first set of interconnects 2220 and the wafer level die 2204 includes at least one of the second set of interconnects 2222, the third set of interconnects 2224, the solder ball 2230, the second pad 2214, the via 2212, the first pad 2210, and/or the solder balls 2216. In some implementations, the electrical path between the first set of interconnects 2220 and the wafer level die 2204 includes at least one of the second set of interconnects 2222, the third set of interconnects 2224, the solder ball 2230, at least one interconnect external to the die package 2200, the second pad 2214, the via 2212, the first pad 2210, and/or the solder balls 2216.
FIG. 22 illustrates a die package 2200 having one wafer level die. However, in some implementations, the die package 2200 can include two or more wafer level dies.
Exemplary sequence for fabricating a die package including a die-to-cable connector
In some implementations, providing (e.g., fabricating) a die package including a die-to-cable connector includes several processes. FIG. 23 illustrates an exemplary sequence for providing a die package including a die-to-cable connector. In some implementations, the sequence of FIG. 23 can be used to provide/manufacture the die package of FIGS.
21 and/or 22 and/or other die packages described in this disclosure. It should be noted that the sequence of FIG. 23 may combine one or more stages to simplify and/or clarify the sequence used to provide the die package.
Stage 1 of FIG. 23 illustrates the package substrate 2300 after the package substrate 2300 is provided (e.g., formed). In some implementations, providing the package substrate 2300 can include receiving a package substrate from a vendor or fabricating a package substrate. The package substrate 2300 includes a first pad 2302, a via 2304, a second pad 2306, and a first set of interconnects 2308. The first pad 2302 is coupled to the via 2304. The via 2304 is coupled to the second pad 2306.
Stage 2 illustrates the wafer level die 2310 after the wafer level die 2310 is coupled to the package substrate 2300. As shown in stage 2, the wafer level die 2310 is coupled to the package substrate 2300 by a set of solder balls 2320. In particular, the wafer level die 2310 is coupled to the first pad 2302 by the set of solder balls 2320. In some implementations, the set of solder balls 2320 can be replaced by other forms of interconnects, such as pillars (e.g., copper posts). Thus, the wafer level die 2310 can be coupled to the first pad 2302 by other forms of interconnects, and thus is not limited to the set of solder balls 2320 shown in FIG. 23.
Stage 3 illustrates an encapsulation layer 2330 that encapsulates the wafer level die 2310, which forms a die package. Different materials can be used for the encapsulation layer 2330. In some implementations, the encapsulation layer 2330 includes at least one of a mold, a polymer, and/or a filler.
Stage 4 illustrates a second set of interconnects 2340 and a third set of interconnects 2342 on the die package. In particular, the second set of interconnects 2340 are coupled to the sides of the encapsulation layer 2330 and/or the sides of the package substrate 2300. The third set of interconnects 2342 are coupled to a first surface (e.g., a top surface) of the encapsulation layer 2330. Stage 4 illustrates that the first set of interconnects 2308 are coupled to the second set of interconnects 2340. The second set of interconnects 2340 are coupled to the third set of interconnects 2342. In some implementations, the first set of interconnects 2308, the second set of interconnects 2340, and/or the third set of interconnects 2342 form a set of connectors of the die-to-cable connector of the die package.
Exemplary method for fabricating a die package including a die-to-cable connector
In some implementations, providing (e.g., fabricating) a die package including a die-to-cable connector includes several processes. FIG. 24 illustrates an exemplary flow chart of a method for providing a die package including a die-to-cable connector. In some implementations, the flowchart of FIG. 24 can be used to provide/manufacture the die package of FIGS. 21 and/or 22 and/or other die packages described in this disclosure. It should be noted that the sequence of FIG. 24 may combine one or more stages to simplify and/or clarify the method for fabricating a die package.
The method (at 2405) provides a package substrate. In some implementations, providing a package substrate can include receiving a package substrate from a vendor or fabricating (e.g., forming) a package substrate. The package substrate can include a first pad, a via, a second pad, and a first set of interconnects.
The method (at 2410) provides a wafer level die on the package substrate.
In some implementations, providing the wafer level die includes coupling the wafer level die to the package substrate through a set of solder balls.
The method then provides (at 2415) an encapsulation layer. In some implementations, providing the encapsulation layer includes forming an encapsulation layer that encapsulates the wafer level die, which forms a die package. Different implementations can use different materials for the encapsulation layer. In some implementations, the encapsulation layer includes at least one of a mold, a polymer, and/or a filler.
The method also provides (at 2420) at least one connector. In some implementations, providing the at least one connector includes forming a second set of interconnects and a third set of interconnects on the die package. In particular, the second set of interconnects are coupled to the sides of the encapsulation layer and/or the sides of the package substrate. In some implementations, the third set of interconnects are formed on a first surface (e.g., a top surface) of the encapsulation layer. The first set of interconnects are coupled to the second set of interconnects. The second set of interconnects are coupled to the third set of interconnects. In some implementations, the first set of interconnects, the second set of interconnects, and/or the third set of interconnects form a set of connectors of the die-to-cable connector of the die package.
Exemplary integrated device assembly including a die package including a die-to-cable connector
FIG. 25 conceptually illustrates an example of an integrated device assembly 2500 that includes a die package 2502, a carrier 2504, and a connector head 2506. The die package 2502 is coupled to the carrier 2504 by a set of solder balls 2508 (e.g., a solder ball grid array). However, the die package 2502 can be coupled to the carrier 2504 by other forms of interconnects (e.g., a land grid array). Different materials can be used for the carrier 2504. In some implementations, the carrier 2504 is a printed circuit board (PCB). In some implementations, the carrier 2504 is a substrate (e.g., a composite substrate).
FIG. 25 illustrates that the die package 2502 includes a first set of connectors 2512 and a second set of connectors 2522. Different implementations can use different die packages. An example of a die package is described in further detail in Figures 21-22 and further described in Figure 26. In some implementations, the first set of connectors 2512 and the second set of connectors 2522 are die-to-cable connectors. The first set of connectors 2512 are located on the sides of the die package 2502 (e.g., on the package portion of the die package). However, the first set of connectors 2512 can be located on other portions of the die package 2502. The second set of connectors 2522 are located on a first surface (e.g., a top surface) of the die package 2502 (e.g., on a top surface of the package portion of the die package 2502).
In some implementations, the first set of connectors 2512 and the second set of connectors 2522 are metal layers that are coupled to the die package 2502. The first set of connectors 2512 and the second set of connectors 2522 are configured to provide electrical paths for signals (e.g., power signals, data signals). In some implementations, the first set of connectors 2512 are coupled to the carrier 2504 (e.g., to a trace of the carrier 2504). The second set of connectors 2522 are coupled to the first set of connectors 2512.
FIG. 25 further illustrates the connector head 2506.
In some implementations, the connector head 2506 is a cable-to-die connector device. The connector head 2506 includes a third set of connectors 2516 and a fourth set of connectors 2526. The third set of connectors 2516 and the fourth set of connectors 2526 can comprise a metal layer. The connector head 2506 also includes a set of cables (not visible) that are coupled to the third set of connectors 2516 and/or the fourth set of connectors 2526. In some implementations, the set of cables includes one or more cables. In some implementations, the connector head 2506 is configured to be coupled to a die package (e.g., the die package 2502). For example, the third set of connectors 2516 of the connector head 2506 can be configured to be coupled to the first set of connectors 2512 of the die package 2502. Similarly, the fourth set of connectors 2526 of the connector head 2506 can be configured to be coupled to the second set of connectors 2522 of the die package 2502. In some implementations, the connector head 2506 is coupled to a power source (e.g., a battery) and/or a data signal source. In some implementations, the connector head 2506 is configured to provide an electrical path from the power source to the die package 2502. In some implementations, the connector head 2506 is configured to provide an electrical path for data signals to the die package 2502.
Different implementations may provide different electrical paths for the power signal (from the power source), the data signal to the die package 2502, and/or the ground signal from the die package 2502. In some implementations, the power signal and/or data signal from the power source can pass through the cables of the connector head 2506, the third set of connectors 2516, the fourth set of connectors 2526, the second set of connectors 2522, the first set of connectors 2512, a first set of traces in the carrier 2504, at least one capacitor coupled to the carrier 2504, a second set of traces in the carrier 2504, and/or at least a second solder ball from the set of solder balls 2508 to reach the die package 2502. In some implementations, the ground signal can pass through the same path, a similar path, or a different path.
The integrated device assembly 2500 of Figure 25 provides several technical advantages over conventional integrated device assemblies. First, providing a cable-to-die connector head 2506 that couples directly to the die package saves valuable space on the carrier 2504, because the connector is implemented and/or integrated on the die package rather than on the carrier 2504. Second, providing a connector on the die package 2502 reduces the distance that signals (e.g., power signals, data signals) must travel to reach the die package, which can result in an improved signal to the die package and/or a better signal to the die package, especially at low voltages. For example, in some implementations, the signal can bypass interconnects (e.g., traces, pads, vias) in a carrier (e.g., a PCB). In some implementations, the signal can still pass through some of the interconnects of the carrier, but the distance will be much shorter. Third, the absence of a separate connector coupled to the connector head reduces the cost and weight of the assembly 2500.
Fourth, the absence of a separate connector on the carrier 2504 simplifies the design of interconnects (e.g., traces) in the carrier 2504 because the interconnects in the carrier 2504 would not have to be designed around the connector.
Exemplary die package including die-to-cable connectors
FIG. 26 illustrates an example of a die package 2600 that includes a die-to-cable connector. As shown in FIG. 26, the die package 2600 includes a package substrate 2602, a wafer level die 2604, and an encapsulation layer 2606. The package substrate 2602 includes a first pad 2610, a via 2612, and a second pad 2614. The first pad 2610 is on a first surface of the package substrate 2602. The via 2612 passes through the package substrate 2602. The second pad 2614 is embedded in a second surface of the package substrate 2602. The first pad 2610 is coupled to the via 2612. The via 2612 is coupled to the second pad 2614.
The wafer level die 2604 is coupled to the package substrate 2602. In particular, the wafer level die 2604 is coupled to the first pad 2610 of the package substrate 2602 by first solder balls 2616. In some implementations, the first solder balls 2616 can be replaced by other forms of interconnects, such as pillars (e.g., copper posts). Thus, the wafer level die 2604 can be coupled to the first pad 2610 by other forms of interconnects, and thus is not limited to the first solder balls 2616 shown in FIG. 26.
FIG. 26 also illustrates the encapsulation layer 2606 encapsulating the wafer level die 2604. Different materials may be used for the encapsulation layer 2606. In some implementations, the encapsulation layer 2606 includes at least one of a mold, a polymer, and/or a filler.
FIG. 26 further illustrates a first set of interconnects 2620 and a second set of interconnects 2622. The first set of interconnects 2620 and the second set of interconnects 2622 are configured to operate as a die-to-cable connector of the die package 2600. The first set of interconnects 2620 are coupled to the second set of interconnects 2622. The first set of interconnects 2620 are located on a first surface (e.g., a top surface) of the encapsulation layer 2606. In some implementations, the first set of interconnects 2620 are embedded in the first surface of the encapsulation layer 2606. The second set of interconnects 2622 are located on a second surface (e.g., a side surface) of the encapsulation layer 2606 and/or a side surface of the package substrate 2602. In some implementations, the second set of interconnects 2622 are embedded in the second surface of the encapsulation layer 2606 and/or the side surface of the package substrate 2602.
Different implementations may provide different electrical paths between the first set of interconnects 2620 and the wafer level die 2604. In some implementations, the electrical path between the first set of interconnects 2620 and the wafer level die 2604 includes at least one of the second set of interconnects 2622, at least one interconnect external to the die package 2600, the solder ball 2630, the via 2612, the first pad 2610, and/or the solder balls 2616.
FIG. 26 illustrates a die package 2600 having one wafer level die. However, in some implementations, the die package 2600 can include two or more wafer level dies.
Exemplary electronic device
FIG. 27 illustrates various electronic devices that may be integrated with any of the aforementioned integrated devices, connectors, connector devices, semiconductor devices, package substrates, integrated circuits, dies, interposers, or packages.
For example, a mobile phone 2702, a laptop computer 2704, and a fixed location terminal 2706 can include an integrated device 2700 as described herein. The integrated device 2700 can be, for example, any of the integrated circuits, dies, or packages described herein. The devices 2702, 2704, and 2706 illustrated in FIG. 27 are merely exemplary. Other electronic devices can also feature the integrated device 2700, including, but not limited to, mobile devices, hand-held personal communication system (PCS) units, portable data units (such as personal digital assistants), GPS enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units (such as meter reading equipment), communication devices, smartphones, tablet computers, or any other device that stores or retrieves data or computer instructions, or any combination thereof.
One or more of the components, steps, features, and/or functions illustrated in Figures 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, and/or 27 may be rearranged and/or combined into a single component, step, feature, or function, or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from the disclosure. It should also be noted that Figures 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, and/or 27 of the present disclosure and their corresponding descriptions are not limited to dies and/or ICs. In some implementations, Figures 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, and/or 27 and their corresponding descriptions can be used to manufacture, create, provide, and/or produce integrated devices. In some implementations, an integrated device can include a die package, a package substrate, an integrated circuit (IC), a wafer, a semiconductor device, and/or an interposer.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage, or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two objects. For example, if object A physically contacts object B, and object B contacts object C, then objects A and C may still be considered coupled to one another, even if they do not directly physically touch each other. The term "cavity" is used herein to refer to a hollow, a space, and/or a pore in an object. The cavity can partially or completely pass through the object.
It should also be noted that these embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process is terminated when its operations are completed.
The various aspects of the disclosure described herein may be implemented in different systems without departing from the disclosure. It should be noted that the foregoing aspects of the disclosure are merely examples and should not be construed as limiting the present disclosure.
The description of the various aspects of the present disclosure is intended to be illustrative, and not to limit the scope of the appended claims. Thus, the teachings of the present invention can be readily applied to other types of devices, and many alternatives, modifications, and variations will be apparent to those skilled in the art. |
Systems, apparatuses, and methods related to securing memory accesses made using virtual addresses are described. For example, a memory coupled to a computer processor can store instructions of routines of predefined, non-hierarchical domains. The computer processor can store separate tables for the different domains. A virtual address is configured with an object identifier and an offset of a location within the object represented by the object identifier. At least the object identifier of the virtual address is hashed to generate an index into a table of the current domain in which the processor is executing instructions. An entry retrieved from the table using the index provides a security configuration for the object represented by the object identifier. The processor secures memory access according to the security configuration in response to the execution of an instruction that uses the virtual address.
CLAIMS
What is claimed is: 1. A computer system, comprising: a memory configured to store at least instructions of routines of a plurality of predefined domains; and a processor coupled with the memory, the processor configured to execute the routines; wherein a virtual address used in execution of an instruction in a current execution domain comprises an object identifier and an offset of a location within an object represented by the object identifier; and wherein the processor is further configured to identify a security configuration of the object for the current execution domain in response to the virtual address being used in the execution of the instruction in the processor. 2. The computer system of claim 1, wherein the processor is configured to hash at least the object identifier into an index and apply the index in an address translation table for the current execution domain to retrieve the security configuration. 3. The computer system of claim 2, wherein the plurality of predefined domains comprises at least one of a domain for hypervisor, a domain for operating system, or a domain for application, or any combination thereof; wherein the domains have no predefined levels of trust; and the virtual address is programmed and stored in a routine loaded from the memory. 4. The computer system of claim 3, wherein the security configuration identifies an object length; and the processor is further configured to compare the offset with the object length. 5. The computer system of claim 4, wherein the processor is configured to reject a memory access request associated with the virtual address in response to a determination that the offset exceeds a bound identified by the object length. 6. The computer system of claim 4, wherein the security configuration includes a field; and the processor is further configured to compare the offset with the object length in response to the field having a first predetermined value. 7. The computer system of claim 6, wherein the processor is further configured to skip comparing the offset with the object length in response to the field having a second predetermined value different from the first predetermined value. 8. The computer system of claim 4, wherein the security configuration includes a permission bit for a type of memory access for the current execution domain; and wherein the processor is further configured to reject a memory access request associated with the virtual address based on a value of the permission bit. 9. The computer system of claim 8, wherein the type of memory access is read data from virtual addresses, write data to virtual addresses, or execute instructions stored at virtual addresses, or any combination thereof. 10. The computer system of claim 8, wherein the security configuration includes a field; and the processor is further configured to check the permission bit in response to the field having a first predetermined value. 11. The computer system of claim 10, wherein the processor is configured to skip checking the permission bit in response to the field having a second predetermined value different from the first predetermined value. 12. The computer system of claim 4, wherein the security configuration includes a key for cryptographic operations on an item stored at the virtual memory address. 13.
The computer system of claim 4, wherein the virtual address identifies a memory location of a called routine that is called by the instruction in a calling routine; the security configuration includes a setting; and the processor is configured to isolate execution of the calling routine and execution of the called routine based on the setting. 14. The computer system of claim 13, wherein the processor is configured to use separate call stacks for the calling routine and the called routine when the setting has a first predetermined value. 15. The computer system of claim 3, wherein the processor is configured to select a table base of the address translation table according to an identifier of the current execution domain among the domains. 16. The computer system of claim 15, wherein an entry at the index in the address translation table is configured to specify a physical address of a page table or a page directory; and the processor is further configured to use the page table or the page directory to convert the virtual address to a physical address. 17. A method, comprising: storing in a memory at least instructions of routines of a plurality of predefined domains; executing, by a processor coupled to the memory, an instruction that uses a virtual address in a current execution domain among the predefined domains, wherein the virtual address is configured to have an object identifier and an offset of a location within the object represented by the object identifier; and identifying, by the processor, a security configuration of the object for the current execution domain in response to the virtual address being used in the execution of the instruction in the processor. 18. The method of claim 17, further comprising: identifying a table based on the current execution domain; hashing at least the object identifier provided in the virtual address to generate an index; and retrieving from the table an entry at the index, the entry containing the security configuration. 19. A computer processor, comprising: at least one execution unit configured to execute instructions of a plurality of predefined domains; and a memory management unit configured to convert a virtual address to a physical address during execution of an instruction in a current execution domain among the predefined domains, wherein the virtual address is configured with an object identifier and an offset of a location within the object represented by the object identifier; wherein the memory management unit is configured to identify a table based on the current execution domain, and receive from the table a security configuration of the object for the current execution domain in response to the virtual address being used in the execution of the instruction in the processor. 20. The computer processor of claim 19, wherein the memory management unit is configured to hash at least the object identifier provided in the virtual address to generate the index and retrieve an entry using the index; wherein the entry identifies the security configuration of the object. |
SECURITY CONFIGURATION FOR MEMORY ADDRESS TRANSLATION FROM OBJECT SPECIFIC VIRTUAL ADDRESS SPACES TO A PHYSICAL ADDRESS SPACE

RELATED APPLICATIONS

[0001] The present application claims the benefit of the filing dates of U.S. Pat. App. Ser. No. 16/520,311, filed July 23, 2019 and entitled "Security Configuration for Memory Address Translation from Object Specific Virtual Address Spaces to a Physical Address Space," Prov. U.S. Pat. App. Ser. No. 62/734,896, filed on Sep. 21, 2018 and entitled "Security Configuration for Memory Address Translation from Object Specific Virtual Address Spaces to a Physical Address Space," Prov. U.S. Pat. App. Ser. No. 62/725,092, filed on Aug. 30, 2018 and entitled "Memory Address Translation from Object Specific Virtual Address Spaces to a Physical Address Space," Prov. U.S. Pat. App. Ser. No. 62/724,896, filed on Aug. 30, 2018 and entitled "Memory Access Control through Permissions Specified in Page Table Entries for Execution Domains," Prov. U.S. Pat. App. Ser. No. 62/724,913, filed on Aug. 30, 2018 and entitled "Security Configurations in Page Table Entries for Execution Domains," Prov. U.S. Pat. App. Ser. No. 62/724,929, filed on Aug. 30, 2018 and entitled "Access Control for Processor Registers based on Execution Domains," Prov. U.S. Pat. App. Ser. No. 62/724,999, filed on Aug. 30, 2018 and entitled "Domain Register for Instructions being Executed in Computer Processors," and Prov. U.S. Pat. App. Ser. No. 62/725,030, filed on Aug. 30, 2018 and entitled "Domain Crossing in Executing Instructions in Computer Processors," the entire disclosures of which applications are hereby incorporated herein by reference.

FIELD OF THE TECHNOLOGY

[0002] At least some embodiments disclosed herein relate generally to computer architecture and more specifically, but not limited to, memory address translation from object specific virtual memory addresses to physical memory addresses.

BACKGROUND

[0003] Instructions programmed for a computer can be structured in layers. One layer can provide resources and services for another layer. For example, a hypervisor can create or provision virtual machines that are implemented on the hardware components of the computer. An operating system can offer resources and services using resources available in a computer having a predefined architecture. The computer resources, or the computer operated upon by the operating system, can be actual computer hardware components, or virtual machine components provisioned by a hypervisor. An application can provide application specific functions using the services and resources provided by an operating system.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.

[0005] FIG. 1 shows a computer processor having a set of registers configured to control the operations of the computer processor according to some embodiments.

[0006] FIG. 2 illustrates the identification of a table base of an address translation table in absence of an operating hypervisor in some embodiments.

[0007] FIG. 3 illustrates the identification of a table base of an address translation table in the presence of an operating hypervisor in some embodiments.

[0008] FIG. 4 illustrates separate address translation tables for respective domains.

[0009] FIG. 5 shows a technique to retrieve an entry from an address translation table to convert a virtual address.
[0010] FIG. 6 shows a system to control security operations applied to resources in accordance with a domain register.

[0011] FIG. 7 illustrates a page table entry having a security configuration for execution domains.

[0012] FIG. 8 shows a computer system having a domain register controlling security operations.

[0013] FIG. 9 shows a method to translate an object specific virtual memory address.

[0014] FIG. 10 shows a system to identify security configurations for accessing a memory location identified by a virtual address.

[0015] FIG. 11 illustrates security parameters for memory access made using a virtual address.

[0016] FIG. 12 shows a method to perform security operations in response to a memory access request made using a virtual address.

DETAILED DESCRIPTION

[0017] The present disclosure includes a computer processor and/or a memory management unit (MMU) configured to translate object specific virtual addresses to physical addresses. For example, different virtual memory spaces can be created for different domains of instruction executions (e.g., hypervisor, operating system, application), for different virtual machines, for different running/executing processes of a same program, and/or for different objects. A computer processor can have a virtual machine register to identify the current virtual machine and/or have a domain register to identify the current execution domain. The memory management unit (MMU) can combine (e.g., via hashing, indexing, and/or multiplexing) the information related to the identifications of virtual memory spaces to locate a page table or page directory. Examples of such information include a virtual machine identifier, a domain identifier, a processor identifier, an object identifier provided in a virtual memory address, a portion of an offset within the object represented by the object identifier, etc. The page table or page directory can be used in converting the virtual memory address into a physical memory address. For example, portions of the virtual memory address can be used directly as indexes in a series of page tables or page directories, where the next page table or page directory in the series is identified by an entry retrieved from the current table or page directory using an index. Alternatively, or in combination, at least a portion of the virtual memory address can be hashed to generate an index to retrieve an entry from a table, where the entry identifies the next page table or page directory, or provides a base of a set of physical addresses, or provides a physical address directly.

[0018] In a traditional system, different layers of instructions (e.g., user applications vs. operating system) may be given different levels of privilege and/or trust. Conventionally, protection rings have been constructed and implemented in computers to protect data and functionality from fault and malicious behaviors based on a hierarchy of rings. Rings are statically arranged in the hierarchy from most privileged (and thus most trusted) to least privileged (and thus least trusted). For example, the hierarchy can include a ring of operating system kernel that is the most privileged, a ring of device drivers, and a ring of applications that are the least privileged. A program or routine in a lower privilege ring can be limited by a respective special hardware enforced control gate to access the resources and services of a higher privilege ring in the hierarchy. Gating access between rings can improve security.
[0019] In the techniques of the present disclosure, instructions or routines programmed for a computer system can be classified into a set of predefined, non-hierarchical domains, such as a domain of hypervisor, a domain of operating system, a domain of application, etc. Addresses used in different domains can be translated using different address translation tables such that the virtual address spaces of different domains can be isolated from each other. If a hypervisor is present (e.g., operating and controlling the lowest level of machine architecture in the computer system), addresses used in different virtual machines managed by the hypervisor can also be translated using different address tables; and thus, the virtual address spaces of different virtual machines can also be isolated from each other. Further, virtual address spaces of different running processes can also be optionally isolated from each other. For example, the virtual machine register can be configured to store an identifier of the current virtual machine for which the processor is executing instructions; and the address translation function of a memory management unit of the processor can be configured, in accordance with the identifier stored in the virtual machine register, the identifier stored in the domain register, and/or the status indication stored in the hypervisor status register, to perform address translation for the execution of a routine in a particular domain for a particular virtual machine.

[0020] FIG. 1 shows a computer processor (169) having a set of registers (183) configured to control the operations of the computer processor (169) according to some embodiments. The set of registers (183) can include at least a domain register (117), a virtual machine register (231), and/or a hypervisor status register (233).

[0021] The domain register (117) is configured to store an identifier or indication of the current domain of the instructions that are being executed in the processor (169).

[0022] For example, the computer processor (169) of FIG. 1 can be coupled to physical memory (109). The physical memory (109) can store data and instructions for various routines programmed for a computer system. Routines can be classified into various predefined, non-hierarchical domains (101, 103, ..., 105), such as a domain (101) of hypervisor (102), a domain (103) of operating system (104), a domain (105) of application (106).

[0023] For example, routines of a hypervisor (102) can be classified in a domain A (101); routines of an operating system (104) can be classified in another domain B (103); and routines of applications (106) can be classified in a further domain C (105). A hypervisor or virtual machine monitor (VMM) creates and manages virtual machines. The hypervisor can control basic functions such as physical memory and input/output (I/O).
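The register set of FIG. 1 can be pictured, for illustration only, as a small C model; the type and field names below are invented and merely approximate the domain register (117), the virtual machine register (231), and the hypervisor status register (233) described above. This sketch is not part of the original disclosure.

#include <stdbool.h>
#include <stdint.h>

enum domain {                       /* predefined, non-hierarchical domains          */
    DOMAIN_HYPERVISOR = 0,          /* domain A (101): routines of hypervisor (102)  */
    DOMAIN_OS         = 1,          /* domain B (103): routines of OS (104)          */
    DOMAIN_APP        = 2,          /* domain C (105): routines of applications (106)*/
    DOMAIN_COUNT      = 3
};

struct control_registers {          /* subset of the registers (183) of processor (169) */
    enum domain domain_reg;         /* domain register (117): current domain            */
    uint64_t    vm_reg;             /* virtual machine register (231): current VM       */
    bool        hypervisor_present; /* hypervisor status register (233)                 */
};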
[0024] The computer processor (169) can be optionally used with or without an operating hypervisor (102). When no hypervisor (102) is operating or present in the computer system of FIG. 1, the operating system (104) can control the entire computer system. When the hypervisor (102) is operating or present in the computer system of FIG. 1, the hypervisor (102) can provision one or more virtual machines; each virtual machine can run its instance of the operating system (104); and the operating system (104) running in a virtual machine can control the resources of the virtual machine provisioned by the hypervisor (102) but not the resources not provisioned to the virtual machine. Thus, when the hypervisor (102) is present or operating in the computer system, the operating system in a virtual machine hosted on the computer system may not have control over a portion of the computer system that would be controlled by the operating system when the hypervisor (102) is not present or operating. For example, when the hypervisor (102) provisions a portion of the memory (109) to a virtual machine, the operating system (104) running in the virtual machine can access the portion of the memory (109) via pseudo-physical memory addresses, where the operating system (104) can treat the pseudo-physical memory addresses as physical memory addresses which are actually mapped to the portion of the memory (109) that is allocated by the hypervisor (102) to the virtual machine.

[0025] For example, the computer system of FIG. 1 can be powered up or bootstrapped in a mode in which the computer system does not have an operating/running hypervisor (102). In such a mode, the operating system (104) directly controls the hardware resources (e.g., the processor (169) and the memory (109)). Alternatively, the computer system of FIG. 1 can be started in a mode in which the computer system has an operating/running hypervisor (102); the hypervisor (102) can create and manage one or more virtual machines; and each virtual machine can run a copy of the operating system (104) where the operating system (104) can control the hardware resources provisioned by the hypervisor (102) for the respective virtual machine.

[0026] In some instances, the processor (169) is coupled with the memory (109) having the hypervisor (102); and the computer system can optionally be bootstrapped into operation with or without an operating hypervisor (102). In other instances, the processor (169) can be optionally coupled to memory that does not have the hypervisor (102) and thus cannot run a hypervisor (102).

[0027] The hypervisor status register (233) is configured to store an indicator of whether a hypervisor (102) is present in the computer system. For example, the hypervisor status register (233) can have an initialized value during powering up to indicate the lack of hypervisor (102). If the hypervisor (102) is loaded for execution during the bootstrap process, the hypervisor status register (233) is set to indicate the presence of an operating hypervisor (102). The content of the hypervisor status register (233) allows the processor (169) to customize its operations, such as address translation (235) of a memory management unit (MMU) (181), based on whether or not a hypervisor (102) is present.

[0028] For example, when the hypervisor status register (233) indicates that no hypervisor (102) is present, the operating system (104) running in the computer system does not rely upon a hypervisor (102) for the management of resources and/or services. The domain (101) of the hypervisor (102) is not applicable for the instruction execution in the processor (169); and the operating system (104) is provided with full access to resources, such as the entire physical memory (109).
[0029] However, when the hypervisor status register (233) indicates that a hypervisor (102) is present, the operating system (104) running in a virtual machine is restricted to resources and/or services provisioned by the hypervisor (102) for the virtual machine. The domain (101) of the hypervisor (102) is thus relevant for the instruction execution in the processor (169). For example, certain operations performed in the routines of the operating system (104) can trigger corresponding operations in the hypervisor (102).

[0030] In general, a hypervisor (102) can be present, even though the current domain of execution as indicated by the domain register (117) is different from the domain (101) of hypervisor (102). For example, the processor (169) can execute an application (106) in the domain (105) and rely upon the operating system (104) to access memory (109); and the hypervisor (102) can restrict the operating system (104) in accessing the memory (109) to a portion that is provisioned by the hypervisor (102) to a virtual machine in which the operating system (104) is running. Thus, the execution of the application (106) in the domain (105) can shift to execution in the domain (103) of operating system (104) and/or to execution in the domain (101) of hypervisor (102).

[0031] In general, the content of the hypervisor status register (233) indicates whether a hypervisor (102) is present, which is an indication of whether the domains (103, ..., 105) are operating within the constraint of a hypervisor (102) or a virtual machine.

[0032] When a hypervisor (102) is present, the virtual machine register (231) can store an identifier of the current virtual machine for which the processor (169) is currently running a routine in a domain (e.g., 101, 103, or 105). For example, when the processor (169) is executing a routine in the domain (103) of operating system (104), the virtual machine register (231) stores the identifier of the virtual machine for which the routine is being executed in the processor (169). For example, when the processor (169) is executing a routine in the domain (105) of applications (106), the virtual machine register (231) stores the identifier of the virtual machine in which the application is running.

[0033] In some implementations, the virtual machine register (231) and the hypervisor status register (233) can be combined. For example, when the virtual machine register (231) has a predetermined value (e.g., zero), the virtual machine register (231) indicates that no hypervisor is present in the computer system; and when the virtual machine register (231) has a value different from the predetermined value, the content of the virtual machine register (231) uniquely identifies a virtual machine for which the processor (169) is currently executing instructions.

[0034] In some implementations, the virtual machine register (231) and the hypervisor status register (233) are separate registers and/or have different access privileges for different domains (e.g., 101, 103, ..., 105). For example, the hypervisor status register (233) cannot be changed without restarting the computer system in a bootstrap process. For example, the hypervisor status register (233) can be accessed by the domain (101) of the hypervisor (102) but not by the domain (103) of the operating system and/or the domain (105) of the applications; and the virtual machine register (231) can be accessed by both the domain (101) of the hypervisor (102) and the domain (103) of the operating system (104).
[0035] The processor (169) of FIG. 1 includes a memory management unit (MMU) (181) that implements a function of address translation (235). The processor (169) can configure the address translation (235) based on the content of the hypervisor status register (233).

[0036] For example, when the hypervisor status register (233) has a first value (e.g., 0) indicating the absence of a hypervisor (102), the processor (169) configures the address translation (235) to function without using the virtual machine register (231). When the hypervisor status register (233) has a second value (e.g., 1) indicating the presence of a hypervisor (102), the processor (169) configures the address translation (235) to function using the virtual machine register (231) such that address translation is specific for a virtual machine.

[0037] The processor (169) of FIG. 1 has execution units (e.g., 185), such as an arithmetic-logic unit. The processor (169) can include an internal cache (187) as a proxy of a portion of the memory (109). In addition to the domain register (117), the virtual machine register (231), and the hypervisor status register (233), the processor (169) can have other registers (183) to hold instructions for execution, data as operands of instructions, and/or results of instruction executions.

[0038] In general, a routine can include a pre-programmed set of instructions stored in the memory (109). The routine can also have input data, output data, and/or temporary data stored in the memory (109). A routine can invoke or call another routine for services and/or resources. The calling routine and the called routine can be in a same domain or different domains (e.g., 101, 103, ..., 105).

[0039] Optionally, the content of the domain register (117) can control security operations in the processor (169) as discussed further below.

[0040] In one implementation, when a computer system having the processor (169) is initially powered on (bootstrapped), the processor (169) is configured to automatically execute routines of a hypervisor (102) or an operating system (104) (if no hypervisor is used), as part of the bootstrap process. Thus, the domain register (117) is initially set to indicate the domain (101) of the hypervisor (102) or the domain (103) of the operating system (104). Subsequently, the execution control can move from one domain to another domain using instructions that identify the destination domains; and the content of the domain register (117) can be updated according to the processing of such instructions. Some examples and details of domain crossing can be found in U.S. Pat. App. Ser. No. 62/725,030, filed on Aug. 30, 2018 and entitled "Domain Crossing in Executing Instructions in Computer Processors," the entire disclosure of which application is hereby incorporated herein by reference.

[0041] Alternatively, or in combination, the domain of the currently running routine can be identified based on memory addresses, stored attributes of the routines, etc. For example, some techniques to specify the current domain (123) in the domain register (117) in the computer processor (169) can be found in U.S. Pat. App. Ser. No. 62/724,999, filed on Aug. 30, 2018 and entitled "Domain Register for Instructions being Executed in Computer Processors," the entire disclosure of which application is hereby incorporated herein by reference.
[0042] In some instances, the current domain can be identified from a memory address used to load an instruction of a routine for execution.

[0043] For example, a virtual memory address (e.g., 195 illustrated in FIG. 5) can have a predetermined width (e.g., a predetermined number of bits) for the processor (169). The memory address can include a portion representing an object ID (e.g., 199 illustrated in FIG. 5) and a portion representing an offset (e.g., 196 illustrated in FIG. 5) within the object represented by the object ID (e.g., 199). For example, the routine can be an object located at the address; and the object ID of the address can be used to identify certain properties of the instruction and/or the routine; and the current domain can be determined from the properties.

[0044] For example, a static object ID of a predetermined value (e.g., 0) can be used to represent a kernel object of an operating system (104). Thus, the static object ID specified in the memory address can be used to identify the current domain for the execution of the routine. Some details and examples of static object IDs in memory addresses for computer processors to load instructions for execution can be found in U.S. Pat. App. Ser. No. 16/028,840, filed Jul. 6, 2018 and entitled "Static Identifications in Object-based Memory Access," the entire disclosure of which application is hereby incorporated herein by reference.

[0045] In some instances, a memory address and/or the object ID (e.g., 199) of the memory address can include a portion representing an object type (e.g., 198 illustrated in FIG. 5). For example, an object type (198) of a value from 0 to 3 can be used to identify a kernel object of an operating system. For example, an object type (198) of a value of 4 to 5 can be used to specify that the offset is an address of different widths (e.g., a 64-bit address or 32-bit address included within the memory address that has 128 bits). For example, an object type (198) of a value of 6 to 7 can be used to specify that a predetermined portion of the object ID is to be interpreted as an identifier of a local object or an object in Partitioned Global Address Space (PGAS). For example, an object type (198) of a value of 32 can be used to specify that the remaining portion of the object ID is to be interpreted as an identifier of an object defined in a server (e.g., 197). For example, an object name server can store data indicating the name of an object represented by an object ID, access control parameters of the object, and/or other attributes of the object.

[0046] For example, the object ID (199) of the memory address used to load the routine for execution can have attributes stored in the object name server; and the attributes can be used to determine or infer the current domain of the routine loaded from the memory address.

[0047] In some instances, a routine to be executed in the processor (169) can have attributes that are stored in association with the routine (e.g., in the memory (109), in a page table entry for the determination of a physical address of the instruction, in an entry table for making calls for the execution of routines). When the routine is loaded for execution, the attributes of the routine are used to determine the current domain for the execution of the routine.
[0048] In one embodiment, when the hypervisor status register (233) indicates the absence of a hypervisor, the processor (169) configures the memory management unit (MMU) (181) to identify a table base of an address translation table in accordance with FIG. 2. However, when the hypervisor status register (233) indicates the presence of a hypervisor (102), the processor (169) configures the memory management unit (MMU) (181) to identify a table base of an address translation table in accordance with FIG. 3. FIG. 5 shows a technique to retrieve an entry from an address translation table to convert a virtual address to a physical address.

[0049] FIG. 2 illustrates the identification of a table base (249) of an address translation table in absence of an operating hypervisor in some embodiments.

[0050] In FIG. 2, separate table base registers (241, 243, ..., 245) are configured for the different domains (101, 103, ..., 105) respectively.

[0051] The domain register (117) of the processor (169) stores the identifier of a current domain in which the processor (169) is currently executing instructions. The domain register (117) is coupled to a multiplexer (247) to select, from the table base registers (241, 243, ..., 245), the table base (249) of the address translation table used in the address translation. The table base (249) identifies a memory location of an address translation table that is to be used to perform address translation (235) in the memory management unit (MMU) (181) (e.g., as discussed below in connection with FIG. 5 and/or FIG. 7).

[0052] FIG. 2 shows an example of a processor having multiple table base registers (241, 243, ..., 245).

[0053] Alternatively, or in combination, each domain (101, 103, ..., or 105) can have a separate memory area configured to store the values of domain specific registers used for instruction execution in the respective domain (101, 103, ..., or 105).

[0054] For example, each domain (101, 103, ..., or 105) can have a separate memory area storing the domain specific values of registers used during the execution of the last executed instruction, before the execution transitions temporarily across into another domain (e.g., via a domain call instruction to execute a routine in another domain). Such a separate memory area for storing the values of registers specific to a particular domain (e.g., 101) is accessible for instruction execution in the respective domain (e.g., 101) but not accessible for instruction execution in other domains (e.g., 105). Since other domains (e.g., 101) are prevented from accessing the register value region of a given domain (e.g., 105), the register states of the given domain (e.g., 105) are isolated and protected from executions in the other domains (e.g., 101).

[0055] For example, the memory area for domain specific values of registers of a particular domain (e.g., 101) can store the value of the program counter (PC) of instructions being executed in the processor, the value of the stack pointer (SP) of a stack for instruction execution, the value of the frame pointer (FP) of the stack, the value of the argument pointer (AP) for the stack, and/or the value of the processor status word (PSW), etc. The value of the table base register for the particular domain (e.g., 101) can also be saved in the register value region of the particular domain (e.g., 101). In such an implementation, it is not necessary to configure separate registers (241, 243, ..., 245) for the domains (101, 103, ..., 105) respectively. A single register can be used to store the table base for the current domain (e.g., 101, 103, ..., 105) as indicated by the domain register (117); and when the execution enters a new domain, the register can be updated using the table base previously stored in the register value region of the new domain. Alternatively, the content of the domain register (117) can be used as an index in a table of table bases to look up the base (249) of the address translation table.
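As a rough illustration of FIG. 2 (not part of the original disclosure; names are invented), the selection performed by the multiplexer (247) can be sketched in C as an array of per-domain table base registers indexed by the content of the domain register (117):

#include <stdint.h>

#define DOMAIN_COUNT 3

/* one table base register per predefined domain (241, 243, ..., 245) */
static uint64_t table_base_regs[DOMAIN_COUNT];

/* multiplexer (247): the current domain selects the table base (249) */
uint64_t select_table_base(unsigned domain_reg)
{
    return table_base_regs[domain_reg];   /* table base (249) */
}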
[0056] In one embodiment, when a domain (101) is specifically configured for hypervisor (102), the absence of a hypervisor, as indicated by the hypervisor status register (233), allows the processor (169) to skip the table base register for the domain (101) of hypervisor (102); and the domain (101) of hypervisor (102) becomes not relevant to the subsequent operations of the processor (169) (e.g., until the hypervisor status register (233) is changed during a subsequent powering up/bootstrap process).

[0057] FIG. 3 illustrates the identification of a table base (249) of an address translation table in the presence of an operating hypervisor (102) in some embodiments.

[0058] In FIG. 3, an intermediate base (248) is selected by the multiplexer (247) as an output from the table base registers (241, 243, ..., 245). The intermediate base (248) is further combined with the content of the virtual machine register (231) to generate the table base (249) of the address translation table.

[0059] In general, for each execution domain (101, 103, ..., 105) and each virtual machine hosted in the computer system of FIG. 1, a separate address translation table can be created for the conversion of virtual addresses assigned by the operating system (104) to physical addresses. When an operating hypervisor (102) is present in the computer system, the operating system (104) running in a virtual machine uses pseudo-physical addresses, in that the operating system (104) allocates the pseudo-addresses for virtual memory addresses in a way as if the pseudo-addresses were physical addresses, since the operating system (104) cannot tell apart a virtual machine provided by the hypervisor (102) from a physical machine. The hypervisor (102) can translate the pseudo-physical addresses allocated by the operating system (104) running in a virtual machine to the physical addresses of the memory (109) in the computer system (e.g., illustrated in FIG. 1).
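A comparable sketch for FIG. 3, again with invented names and under the simplifying assumption that each domain keeps a dense per-virtual-machine table of table bases, shows the intermediate base (248) being combined with the virtual machine register (231) to obtain the table base (249):

#include <stdint.h>

#define DOMAIN_COUNT 3

/* per-domain table of table bases; the selected pointer plays the role of base (248) */
static uint64_t *per_domain_vm_table_base[DOMAIN_COUNT];

uint64_t select_table_base_with_vm(unsigned domain_reg, uint64_t vm_reg)
{
    uint64_t *vm_table = per_domain_vm_table_base[domain_reg]; /* intermediate base (248) */
    return vm_table[vm_reg];                                   /* table base (249)        */
}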
Such a page table entry modified by the hypervisor (102) can improve the memory access performance in the presence of an operating hypervisor (102) by eliminating the need to separately translate a pseudo-physical address in a virtual machine to a physical address for the physical memory (109) at the time of the usage of a virtual address.[0081] For example, when the operating system (104) executes an instruction to create a page table entry to map a virtual memory page to a pseudo-physical memory page, the instruction can be trapped to cause the hypervisor (102) to translate the pseudo-physical memory page to a physical memory page and modify the page table entry to map the virtual memory page to the translated physical memory page. Subsequently, the page table entry can be used to directly translate the virtual memory page to the physical memory page.[0062] The content of the virtual machine register (231) can be combined with the base (248) via a table to look up the base (249) specific to a virtual machine identified by the virtual machine register (231 ). Alternatively, the content of the virtual machine register (231 ) can be used as part of the input for a hash function (e.g., 121 illustrated in FIG. 5) to index into a table at the base (248) to retrieve a virtual machine specific entry of an address translation table (249), as further discussed below in connection with FIG. 5.[0083] For example, for a particular domain (e.g., 103), the processor (169) can store a table of table bases of the virtual machines hosted in the computer system. The table base register (e.g., 243) of the domain (e.g., 103) can store the base (248) of the table of table bases for the virtual machines. The content of the virtual machine register (231 ) can be used as an index into the table at the base (248) to look up the base (249) of an address translation table that is specify for the domain (e.g., 103) and for the virtual machine identified by the virtual machine register (231 ).[0084] FIG. 3 shows an example of a processor having multiple table base registers (241 , 243, 245).[0O6S] Alternatively, or in combination, each domain (101 , 103, ... , or 105) can have a separate memory area configured to store the domain specific values of registers used for instruction execution in the respective domain (101 , 103, ... , or 105), as discussed above in connection with FIG. 2. The values of the table base registers (241 , 243, ... , 245) can be stored in the register value region of the respective domains (e.g., 101 , 103, ... , 105). In such an implementation, it is not necessary to configure separate registers (241 , 243, ... , 245) for the domains (101 , 103, ... , 105) respectively. A single register can be used to store the base (248) retrieved from the register value region of the respective domains (e.g., 101 , 103, ... , 105). In some implementations, the base (248) is further combined with the content of the virtual machine register (231 ) to obtain the base (249) of address translation table and update that register to hold the base (249) for address translation (235). Alternatively, separate registers are used to store the intermediate base (248) and the base (249) of address translation table to avoid the need to reload the base (248) from the register value region of the respective domains (e.g., 101 , 103, .... 105) when the content of the virtual machine register (231) changes. The register value regions of the domains (101 , 103, ... 
[0066] FIG. 4 illustrates separate address translation tables (217, ..., 227) for respective domains (101, ..., 105).

[0067] In FIG. 4, the domain register (117) can store an identifier of a current domain of instruction execution in the processor (169) of FIG. 1. For example, the content of the domain register (117) can identify domain A (101) or domain C (105).

[0068] Each of the domains (101, ..., 105) has a corresponding table base (219, ..., 229) that identifies the memory location of a respective address translation table (217, ..., 227).

[0069] For example, when the hypervisor status register (233) indicates the absence of an operating hypervisor (102) in the computer system, the table bases (219, ..., 229) can be loaded from the register value regions of the respective domains (101, ..., 105) and/or retrieved from respective registers (241, ..., 245), as discussed above in connection with FIG. 2.

[0070] When the hypervisor status register (233) indicates the presence of an operating hypervisor (102) in the computer system, the table bases (219, ..., 229) can be loaded for a particular virtual machine identified by the virtual machine register (231) from the register value regions of the respective domains (101, ..., 105) and/or looked up for the particular virtual machine using table bases retrieved from respective registers (241, ..., 245), in a way similar to that discussed above in connection with FIG. 3.

[0071] Alternatively, when the hypervisor status register (233) indicates the presence of an operating hypervisor (102) in the computer system, the table bases (219, ..., 229) can be loaded from the register value regions of the respective domains (101, ..., 105); and the content of the virtual machine register (231) can be used to generate an index into the address translation tables (217, ..., 227) at the table bases (219, ..., 229).

[0072] In FIG. 4, each address translation table (217, ..., or 227) stores a number/count (211, ..., or 221) of entries the respective table (217, ..., or 227) has. The number/count (211, ..., or 221) allows the processor (169) to check whether an index used on the address translation table (217, ..., or 227) is within the valid bound defined by the number/count (211, ..., or 221).

[0073] During the translation of a virtual address to a physical address, an index is generated from and/or for the virtual address to retrieve an entry that facilitates the translation of the virtual address to the physical address. FIG. 5 illustrates an example of the generation of the index in address translation (235).
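The bounds check enabled by the number/count (211, ..., 221) can be illustrated with a minimal C sketch; the entry layout is simplified and the names are invented:

#include <stddef.h>
#include <stdint.h>

struct table_entry { uint64_t address; };  /* simplified stand-in for an entry (213, ..., 215) */

struct address_translation_table {
    uint64_t            count;             /* number/count (211, ..., 221)  */
    struct table_entry *entries;           /* entries of the table (217)    */
};

/* returns NULL when the index lies outside the valid bound */
struct table_entry *lookup_entry(struct address_translation_table *t, uint64_t index)
{
    if (index >= t->count)
        return NULL;                       /* out-of-bound index: reject    */
    return &t->entries[index];
}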
[0074] FIG. 5 shows a technique to retrieve an entry (250) from an address translation table (217) to convert a virtual address (195).

[0075] The virtual address (195) can include an object ID (199), an object type (198), and an offset (196). For example, the virtual address (195) can have a width of 128 bits; a number of bits (e.g., 59 or 58) of the virtual address (195) can be used to store the object ID (199), another number of bits (e.g., 5 or 6) of the virtual address (195) can be used to store the object type (198), and the remaining bits (e.g., 64) of the virtual address can be used to store the offset (196) relative to the object that has the type (198) and the ID (199). For example, the virtual address (195) can be an address stored in the memory (109), as configured, programmed, and/or seen by a programmer or user of a routine in a domain (e.g., 105).

[0076] In FIG. 5, a hash (121) is applied on the object ID (199) to generate an index (125). The index (125) has fewer bits than the object ID (199) and thus reduces the size of the address translation table (217) for looking up an entry (e.g., 213, ..., 215) from the table (217). However, hash collision can occur when multiple items are hashed into a same index. Chaining is one of the techniques to resolve hash collisions. The index resulting from a collision can be used to retrieve a list/chain of key-value pairs. Each item that is hashed into the index can be configured as the key in a corresponding key-value pair in the list; and the look up result for the item can be configured as the value in the corresponding key-value pair. To retrieve the look up result of one of the items that are hashed into the same index, the list/chain of key-value pairs identified via the index can be searched to find a key-value pair where the key matches with the item. The value of the matching key-value pair provides the look up result. When there is no hash collision for the index (125), the entry (e.g., 213, ..., or 215) at the index (125) in the address translation table (217) can be retrieved as the resulting entry (250). When there is hash collision for the index (125), the entry (e.g., 213, ..., or 215) at the index (125) in the address translation table (217) identifies a collision chain (260). The collision chain (260) has a list/chain showing the entries (e.g., 262, 264, ...) for the object IDs (e.g., 261, 263) that are hashed (121) into the same index (125). The collision chain (260) can be searched to locate the entry (e.g., 262, or 264) that is specified for an object ID (e.g., 261 or 263) that matches with the object ID (199) before the hash (121). The located entry (e.g., 262, or 264) is illustrated as the resulting entry (250).

[0077] In general, the hash (121) can be applied to a combination of the object ID (199), optionally the object type (198), a portion of the offset, the content of the virtual machine register (231), and/or other information, such as the process ID of the current process running in the processor (169) and/or the content of the domain register (117). In some instances, the content of the domain register (117) and/or the content of the virtual machine register (231) can be appended/added to the result of the hash (121) to generate the index (125).
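The hashed lookup with chaining described for FIG. 5 can be sketched as follows; the hash function, slot layout, and names are illustrative assumptions rather than the disclosed hash (121):

#include <stddef.h>
#include <stdint.h>

struct chain_node {                 /* key-value pair in the collision chain (260) */
    uint64_t           object_id;   /* key: object ID (e.g., 261, 263)             */
    void              *entry;       /* value: translation entry (e.g., 262, 264)   */
    struct chain_node *next;
};

struct slot {
    int                has_collision;
    void              *entry;       /* direct entry (250) when no collision        */
    struct chain_node *chain;       /* collision chain (260) otherwise             */
};

static uint64_t hash_object_id(uint64_t object_id, uint64_t slots)
{
    return (object_id * 0x9E3779B97F4A7C15ULL) % slots;   /* index (125), illustrative */
}

void *lookup(struct slot *table, uint64_t slots, uint64_t object_id)
{
    struct slot *s = &table[hash_object_id(object_id, slots)];
    if (!s->has_collision)
        return s->entry;                       /* resulting entry (250)            */
    for (struct chain_node *n = s->chain; n; n = n->next)
        if (n->object_id == object_id)         /* search the collision chain (260) */
            return n->entry;
    return NULL;                               /* no entry for this object ID      */
}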
[0078] A typical entry (250) looked up from the address translation table (217) using the index (125) can have fields for subsequent operations in address translation (235). For example, a valid field (251) can have a value indicating whether the entry (250) is valid for address translation; a type field (253) can have a value indicating a type of translation to be performed using the entry; a page size field (255) can have a value indicating the memory page size for the determination of a page table entry; an address field (257); etc. For example, the entry (250) can further include a field identifying the page table structure, and/or a field specifying a security configuration (e.g., 107 illustrated in FIG. 6) for accessing the memory region corresponding to the entry (250). Alternatively, the entry (250) can further include a field identifying a table; and a hash of the offset (196) or a portion of the offset (196) can be used as an index in the table to retrieve an entry that identifies a page table structure (e.g., the page table (151) or a page directory leading to the page table (151) illustrated in FIG. 7), or a base (157) of a region (137) of physical addresses (159), or the physical address (159) corresponding to the virtual address (195).

[0079] The address (257) provided in the entry (250) of the address translation table (217) can be the memory address of a page table or page directory. At least a portion of the offset (196) can be used as a virtual page number and an index in the page table or page directory to look up the next page table or page directory. The process of looking up the next page table or page directory can be repeated, until an entry looked up using the last virtual page number in the offset (196) is used to locate a page table entry (e.g., 153 illustrated in FIG. 7). A base (157) of a physical memory page identified in the page table entry (153) can be combined with the remaining portion of the offset (196) (e.g., as the offset (147) illustrated in FIG. 7) to generate a physical address (e.g., 159 illustrated in FIG. 7).
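The repeated look-up described in the preceding paragraph can be sketched as a conventional multi-level walk; the level count, field widths, and the read_physical_word helper are assumptions made only for illustration:

#include <stdint.h>

#define LEVELS           3
#define VPN_BITS         9
#define VPN_MASK         ((1u << VPN_BITS) - 1)
#define PAGE_OFFSET_BITS 12

/* reads one 64-bit word of a page table/directory held in physical memory (109) (assumed) */
extern uint64_t read_physical_word(uint64_t physical_address);

uint64_t walk(uint64_t first_table /* address (257) */, uint64_t offset /* offset (196) */)
{
    uint64_t table = first_table;
    int shift = PAGE_OFFSET_BITS + VPN_BITS * (LEVELS - 1);

    for (int level = 0; level < LEVELS; level++, shift -= VPN_BITS) {
        uint64_t vpn = (offset >> shift) & VPN_MASK;   /* next virtual page number     */
        table = read_physical_word(table + vpn * sizeof(uint64_t));
    }
    /* "table" now holds the base (157) of the physical page; add the page offset (147) */
    return table + (offset & ((1u << PAGE_OFFSET_BITS) - 1));
}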
[0080] Optionally, the hash (121) can be applied to the entire virtual address (195) such that the address (257) looked up using the index (125) is a physical address. In such an implementation, the entry (250) can be considered as a page table entry and can include a security configuration (e.g., 107 illustrated in FIG. 6) for the memory address. However, such an implementation can require a large address translation table (217).

[0081] Alternatively, the hash (121) can be applied to a combination of the object ID (199), optionally the object type (198), and a portion of the offset (196); and the address (257) looked up using the index (125) is a base (e.g., 157 illustrated in FIG. 7) of a page of physical addresses. The remaining portion of the offset (196) can be combined with the base (e.g., as illustrated in FIG. 7) to generate the physical address (e.g., 159). In such an implementation, the address translation table (217) can be considered as a page table (e.g., 151 illustrated in FIG. 7); the portion of the address (195) used to generate the index (125) from hashing (121) can be considered an entry ID (e.g., 145 illustrated in FIG. 7) or a virtual page number (VPN); and the entry (250) can be considered as a page table entry (e.g., 153 illustrated in FIG. 7) and can optionally include a security configuration (e.g., 107) for the memory address.

[0082] Alternatively, the hash (121) can be applied to a combination of the object ID (199), optionally the object type (198), and a portion of the offset (196); and the address (257) in the entry (250) looked up using the index (125) is the physical address of a page table (e.g., 151 illustrated in FIG. 7). Since the entry (250) identifies a page table (e.g., 151), the portion of the address (195) used to generate the index (125) from hashing (121) can be considered a table ID (e.g., 143 illustrated in FIG. 7). A portion of the offset (196) can be used as an entry ID (145) or a virtual page number (VPN) in the page table (e.g., 151) to look up the page table entry (e.g., 153) that contains the base (157) of a memory page or memory region (137); and the remaining portion of the offset (196) can be combined with the base (157) to generate the physical address (159).

[0083] Alternatively, the hash (121) can be applied to a combination of the object ID (199), optionally the object type (198), and a portion of the offset (196); and the address (257) in the entry (250) looked up using the index (125) is the address of a page directory. The offset (196) can have one or more virtual page numbers for one or more page directories or page tables. A virtual page number (VPN) in the offset (196) is used to index into the page directory to look up the base of a subsequent page directory or page table. The last virtual page number (VPN) in the offset (196) is used to index into a page table (e.g., 151) to retrieve the page table entry (153) containing the base (157) of the memory region (137). In such an implementation, the leading portion of the address (195), including the virtual page numbers (VPNs) before the last virtual page number (VPN), can be considered a table ID (143).

[0084] In some instances, when different object IDs are hashed to generate the same index (125), a collision chain (260) can be used to identify a unique address associated with each of the object IDs. In such a situation, the address (257) can be used to identify a table, list, or chain storing the collision chain (260), from which a unique entry (e.g., 262, or 264) for address translation for the object ID (199) can be located. The unique entry (e.g., 262, or 264) looked up from the collision chain (260) can have a structure similar to the entry (250) looked up directly from the address translation table (217) without collision.

[0085] In some implementations, different processes running in the computer system illustrated in FIG. 1 can have different virtual address spaces and thus different entries in the address translation table (217). In such a situation, the process ID can be combined with a portion of the address (195) for the hash (121) to generate the index (125). Optionally, the object ID (199) includes or indicates the process ID.

[0086] In some implementations, different virtual machines use different page tables or page directories looked up from the address translation table (217). Thus, the content of the virtual machine register (231) can be combined with the object ID (199) and/or a further portion of the virtual address (195) to generate the index (125) through the function of the hash (121).
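One possible way to combine the inputs mentioned above into a single index (125) is sketched below; the mixing function is arbitrary and is not the disclosed hash (121):

#include <stdint.h>

static uint64_t mix(uint64_t h, uint64_t v)
{
    h ^= v + 0x9E3779B97F4A7C15ULL + (h << 6) + (h >> 2);   /* illustrative mixing step */
    return h;
}

uint64_t make_index(uint64_t object_id, uint64_t object_type,
                    uint64_t offset_prefix, uint64_t vm_reg,
                    uint64_t domain_reg, uint64_t process_id,
                    uint64_t table_size)
{
    uint64_t h = 0;
    h = mix(h, object_id);       /* object ID (199)                       */
    h = mix(h, object_type);     /* object type (198), optional           */
    h = mix(h, offset_prefix);   /* portion of the offset (196)           */
    h = mix(h, vm_reg);          /* virtual machine register (231)        */
    h = mix(h, domain_reg);      /* domain register (117)                 */
    h = mix(h, process_id);      /* process identifier, when used         */
    return h % table_size;       /* index (125) into the translation table */
}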
[0087] The domain register (117) of the computer processor (169) can be used to store the domain identifier of the routine that is currently being executed in the computer processor (169). For example, upon the execution of an instruction that causes domain crossing, the content of the domain register (117) can be updated to store the domain identifier specified in the instruction, after the instruction is successfully processed. The content of the domain register can control various security operations of the processor (169).

[0088] For example, when the execution of an instruction results in a request to access a memory location identified using a virtual memory address, the virtual memory address can be translated to a physical memory address using one or more page tables. The content of the domain register can be used to select, from a page table entry, a permission bit for the memory access made in the current domain. The selected permission bit can control the processing of the request to access a memory unit identified by the virtual memory address.

[0089] For example, when a call is made to execute a routine having a virtual memory address, the content of the domain register can be used to select a security bit from a page table entry that is used to translate the virtual memory address to a physical memory address. The security bit is selected for executing the routine in providing services for the current domain identified by the domain register. The selected security bit controls security operations of separating resources and/or data between the called routine and the calling routine.

[0090] For example, when the execution of an instruction generates a request to access a privileged register, the content of the domain register can be used to select, from a permission register for example, a permission bit for the current domain to access the privileged register. The permission bit can control the acceptance or rejection of the request to access the privileged register.

[0091] FIG. 6 shows a system to control security operations applied to resources (e.g., 131) in accordance with a domain register (117).

[0092] In FIG. 6, a security control (119) is implemented based on the current domain (123) specified in the domain register (117), and the security configuration (107) having settings (111, 113, ..., 115) specified separately for the predefined domains (101, 103, ..., 105) respectively. The security control (119) is applied to a resource (131), which can be a privileged register (133), a called routine (135), a memory region (137), etc.

[0093] The security configuration (107) can have settings (111, 113, ..., 115) for the domains (101, 103, ..., 105) respectively, without relying upon a static hierarchy of trust among the domains (101, 103, ..., 105).

[0094] During the execution of a routine in the processor (169), the domain register (117) causes the security control (119) to select a setting (e.g., 111, 113, ..., or 115) that is pre-associated with a domain (e.g., 101, 103, ..., or 105) matching with the current domain (123). The selected setting (e.g., 111, 113, ..., or 115) is used by the security control (119) to customize security operations for the resource (131).
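The per-domain selection performed by the security control (119) of FIG. 6 can be pictured with a minimal C sketch; the types and the packing of a setting into a byte are invented for illustration:

#include <stdint.h>

#define DOMAIN_COUNT 3

struct security_setting { uint8_t bits; };           /* setting (111, 113, or 115)     */

struct security_configuration {                      /* security configuration (107)   */
    struct security_setting settings[DOMAIN_COUNT];  /* one per domain (101, 103, 105) */
};

struct security_setting select_setting(const struct security_configuration *cfg,
                                       unsigned current_domain /* current domain (123) */)
{
    return cfg->settings[current_domain];
}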
[0095] For example, when the execution of an instruction of the routine in the processor (169) requests memory access to the memory region (137), the selected setting (e.g., 111, 113, ..., or 115) having its pre-associated domain (e.g., 101, 103, ..., 105) matching the current domain (123) is used by the security control (119) to determine whether the memory access is permissible.

[0096] For example, different regions (e.g., 137) in the memory (109) can be configured with different security configurations (e.g., 107); and each security configuration (e.g., 107) can include different permissions (e.g., 111, 113, ..., 115) for different domains (101, 103, ..., 105). The security configuration (107) can be specified, for example, in a page table entry used in logical to physical address translation of virtual memory addresses, such that the structure of the memory regions can correspond to the memory page structure, as further discussed below in connection with FIG. 7.

[0097] For example, the physical memory (109) can be divided into multiple regions; each region (e.g., 137) can be a page of physical memory (109) for memory management, or a set of pages of physical memory (109).

[0098] For example, a typical memory region (137) can have a respective security configuration (107) specified for the set of predefined domains (101, 103, ..., 105). The security configuration (107) explicitly identifies the permissions (e.g., 111, 113, ..., 115) for the domains (101, 103, ..., 105) respectively. Thus, the privileges of routines to access the memory region (137) are not dependent on a hierarchy of the domains (101, 103, ..., 105).

[0099] In one example, when a routine executed in the current domain (123) causes memory access to the memory region (137) for read, write, or execution of instructions, the domain register (117) causes the security control (119) to check the permission specified in the setting (111, 113, ..., or 115) that corresponds to the current domain (123). Whether to block (or reject) an access to the memory region (137) for a particular type of operations (e.g., read, write, execution) by the execution of an instruction of the routine in the current domain (123) can be determined based on a respective permission bit that is selected according to the current domain (123) for the memory region (137), and for the type of operations. Some details and examples of permissions for memory access to the memory region (137) can be found in U.S. Pat. App. Ser. No. 62/724,896, filed on Aug. 30, 2018 and entitled "Memory Access Control through Permissions Specified in Page Table Entries for Execution Domains," the entire disclosure of which application is hereby incorporated herein by reference.

[0100] In general, different routines of a same domain (e.g., 103) can be configured in different memory regions and thus configured to have different permissions and security settings for the same domain (e.g., 103).

[0101] Further, a routine can be configured to store different portions of its data in different memory regions (e.g., 137) and thus configured to have different permissions for access from a same domain (e.g., 101, 103, ..., or 105).
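Under the assumption that each per-domain setting packs read, write, and execute permission bits, the memory-access check described above might be sketched as follows (layout and names invented):

#include <stdbool.h>
#include <stdint.h>

#define DOMAIN_COUNT 3

enum access_type { ACCESS_READ = 0, ACCESS_WRITE = 1, ACCESS_EXECUTE = 2 };

/* permissions (111, 113, 115) for domains (101, 103, 105); bit i corresponds to access type i */
struct region_security { uint8_t permissions[DOMAIN_COUNT]; };

bool access_permitted(const struct region_security *region_cfg,
                      unsigned current_domain, enum access_type type)
{
    uint8_t bits = region_cfg->permissions[current_domain]; /* setting for current domain (123) */
    return (bits >> type) & 1u;           /* block the request when the selected bit is 0       */
}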
[0102] In another example, when a routine executed in the current domain (123) calls a called routine (135) stored in the memory region (137) for execution, the domain register (117) causes the security control (119) to check the permission specified in the setting (111, 113, ..., or 115) that corresponds to the current domain (123). Whether or not to deploy a security measure to protect the resources of the calling routine against the called routine (135) and/or protect the resources of the called routine (135) against the calling routine can be determined based on a respective permission bit that is specified for the current domain (123) and for the memory region (137).

[0103] Security measures can include sandboxing. Sandboxing in general includes a computer security measure that isolates the execution of a set of instructions (e.g., an application) from certain system resources and/or other sets of instructions/programs. For example, sandboxing can be implemented using a shadow stack structure, where the calling routine and the called routine are configured to use separate stacks and control registers related to the stacks; the calling routine can be prevented from accessing the stack assigned to the called routine, and the called routine can be prevented from accessing the stack assigned to the calling routine. Some details and examples of a shadow stack structure can be found in U.S. Pat. App. Ser. No. 62/724,913, filed on Aug. 30, 2018 and entitled "Security Configurations in Page Table Entries for Execution Domains," the entire disclosure of which application is hereby incorporated herein by reference.

[0104] For example, the security configuration (107) of a typical memory region (137) can have sandboxing settings (e.g., 111, 113, ..., 115) specified for the set of predefined domains (e.g., 101, 103, ..., 105) respectively. The sandboxing configuration (107) explicitly identifies whether or not a sandboxing operation is required for a call to execute a called routine (135) stored in the region (137). Calls to execute the same routine (135) from routines executed in the different domains (101, 103, ..., 105) can have different settings (111, 113, ..., 115); and the settings (111, 113, ..., 115) specify whether the calls from the respective domains (101, 103, ..., 105) require sandboxing (e.g., to protect the called routine (135) and the calling routine from each other). Thus, the sandboxing operations can be selectively applied for the execution of the called routine (135) stored in the memory region (137), based on the current domain (123) identified in the domain register (117) and the explicit settings (e.g., 111, 113, ..., 115) configured for the respective domains (101, 103, ..., 105), without relying upon a predefined hierarchy of domains (101, 103, ..., 105).

[0105] For example, a calling routine in the current domain (123) can call the called routine (135). Whether to invoke a sandboxing operation for the call to execute the called routine (135) stored in the memory region (137) can be determined based on the sandbox setting (e.g., 111, 113, ..., or 115) that is specified for the respective domain (e.g., 101, 103, ..., or 105) matching with the current domain (123) for the memory region (137). Thus, the sandboxing operation can be invoked independent of a relative hierarchy between the domain of the called routine (135) and the current calling domain (123).
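The call-time sandboxing decision can be sketched as below; the shadow-stack helper is a hypothetical placeholder standing in for the stack isolation described above:

#include <stdbool.h>
#include <stdint.h>

#define DOMAIN_COUNT 3

/* per-domain sandbox settings (111, 113, ..., 115) for a memory region (137) */
struct region_sandbox { bool sandbox_required[DOMAIN_COUNT]; };

extern void switch_to_shadow_stack(void);     /* hypothetical stack-isolation helper */
extern void run_called_routine(uint64_t entry_address);

void call_routine(const struct region_sandbox *cfg, unsigned current_domain,
                  uint64_t called_routine /* routine (135) in region (137) */)
{
    if (cfg->sandbox_required[current_domain])
        switch_to_shadow_stack();             /* isolate caller and callee stacks */
    run_called_routine(called_routine);
}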
[0106] The sandbox settings (107) for routines stored in the memory region (137) can be specified, for example, in a page table entry used in logical to physical address translation of virtual memory addresses, such that the structure of the memory regions can correspond to the memory page structure, as further discussed below in connection with FIG. 7.
[0107] In a further example, when a routine executed in the current domain (123) requests access to a privileged register (133), the domain register (117) causes the security control (119) to check the permission specified in the setting (111, 113, ..., or 115) for the privileged register (133). Whether to permit or block the access can be determined based on a respective permission bit that is specified for the current domain (123) and for the privileged register (133).
[0108] For example, the privileged register (133) can have different permissions (111, 113, ..., 115) for the different domains (101, 103, ..., 105) respectively. When an instruction executed in the current domain (123) requests to access the privileged register (133), the domain register (117) causes the security control (119) to select a respective permission (e.g., 111, 113, ..., or 115) corresponding to the current domain (123) to control the access.
[0109] The register (133) can have explicit permissions (111, 113, ..., 115) specified separately for the domains (101, 103, ..., 105) respectively (e.g., non-hierarchical), without relying upon a predefined hierarchy of trust for the domains (101, 103, ..., 105).
[0110] In some instances, the privileged register (133) can be accessed for different types of operations, such as read, write, execution, etc. The permission (e.g., 111, 113, ..., or 115) for a particular domain (e.g., 101, 103, ..., or 105) to access the privileged register (133) can have separate permission bits for the respective types of operations (e.g., read, write, and/or execution).
[0111] The security configuration (107) can be configured to allow an instruction running in one domain (e.g., 101, 103, ..., 105) to access the register (133) for one type of operations (e.g., read) but not for another type of operations (e.g., write).
[0112] The security configuration (107) can be configured to allow an instruction executing in one domain (e.g., 103) to access the register (e.g., 133) via one permission setting (e.g., 113) for the domain (e.g., 103), but prohibit the same instruction running in another domain (e.g., 101) from accessing the register (133) via another concurrent setting (e.g., 111) for that domain (e.g., 101), even when the disallowed domain (e.g., 101) can be more privileged (and thus trusted) than the allowed domain (e.g., 103) in traditional protection rings.
[0113] In one implementation, the security configuration (107) is hardwired in a processor for the privileged register (133). In another implementation, the security configuration (107) can be set via firmware for the register (133) of a processor during a start-up/boot-up process of a computer system. In a further implementation, the security configuration (107) can be changed via privileged software during the normal operations of the computer system.
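Paragraphs [0107]-[0112] describe per-domain permission bits for a privileged register. The C sketch below assumes, purely for illustration, that two bits per domain (read and write) are packed into one 32-bit permission register image; the disclosure leaves the number of bits and their layout open.

```c
#include <stdint.h>
#include <stdbool.h>

enum { DOMAIN_COUNT = 3 };
enum { REG_OP_READ = 0, REG_OP_WRITE = 1 };

/* A permission register image for one privileged register: two bits per
   domain, packed at bit position (domain * 2 + op).  The packing is an
   assumption made for this sketch. */
static bool reg_access_permitted(uint32_t permission_register,
                                 unsigned current_domain, unsigned op)
{
    if (current_domain >= DOMAIN_COUNT)
        return false;
    unsigned bit = current_domain * 2u + op;
    return (permission_register >> bit) & 1u;
}

/* Example (non-hierarchical): domain 0 may read and write, domain 1 may only
   read, and domain 2 may neither read nor write the privileged register. */
static const uint32_t example_permissions =
    (1u << (0 * 2 + REG_OP_READ)) | (1u << (0 * 2 + REG_OP_WRITE)) |
    (1u << (1 * 2 + REG_OP_READ));
```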
[0114] For example, the security configuration (107) for the privileged register (133) can be changed when the processor (169) switches from running a program in one domain (e.g., 101) to running a program in another domain (e.g., 103).
[0115] For example, the security configuration (107) for the privileged register (133) can be changed in accordance with a request when the computer system switches from running one routine to another routine, where the routines can be in the same domain (e.g., 101).
[0116] For example, the security configuration (107) for the privileged register (133) can be configured in a permission register that controls access to the privileged register (133) using permission bits stored in the permission register; and the content of the permission register can be updated by an authorized process to adjust/customize the security level of the computer system for the current computation. Alternatively, permission bits for different domains (101, 103, ..., 105) can be specified in separate registers that correspond to the domains (101, 103, ..., 105) respectively. Some details and examples of permission registers can be found in U.S. Pat. App. Ser. No. 62/724,929, filed on Aug. 30, 2018 and entitled "Access Control for Processor Registers based on Execution Domains," the entire disclosure of which application is hereby incorporated herein by reference.
[0117] Since the security control system of FIG. 8 does not rely upon a predefined domain hierarchy of trust (i.e., it is non-hierarchical), it can provide better flexibility and finer control granularity than the conventional protection rings.
[0118] FIG. 7 illustrates a page table entry (153) having a security configuration (107) for execution domains (e.g., 101, 103, ..., 105).
[0119] For example, the security configuration (107) in the page table entry can be permissions for accessing the memory region (137) identified by the page table entry (153) and/or a sandboxing configuration for calling routines stored in the memory region (137) that is identified by the page table entry (153).
[0120] A typical virtual address (141) in a virtual address space (127) can be translated into a corresponding physical address (159) in a physical address space (129) using a page table (151). In general, multiple page tables (e.g., 151) can be used to map the virtual address space (127) to the physical address space (129).
[0121] The virtual address (141) can include a table ID (143), an entry ID (145), and an offset (147). The table ID (143) can be used to identify a page table (151) that contains a page table entry (153) for a page that contains the memory unit that is identified by the virtual address (141) and the physical address (159). The entry ID (145) is used as an index into the page table (151) to locate the page table entry (153) efficiently. The page table entry (153) provides a base (157) of the physical address (159). Physical addresses in the same page of memory share the same base (157). Thus, the base (157) identifies the region (137) in the memory (109). The offset (147) of the virtual address (141) is used as a corresponding offset (147) in the page or region (137) in the memory (109). The combination of the base (157) and the offset (147) provides the physical address (159) corresponding to the virtual address (141).
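As an illustration of the table ID / entry ID / offset split and of forming the physical address (159) from the base (157) and the offset (147), the following C sketch assumes a conventional 4 KiB page and a 9-bit entry index; the actual field widths are not fixed by the disclosure.

```c
#include <stdint.h>

/* Assumed layout for the sketch only: the disclosure does not fix the field
   widths of the table ID (143), entry ID (145), and offset (147). */
#define OFFSET_BITS 12u                    /* 4 KiB page */
#define ENTRY_BITS  9u                     /* 512 entries per page table */

struct split_va {
    uint64_t table_id;                     /* cf. 143: selects the page table */
    uint64_t entry_id;                     /* cf. 145: index into the page table */
    uint64_t offset;                       /* cf. 147: offset within the page */
};

static struct split_va split_virtual_address(uint64_t va)
{
    struct split_va s;
    s.offset   = va & ((1ull << OFFSET_BITS) - 1);
    s.entry_id = (va >> OFFSET_BITS) & ((1ull << ENTRY_BITS) - 1);
    s.table_id = va >> (OFFSET_BITS + ENTRY_BITS);
    return s;
}

/* The page table entry supplies the base (157); the physical address (159) is
   the page base combined with the page offset. */
static uint64_t make_physical_address(uint64_t base, uint64_t offset)
{
    return (base & ~((1ull << OFFSET_BITS) - 1)) |
           (offset & ((1ull << OFFSET_BITS) - 1));
}
```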
[0122] In FIG. 7, the page table entry (153) specifies not only the base (157) for the page or region (137), but also the security configuration (107) for the page or memory region (137), such as permissions for reading data in the memory region (137) corresponding to the base (157), permissions for writing data into the memory region (137), permissions for executing instructions stored in the memory region (137), and sandboxing requirements for calling routines stored in the memory region (137). The security configuration (107) can have separate settings (111, 113, ..., 115) respectively for the predefined, non-hierarchical domains (101, 103, ..., 105) illustrated in FIGS. 1 and 8. The current domain (123) in the domain register (117) controls which one of the settings (111, 113, ..., 115) is used for a current memory access, or a current call to a routine (135) stored in the memory region (137).
[0123] Optionally, the page table entry (153) can specify other attributes (155) of the page of physical memory, such as whether the data in the page is valid, whether the page is in main memory, and whether the page is dirty (e.g., the changes in data in the page of physical memory have not yet been flushed to a longer-term memory/storage device relative to the memory region (137)). For example, the attributes (155) can include a page fault bit indicating whether the page is in the main memory of the computer or in a storage device of the computer. If the permissions in the security configuration (107) allow the current access to the page of memory and the page fault bit indicates that the page is currently not in the main memory of the computer, the memory management unit (181) can swap the page from the storage device into the main memory of the computer to facilitate the access to the page identified by the page table entry (153). However, if the permissions in the security configuration (107) deny the current access to the page for the current execution domain, it is not necessary to evaluate the page fault bit and/or to swap in the page corresponding to the page table entry (153).
[0124] In general, the table ID (143) can be divided into multiple fields used to locate the page table (151). For example, the table ID (143) can include a top table ID identifying a top-level page table and a top table entry ID that is used as an index into the top-level page table to retrieve a page table entry containing an identifier of the page table (151), in a way similar to the entry ID (145) indexing into the page table (151) to identify the page table entry (153) containing the base (157).
[0125] In general, an entry ID (145) can be considered a virtual page number in the page table (151); and the virtual page number (e.g., 145) can be used in the page table (151) to look up the page table entry (153) containing the base (157).
[0126] For example, the table ID (143) can include a set of virtual page numbers that can be used to identify a chain of page tables (e.g., 151).
Each virtual page number is used as an index in a page table (or page directory) to identify the page table entry (or page directory entry) that contains the identity or base of the next-level page table (or page directory).
[0127] In some instances, different running processes in a computer can have different virtual address spaces (e.g., 127); and the process ID of a running process can be used to determine the top-level page table (or page directory). In some instances, a hash of a portion of the virtual address (141), the process ID, and/or an identification of a virtual machine hosted in the computer system can be used to locate the top-level page table (or page directory). In some instances, a hash is used as an index or key to look up a page table entry. Regardless of how the page table entry (153) is located (e.g., via indexing through multiple page tables, or via the use of a hash as an index or key), the content of the page table entry (153) can be configured in a way as illustrated in FIG. 7 to provide the security configuration (107) for different domains (101, 103, ..., 105) to access the page/memory region (137) and/or the routines stored in the memory region (137) that corresponds to the base (157).
[0128] In FIG. 7, the security configuration (107) for a page or region (137) is specified in the bottom-level page table (151), where the page table entry (153) in the bottom-level page table (151) provides the base (157) of the physical address (159).
[0129] Alternatively, or in combination, higher-level page tables (or page directories) can also have security configurations for their page table entries (or page directory entries). For example, a page table entry (or page directory entry) identifying the page table (151) can have security configurations for all of the pages in the page table (151); and thus, the domain permission data in the page table entry is applicable to the memory region defined by the page table (151). The hierarchy of security configurations in the chain of page table entries leading to the page table (151) and the security configuration (107) in the bottom-level page table entry (153) can be combined via a logic AND operation or a logic OR operation.
[0130] For example, a routine running in a domain (e.g., 101, 103, ..., 105) can be allowed to access a page identified by the base (157) if all of the permission bits in the chain of page table entries leading to the base (157), including the bottom-level table entry (153), have the value that allows access. Alternatively, a routine running in a domain (e.g., 101, 103, ..., 105) can be allowed to access a page identified by the base (157) if any of the permission bits in the chain of page table entries leading to the base (157), including the bottom-level table entry (153), have the value that allows access.
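The two combination policies described in paragraphs [0129]-[0130] (allow only if every level allows, or allow if any level allows) reduce to an AND or an OR over the permission bits collected along the chain of page table entries. A minimal C sketch follows, with the chain represented as an array of booleans for the current domain and operation type; that representation is an assumption made for the example.

```c
#include <stdbool.h>
#include <stddef.h>

/* One permission bit per level of the page-table chain for the current domain
   and the requested operation type (true = allow, false = deny). */
static bool combine_chain_and(const bool allow[], size_t levels)
{
    /* "Allowed only if all levels allow": a single deny anywhere rejects. */
    for (size_t i = 0; i < levels; i++)
        if (!allow[i])
            return false;
    return true;
}

static bool combine_chain_or(const bool allow[], size_t levels)
{
    /* "Allowed if any level allows": denied only when every level denies. */
    for (size_t i = 0; i < levels; i++)
        if (allow[i])
            return true;
    return false;
}
```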
[0131] For example, a routine running in a domain (e.g., 101, 103, ..., 105) can be denied access to a page identified by the base (157) if any of the permission bits in the chain of page table entries leading to the base (157), including the bottom-level table entry (153), have the value that denies access. Alternatively, a routine running in a domain (e.g., 101, 103, ..., 105) can be denied access to a page identified by the base (157) only when all of the permission bits in the chain of page table entries leading to the base (157), including the bottom-level table entry (153), have the value that denies access.
[0132] For example, when a non-bottom-level page table entry (or page directory entry) indicates that the memory access is prohibited, the operations to translate from the virtual address (141) to the physical address (159) can be interrupted to reject the memory access associated with the virtual address (141). In response to the rejection, a trap to the software designated to handle the rejection is used.
[0133] For example, the security configuration (107) can include a set of sandbox setting bits (e.g., 111, 113, ..., 115) for the set of domains (101, 103, ..., 105) respectively. When a sandbox setting bit (e.g., 111, 113, ..., or 115) corresponding to the current domain (123) in the domain register (117) is set to have a first value (e.g., 1 or 0), a current call from a routine in the current domain (123) to a called routine (135) stored in the region (137) is implemented to use a sandboxing operation to protect the calling routine and the called routine (135) from each other (e.g., by using a shadow stack to separate the caller and callee in stack usage). When a sandbox setting bit (e.g., 111, 113, ..., or 115) corresponding to the current domain (123) in the domain register (117) is set to have a second value (e.g., 0 or 1), a call from the routine in the current domain (123) to the called routine (135) stored in the memory region (137) is implemented without using the sandboxing operation to isolate the caller and callee from each other (e.g., without using a shadow stack).
[0134] Optionally, the security configuration (e.g., 107) is specified in the bottom-level page table (151) but not in the higher-level page tables (or page directories).
[0135] FIG. 8 shows a computer system having a domain register (117) controlling security operations.
[0136] For example, the computer system of FIG. 8 can optionally have a page table (e.g., 151) storing the security configuration (107) for accessing the memory region identified by a page table entry (153) of FIG. 7 by routines in the predefined domains (101, 103, ..., 105) illustrated in FIGS. 1 and 8. Further, the computer system of FIG. 8 can optionally have the domain access tables (217, ..., 227) of FIGS. 1 and 2 to facilitate and secure domain crossing.
[0137] For example, the computer system of FIG. 8 can have one or more permission registers storing the security configuration (107) for accessing the privileged register (133) for the predefined domains (101, 103, ..., 105) illustrated in FIGS. 1 and 8.
[0138] The domain register (117) of the processor (169) stores the identifier of the current domain (123). The content of the domain register (117) selects a set of applicable settings of the security configuration (107) corresponding to the current domain (123).
[0139] The computer system of FIG. 8 has a host system (165) coupled to a memory system (161) via one or more buses (163). The memory system (161) has memory components (171, ..., 173).
[0140] For example, the buses (163) can include a memory bus connecting to one or more memory modules and/or include a peripheral interconnect connecting to one or more storage devices. Some of the memory components (171, ..., 173) can provide random access; and some of the memory components (171, ..., 173) can provide persistent storage capability.
Some of the memory components (171, ..., 173) can be volatile in that when the power supply to the memory component is disconnected temporarily, the data stored in the memory component will be corrupted and/or erased. Some of the memory components (171, ..., 173) can be non-volatile in that the memory component is capable of retaining content stored therein for an extended period of time without power.
[0141] In general, a memory system (161) can also be referred to as a memory device. An example of a memory device is a memory module that is connected to a central processing unit (CPU) via a memory bus. Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), a non-volatile dual in-line memory module (NVDIMM), etc. Another example of a memory device is a storage device that is connected to the central processing unit (CPU) via a peripheral interconnect (e.g., an input/output bus, a storage area network). Examples of storage devices include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, and a hard disk drive (HDD). In some instances, the memory device is a hybrid memory/storage system that provides both memory functions and storage functions.
[0142] The memory components (171, ..., 173) can include any combination of the different types of non-volatile memory components and/or volatile memory components. An example of non-volatile memory components includes a negative-and (NAND) type flash memory with one or more arrays of memory cells such as single level cells (SLCs) or multi-level cells (MLCs) (e.g., triple level cells (TLCs) or quad-level cells (QLCs)). In some instances, a particular memory component can include both an SLC portion and an MLC portion of memory cells. Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the host system (165). Alternatively, or in combination, a memory component (171, ..., or 173) can include a type of volatile memory. In some instances, a memory component (171, ..., or 173) can include, but is not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, ferroelectric random-access memory (FeTRAM), ferroelectric RAM (FeRAM), conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), nanowire-based non-volatile memory, memory that incorporates memristor technology, and/or a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased.
[0143] In general, a host system (165) can utilize a memory system (161) as physical memory (109) that includes one or more memory components (171, ..., 173). The host system (165) can load instructions from the memory system (161) for execution, provide data to be stored at the memory system (161), and request data to be retrieved from the memory system (161).
[0144] In FIG.
8, the host system (165) includes a memory management unit (MMU) (181) and a processor (169). The processor (169) has execution units (e.g., 185), such as an arithmetic-logic unit. The processor (169) has registers (183, e.g., 133) to hold instructions for execution, data as operands of instructions, and/or results of instruction executions. The processor (169) can have an internal cache (187) as a proxy of a portion of the memory system (161).
[0145] In some instances, the host system (165) can include multiple processors (e.g., 169) integrated on a same silicon die as multiple processing cores of a central processing unit (CPU).
[0146] Routines programmed for executing in the processor (169) can be initially stored in the memory system (161). The routines can include instructions for a hypervisor (102), an operating system (104), and an application (106). The routines stored initially in the memory system (161) can be loaded to the internal cache (187) and/or the registers (183, e.g., 133) for execution in the execution units (185).
[0147] The running instances of the routines form the executions (167) of the hypervisor (102), the operating system (104), and the application (106). In some instances, a hypervisor (102) is not used; and the operating system (104) controls the hardware components (e.g., the memory system (161), peripheral input/output devices, and/or network interface cards) without a hypervisor.
[0148] The executions (167) of the hypervisor (102), the operating system (104), and/or the application (106) access memory (137) (e.g., in memory components (171, ..., 173)) using virtual memory addresses (e.g., 141) defined in one or more virtual memory spaces (e.g., 127). At least one page table (151) (e.g., as illustrated in FIG. 7) can be used to translate the virtual memory addresses (e.g., 141) used in the execution to the physical memory addresses (e.g., 159) of the memory components (e.g., 171, ..., 173).
[0149] As illustrated in FIG. 1, the executions of the routines of the hypervisor (102), the operating system (104), and the application (106) can be organized into a plurality of domains (101, 103, ..., 105). For each of the execution domains (101, 103, ..., 105) and a memory region (137) identified by a page table entry (153), the page table entry (153) identifies a set (e.g., 111, 113, ..., 115) of security configuration bits for accessing the region (137) in predefined types of operations such as read, write, execution, etc. The configuration bits of the corresponding security configuration (e.g., 107) control the memory accesses of the corresponding types from a respective execution domain (e.g., 101) and/or control the sandboxing operations for isolating calling routines and called routines (e.g., 135).
[0150] The security configuration (107) of the privileged register (133) can be stored in separate permission registers. Each of the permission registers is pre-associated with a domain (e.g., 101, 103, ..., 105). A permission register stores a permission bit for accessing the privileged register (133) from the corresponding domain (e.g., 101, 103, ..., or 105). Different permission bits in the permission register can be configured for different privileged registers (e.g., 133).
In some instances, a privileged register (133) can have multiple permission bits in a permission register for different types of accesses (e.g., read, write, execution).
[0151] Alternatively, permission bits for the privileged register (133) can be specified in a same permission register. Further, permission bits for different privileged registers (e.g., 133) can be stored in different portions of the same permission register.
[0152] FIG. 9 shows a method to translate an object-specific virtual memory address.
[0153] For example, the method of FIG. 9 can be performed in a computer system of FIG. 1 or 8. The method of FIG. 9 can be performed in combination with the address translation techniques of FIGS. 2-5 and 7 and/or the security techniques of FIGS. 6-8.
[0154] At block 301, a memory (109) stores at least instructions of routines of a predefined set of domains (101, 103, ..., 105).
[0155] For example, the predefined set of domains can include at least one of a domain for a hypervisor, a domain for an operating system, or a domain for an application, or any combination thereof.
[0156] At block 303, a computer processor (169) coupled to the memory (109) executes the routines in a plurality of virtual machines.
[0157] At block 305, the computer processor (169) executes an instruction that uses a virtual address (195 or 141) in a current execution domain (123) among the predefined domains (101, 103, ..., 105) and in a current virtual machine in the plurality of virtual machines.
[0158] For example, the current execution domain (123) can be identified by a domain register (117) of the processor (169); and the current virtual machine can be identified by a virtual machine register (231) of the processor (169).
[0159] As illustrated in FIG. 5, the virtual address has an object identifier (199) and an offset (196) of a location within the object represented by the object identifier (199).
[0160] At block 307, the computer processor (169) hashes at least the object identifier (199) provided in the virtual address (195 or 141) to generate an index (125).
[0161] At block 309, a memory management unit (MMU) (181) of the computer processor (169) translates the virtual address (195 or 141) into a physical address (159) by retrieving from an address translation table (217) an entry (250) at the index (125).
[0162] For example, the computer processor (169) can store separate address translation tables (e.g., 217, ..., 227) for different domains (e.g., 101, ..., 103) and/or for different virtual machines. The content of the domain register (117) and/or the content of the virtual machine register (231) can be used to select the address translation table (217).
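Blocks 305-309 amount to: hash the object identifier, pick the address translation table selected by the domain register (and/or the virtual machine register), and read the entry at the resulting index. The C sketch below models that flow in software; the table size, entry fields, and hash function are illustrative assumptions (any hardware-friendly hash could be used), not details taken from the disclosure.

```c
#include <stdint.h>
#include <stddef.h>

#define TABLE_SIZE 1024u                   /* assumed table size for the sketch */

struct translation_entry {                 /* cf. entry 250 */
    uint64_t object_id;                    /* object ID the entry was created for */
    uint64_t page_table_base;              /* cf. address 257 */
    uint8_t  valid;                        /* cf. valid field 251 */
};

struct translation_table {                 /* cf. tables 217, ..., 227 */
    struct translation_entry entry[TABLE_SIZE];
};

/* Toy hash standing in for the hash (121). */
static uint32_t hash_object_id(uint64_t object_id)
{
    object_id ^= object_id >> 33;
    object_id *= 0xff51afd7ed558ccdull;    /* mix step from common 64-bit finalizers */
    object_id ^= object_id >> 33;
    return (uint32_t)(object_id % TABLE_SIZE);
}

/* The domain register (117) selects which per-domain table is searched. */
static const struct translation_entry *
lookup_entry(struct translation_table *const tables[], unsigned current_domain,
             uint64_t object_id)
{
    const struct translation_table *t = tables[current_domain];
    uint32_t index = hash_object_id(object_id);   /* cf. index 125 */
    const struct translation_entry *e = &t->entry[index];
    return e->valid ? e : NULL;
}
```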
[0163] In other instances, different domains (e.g., 101, ..., 103) and/or different virtual machines may share an address translation table (217) but use different entries in the address translation table (217). In such instances, the content of the domain register (117) and/or the content of the virtual machine register (231) can be combined with the object identifier (199) and/or a portion of the offset (196), and the combination is hashed (121) to generate the index (125).
[0164] When the index (125) corresponds to a collision of different values mapped from the hashing (121), the entry (250) at the index (125) in the address translation table (217) can identify a collision chain to resolve ambiguity in the hashing (121).
[0165] Optionally, security configurations (107) for an object represented by the object ID (e.g., 199) of a virtual address (e.g., 195) can be specified in entries (e.g., 250) looked up from an address translation table (217), as illustrated in FIG. 10.
[0166] FIG. 10 shows a system to identify security configurations (107) for accessing a memory location identified by a virtual address (195).
[0167] In FIG. 10, security configurations (e.g., 107) related to an object identified by an object ID (199) are specified in entries (e.g., 250) of the address translation table (217). Each entry (e.g., 250) retrieved from the address translation table (217) or its associated collision chain (260) represents or corresponds to a memory region (137) that stores the object identified by the object ID (199) (or a portion of the object). The security configurations (107) can have security settings for accessing and/or using the resources (131) related to the object.
[0168] For example, the security configurations (107) can specify the permissions of instructions running in the current domain (123) in accessing the object for various memory operations, such as reading any portion of the object represented by the object ID (199), writing over any portion of the object represented by the object ID (199), loading any portion of the object as instructions for execution, etc. For example, the security configurations (107) can include sandboxing requirements for isolating calling routines and called routines (e.g., 135) when the virtual memory address (195) is used to load a routine of the object having the object ID (199) for execution.
[0169] As discussed above in connection with FIG. 6, the virtual address (195) can include an object ID (199), an object type (198), and an offset (196). For example, the virtual address (195) can have a width of 128 bits; a number of bits (e.g., 59 or 58) of the virtual address (195) can be used to store the object ID (199), another number of bits (e.g., 5 or 6) of the virtual address (195) can be used to store the object type (198), and the remaining bits (e.g., 64) of the virtual address can be used to store the offset (196) relative to the object that has the type (198) and the ID (199). For example, the virtual address (195) can be an address stored in the memory (109), as configured, programmed, and/or seen by a programmer or user of a routine in a domain (e.g., 105).
[0170] In FIG. 10, a hash (121) is applied on the object ID (199) to generate an index (125). Since the index (125) has a smaller number of bits than the object ID (199), hash collisions can occur when multiple items are hashed into a same index.
[0171] When there is no hash collision for the index (125), the entry (e.g., 213, ..., or 215) at the index (125) in the address translation table (217) can be retrieved as the resulting entry (250).
[0172] When there is a hash collision for the index (125), the entry (e.g., 213, ...
, or 215) at the index (125) in the address translation table (217) identifies a collision chain (260). The collision chain (260) has a list/chain showing the entries (e.g., 262, 264, ...) for the object IDs (e.g., 261, 263) that are hashed (121) into the same index (125). The collision chain (260) can be searched to locate the entry (e.g., 262 or 264) that is specified for an object ID (e.g., 261 or 263) that matches the object ID (199) before the hash (121). The located entry (e.g., 262 or 264) is illustrated as the resulting entry (250).
[0173] A typical entry (250) looked up from the address translation table (217) using the index (125) can have security configurations (107) that can be evaluated prior to subsequent operations in address translation (235).
[0174] In one embodiment, the address translation table (217) is specific for the current domain (123) identified by the domain register (117). The domain register (117) can be used, as illustrated in FIG. 4, to select the table base (e.g., 219, ..., or 229) of the respective address translation table (e.g., 217, ..., or 227) as the address translation table (217) used in the operations to look up the resulting entry (250) illustrated in FIG. 10. In such an embodiment, the security configuration (107) has the setting (111, 113, ..., or 115) for the current domain (123) but not the settings for other domains.
[0175] Alternatively, when the address translation table (217) is not specific for a particular domain (101, 103, ..., 105), the security configuration (107) can include the settings (111, 113, ..., and 115) for the domains (101, 103, ..., 105) respectively, as illustrated in FIG. 6; and the domain register (117) can be used to selectively apply the setting (e.g., 111, 113, ..., or 115) corresponding to the current domain (123).
[0176] In general, the security configuration (107) can optionally specify whether instructions running in the current domain (123) are permitted to access the object having the object ID (199) for read, write, execution, etc. Further, the security configuration (107) can optionally specify whether it is required to isolate (e.g., using a shadow stack structure) the current routine running in the processor (169) and the routine of the object having the object ID (199) that is being called by the current routine.
[0177] In some instances, the security configuration (107) is applicable for any instructions currently running in the processor (169). In other instances, the security configuration (107) can be applicable for any instructions running in the current domain (123) identified by the domain register (117) of the processor (169), in the current virtual machine identified by the virtual machine register (231), in a current instance of a running program identified by a process ID, in a current user account, or in a current object containing the instruction that is executed to access the virtual address (195), or any combinations thereof.
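When the hash (121) maps several object IDs to the same index (125), the collision chain (260) is walked until the stored object ID matches the one taken from the virtual address. A short C sketch of that resolution step follows; the node layout is an assumption made for the example.

```c
#include <stdint.h>
#include <stddef.h>

/* A collision chain (cf. 260): entries for all object IDs that hash to the
   same index, searched linearly for the ID that matched before hashing. */
struct chain_node {
    uint64_t object_id;                    /* cf. 261, 263 */
    const void *entry;                     /* cf. 262, 264: the per-object entry */
    struct chain_node *next;
};

static const void *resolve_collision(const struct chain_node *head,
                                     uint64_t wanted_object_id)
{
    for (const struct chain_node *n = head; n != NULL; n = n->next)
        if (n->object_id == wanted_object_id)
            return n->entry;               /* cf. resulting entry 250 */
    return NULL;                           /* no entry for this object in the chain */
}
```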
[0178] The entry (250) can include a valid field (251) having a value indicating whether the entry (250) is valid. If the entry (250) is valid and the security configuration (107) allows the current access made using the virtual address (195), the processor (169) can further evaluate the other fields for address translation.
[0179] For example, the entry (250) can include or identify a type field (253) having a value indicating a type of translation to be performed using the entry, a page size field (255) having a value indicating the memory page size for the determination of a page table entry, and an address field (257) having an address of a page table or a page directory for the translation of the offset (196) of the object having the object ID (199) to a physical address (159). A page table entry (153) (or a page directory entry) can have a similar security configuration (107) for a portion of the object corresponding to a memory region (137) controlled/represented by the page table entry (153) (or the page directory entry).
[0180] In general, applicable security configurations (107) can be specified in multiple locations for memory regions of different sizes. For example, the entry (250) retrieved from the address translation table (217) and/or the collision chain (260) can specify a security configuration (107) applicable to the entire object represented by the object ID (199); and the page table entry (153) containing the base (157) of a set of physical addresses (e.g., 159) can specify a security configuration (107) applicable to the set of physical addresses (e.g., 159) at the base (157). Similarly, a page directory entry identifying the page table (151) can specify a security configuration applicable to the set of physical addresses defined by the page table (151).
[0181] When applicable security configurations (107) are specified in multiple locations for memory regions of different sizes, the security configuration (107) specified for the largest one of the memory regions can supersede the security configurations (107) specified for the other memory regions. Thus, when the applicable security configuration (107) specified for the largest one of the memory regions is found, the processor (169) can skip processing of the security configurations (107) specified for the other memory regions.
[0182] Alternatively, when applicable security configurations (107) are specified in multiple locations for memory regions of different sizes, the security configuration (107) specified for the smallest one of the memory regions can supersede the security configurations (107) specified for the other memory regions.
[0183] Alternatively, when applicable security configurations (107) are specified in multiple locations for memory regions of different sizes, a prohibition of access specified in any of the security configurations (107) of the applicable memory regions can cause an access request to be rejected.
[0184] The address (257) provided in the entry (250) of the address translation table (217) can be the memory address of a page table or page directory. At least a portion of the offset (196) can be used as a virtual page number and an index in the page table or page directory to look up the next page table or page directory. In some instances, the portion of the offset (196) is hashed to generate an index into the page table or page directory to look up the next page table or page directory. The process of looking up the next page table or page directory can be repeated, until an entry looked up using the last virtual page number in the offset (196) is used to locate a page table entry (e.g., 153 illustrated in FIG. 7).
A base (157) of a physical memory page identified in the page table entry (153) can be combined with the remaining portion of the offset (196) (e.g., as the offset (147) illustrated in FIG. 7) to generate a physical address (e.g., 159 illustrated in FIG. 7).
[0185] As discussed above in connection with FIG. 5, the hash (121) can be applied to a combination of the object ID (199), optionally the object type (198), a portion of the offset, the content of the virtual machine register (231), and/or other information, such as the process ID of the current process running in the processor (169) and/or the content of the domain register (117). In some instances, the content of the domain register (117) and/or the content of the virtual machine register (231) can be appended/added to the result of the hash (121) to generate the index (125).
[0186] FIG. 11 illustrates security parameters for memory access made using a virtual address (195).
[0187] For example, the security parameters in the security configuration (107) illustrated in FIG. 11 can be specified in an entry retrieved from the address translation table (217) and/or its associated collision chain (260) illustrated in FIG. 10.
[0188] In FIG. 11, when the processor (169) uses the virtual address (195) to access a memory location, the processor (169) identifies the security configuration (107) (e.g., using a technique illustrated in FIG. 10). The security configuration (107) can have a bound check field (331).
[0189] The bound check field (331) identifies the requirement for performing (322) a bound check on the offset (196) of the virtual address (195). When the bound check field (331) has a predetermined value (e.g., 1), the processor (169) compares the offset (196) with the object length (333) to determine whether the offset (196) is within the bounds of valid offsets defined by the object length (333). For example, if the offset (196) is larger than the object length (333), the processor (169) can reject the memory access; and in response to the rejection, a trap to the software designated to handle the rejection can be used. When the bound check field (331) has another predetermined value (e.g., 0), the processor (169) can skip performing (323) the bound check on the offset (196) and/or ignore the object length (333).
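The bound check of paragraph [0189] can be summarized in a few lines of C. The field names below are illustrative; the sketch follows the text in rejecting the access only when the offset exceeds the object length, and in skipping the comparison entirely when the bound check field is not set.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative subset of the security parameters of the entry (250). */
struct object_security {
    bool     bound_check_enabled;   /* cf. bound check field 331 */
    uint64_t object_length;         /* cf. object length 333 */
};

/* Returns true when the access may proceed toward address translation; false
   when it should be rejected (e.g., by trapping to designated software). */
static bool bound_check_ok(const struct object_security *sec, uint64_t offset)
{
    if (!sec->bound_check_enabled)
        return true;                         /* bound check skipped (cf. 323) */
    return offset <= sec->object_length;     /* reject only when offset is larger */
}
```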
[0190] The security configuration (107) can have a permission check field (341) that identifies the requirement for enforcing the permissions (343, 345, ..., 347) specified in the security configuration (107). When the permission check field (341) has a predetermined value (e.g., 1), the processor (169) checks the permission bit (e.g., 343, 345, ..., or 347) corresponding to the type of memory operation requested via the virtual address (195). For example, if the virtual address (195) is used in an instruction causing a memory read operation, the read permission (343) is checked. If the virtual address (195) is used in an instruction causing a memory write operation, the write permission (345) is checked. If the virtual address (195) is used in an instruction causing the execution of an instruction at the memory location, the execution permission (347) is checked. If the respective permission bit prohibits the type of current memory access requested via the virtual address (195), the processor (169) can reject the memory access; and in response to the rejection, a trap to the software designated to handle the rejection can be used. However, when the permission check field (341) has another predetermined value (e.g., 0), the processor (169) can proceed with the address translation (235) of the virtual address (195) without enforcing the permissions (343, 345, ..., 347).
[0191] Optionally, the processor (169) can include an object register (321) that stores the object ID of the current object when the instructions of the current object are running in the processor (169). For example, when the virtual address (195) is used to load an instruction of the object having the object ID (199) for execution, the object register (321) stores the object ID (199) during the execution of the instructions of the object having the object ID (199).
[0192] Optionally, when the virtual address (195) is used to access memory, the security configuration (107) can include the permissions (e.g., 343, 345, ..., 347) for an object identified by the object register (321) to access the object having the object ID (199). For example, the security configuration (107) identified via the entry (250) can include a permission table for a set of objects. From the permission table, the processor can look up the permissions specified for the object identified by the object register (321). The permission table can use the hash of object IDs to look up the permissions specified for an object, in a way similar to the use of the hash (121) to locate an entry (250) from an address translation table.
[0193] In some implementations, when the permission table does not specify permissions for a given object, the default permissions (e.g., 343, 345, ..., 347) can be used for the object that makes the memory access request using the virtual address (195). In other implementations, when the permission table does not specify permissions for a given object, the memory access is rejected. In further implementations, the memory access is allowed, unless the default permissions (e.g., 343, 345, ..., 347) and/or the permission table have a permission bit that prohibits the access.
[0194] Optionally, the security configuration (107) can include a key (335) for cryptographic operations on the data stored at the memory location identified by the virtual address (195). For example, the item stored at the memory location can be in an encrypted or scrambled form; and the key (335) can be used to decrypt or unscramble the data item. Some examples and details of protecting data within a processor (169) can be found in U.S. Pat. App. Ser. No. 16/054,913, filed Aug. 3, 2018 and entitled "Data Protection in Computer Processors," and U.S. Pat. App. Ser. No. 16/134,387, filed Sep. 18, 2018 and entitled "Key Management in Computer Processors," the entire disclosures of which applications are hereby incorporated herein by reference.
[0195] Optionally, the security configuration (107) can include a sandbox setting for the object having the object ID (199). When the sandbox setting has a predetermined value, a routine called via the virtual address (195) is to be isolated from the calling routine using a shadow stack structure, where separate call stacks are used for the calling routine and the called routine; otherwise, the calling routine and the called routine can be executed using a same call stack. Some details and examples of a shadow stack structure can be found in U.S. Pat. App. Ser. No. 62/724,913, filed on Aug. 30, 2018 and entitled "Security Configurations in Page Table Entries for Execution Domains," the entire disclosure of which application is hereby incorporated herein by reference.
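Where the entry supplies a key (335), a load can pass through an unscrambling step keyed by that value. The C sketch below uses a plain XOR purely as a placeholder transform so the data path is visible; the disclosure does not specify the cipher, and a real design would use an actual cryptographic or scrambling scheme.

```c
#include <stdint.h>

/* Placeholder transforms keyed by the key (335); XOR is used here only to
   keep the example self-contained and is not a real cipher. */
static uint64_t unscramble_word(uint64_t stored_word, uint64_t key)
{
    return stored_word ^ key;   /* applied when loading the item from memory */
}

static uint64_t scramble_word(uint64_t plain_word, uint64_t key)
{
    return plain_word ^ key;    /* applied when storing a result back to memory */
}
```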
[0196] FIG. 12 shows a method to perform security operations in response to a memory access request made using a virtual address.
[0197] For example, the method of FIG. 12 can be performed in a computer system of FIG. 1 or 8. The method of FIG. 12 can be performed in combination with the address translation techniques of FIGS. 2-5, 7, and 9 and/or the security techniques of FIGS. 6-8 and 10-11.
[0198] At block 351, a computer system (e.g., as illustrated in FIG. 1 or 8) stores in a memory (109) at least instructions of routines of a predefined set of domains (e.g., 101, 103, ..., 105). The domains (e.g., 101, 103, ..., 105) have no predefined levels of trust and/or hierarchy.
[0199] At block 353, a processor (169) of the computer system executes an instruction that uses a virtual address (e.g., 195 or 141) in a current execution domain (123) among the predefined domains (e.g., 101, 103, ..., 105). The virtual address (195) has an object identifier (199) and an offset (196) of a location within the object represented by the object identifier (199). The virtual address (195) can be programmed and stored in a routine that is loaded from the memory (109).
[0200] At block 355, the processor (169) identifies a table (217) corresponding to the current execution domain (123) among the set of domains (e.g., 101, 103, ..., 105). For example, a technique illustrated in FIG. 4 can be used to identify the table (217) using a domain register (117) that stores the identifier of the current execution domain (123).
[0201] At block 357, the processor (169) hashes (121) at least the object identifier (199) provided in the virtual address (195) to generate an index (125).
[0202] At block 359, the processor (169) retrieves from the table (217) an entry (250) using the index (125). The entry (250) includes or identifies a security configuration (107) specific to the object represented by the object identifier (199).
[0203] At block 359, the processor (169) secures a memory access made via the execution of the instruction that uses the virtual address (195) based on the security configuration (250).
[0204] In one example, the security configuration (250) identifies an object length (333); and the processor (169) compares the offset (196) with the object length (333) to determine whether the memory access resulting from executing the instruction that uses the virtual address (195) is to be rejected. For example, in response to a determination that the offset (196) exceeds a bound identified by the object length (333), the processor can reject the memory access request associated with the virtual address (195).
[0205] In some implementations, the security configuration (250) includes a bound check field (331). When the bound check field (331) has a predetermined value (e.g., 1 or 0), the processor (169) compares the offset (196) with the object length (333) for a bound check (323); otherwise, the processor (169) can skip comparing the offset (196) with the object length (333).
[0206] In another example, the security configuration includes a permission bit (e.g., 343, 345, ..., or 347) for a type of memory access for the current execution domain (123). The processor (169) can reject the memory access request associated with the virtual address (195) in accordance with a value of the permission bit. For example, the permission bit (e.g., 343, 345, ..., or 347) can be set to a predetermined value (e.g., 1 or 0) to prohibit the type of memory access for the current execution domain (123) among the set of domains (101, 103, ...,
105); and another value of the permission bit (e.g., 343, 345, ..., or 347) does not prohibit the type of memory access. Examples of the type of memory access include reading data from virtual addresses, writing data to virtual addresses, or executing instructions stored at virtual addresses, or any combination thereof.
[0207] In some implementations, the security configuration (250) includes a permission check field (341). When the permission check field (341) has a predetermined value (e.g., 1 or 0), the processor (169) checks the permission bit (e.g., 343, 345, ..., or 347); otherwise, the processor (169) can skip checking the permission bit (e.g., 343, 345, ..., or 347).
[0208] In a further example, the security configuration includes or identifies a key (335) for cryptographic operations on an item stored in the memory (109) at the virtual memory address (195). For example, the item can be stored in an encrypted or scrambled form; and the key (335) is used to decrypt the item for calculation during the execution of the instruction, and/or a result of the execution of the instruction is encrypted according to the key (335) for storing in the memory (109) at the virtual memory address (195).
[0209] In yet another example, the security configuration includes a sandbox setting. When the virtual address identifies a memory location of a called routine that is called by the instruction in a calling routine, the processor (169) can selectively isolate the execution of the calling routine and the execution of the called routine based on the sandbox setting. For example, when the sandbox setting has a predetermined value, the processor (169) uses separate call stacks for the calling routine and the called routine; otherwise, the processor (169) can use a same call stack for the execution of the calling routine and the execution of the called routine.
[0210] The techniques disclosed herein can be applied to at least computer systems where processors are separated from memory and processors communicate with memory and storage devices via communication buses and/or computer networks. Further, the techniques disclosed herein can be applied to computer systems in which processing capabilities are integrated within memory/storage. For example, the processing circuits, including execution units and/or registers of a typical processor, can be implemented within the integrated circuits and/or the integrated circuit packages of memory media to perform processing within a memory device. Thus, a processor (e.g., 169) as discussed above and illustrated in the drawings is not necessarily a central processing unit in the von Neumann architecture. The processor can be a unit integrated within memory to overcome the von Neumann bottleneck that limits computing performance as a result of a limit in throughput caused by latency in data moves between a central processing unit and memory configured separately according to the von Neumann architecture.
[0211] The description and drawings of the present disclosure are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well known or conventional details are not described in order to avoid obscuring the description.
References to one or an embodiment in the present disclosure are not necessarily references to the same embodiment; and, such references mean at least one.[0212] In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
Methods, apparatuses, and non-transitory machine-readable media for image location based on a perceived interest and display position are provided. Apparatuses can include a display, a memory device, and a controller. An example controller can assign a perceived interest and sort images based in part on the perceived interest. In another example, a method can include assigning, by a controller coupled to a memory device, a perceived interest to an image of a plurality of images, wherein the perceived interest is assigned based in part on a change in position of a display coupled to the memory device while the image is viewable on the display, selecting the image from an initial viewing location on the display responsive to the assigned perceived interest, and transferring the image to a different viewing location, wherein the initial viewing location and the different viewing location are visible on the display. |
1.A method for image localization based on perceptual interest, the method comprising:assigning perceptual interest to images of the plurality of images (218-1, . . . , 218-N, 318, 418-1, . . , 418-N) by a processor (791) coupled to a memory device (792), wherein the perceived interest is assigned based in part on a change in position of a display (202, 402) coupled to the memory device while the image is viewable on the display;selecting the image from an initial viewing position (424-1) on the display in response to the assigned perceived interest (422); andThe image is transmitted to a different viewing position (424-2), wherein the initial viewing position and the different viewing position are viewable on the display.2.The method of claim 1, further comprising grouping the plurality of images based on the perceptual interest.3.10. The method of claim 1, wherein the change in position of the display comprises changing the display from an initial position to a subsequent position.4.The method of claim 3, further comprising:receiving input from an image sensor coupled to a controller (108, 208, 308, 408) when the display is in the subsequent position; andThe image is communicated to a new viewing location based on the input received from the image sensor.5.The method of claim 3, further comprising:receiving input from an image sensor (103, 203, 303, 403) coupled to a controller (408) when the display is in the subsequent position; andAvoid transmitting the image to a new viewing location based on the input received from the image sensor.6.The method of claim 1, further comprising transmitting a prompt to a computing device (210, 310, 410) to discard the image based on the assigned perceptual interest.7.The method of claim 1, further comprising:determining an assigned perceptual interest for each of the plurality of images; andThe plurality of images are classified into groups based on the assigned perceptual interests.8.The method of claim 7, wherein:each image included in a first group of the plurality of groups includes an image having an assigned perceptual interest corresponding to a desired preference; andEach image included in a second group of the plurality of groups includes an image having an assigned perceptual interest corresponding to an undesired preference.9.The method of claim 8, further comprising transmitting a prompt to a computing device (210, 310, 410) to discard the second set of images.10.A non-transitory machine-readable medium for image location based on perceived interest, the medium comprising instructions executable to:by a controller (108, 208, 308) coupled to a mobile device (210, 310, 410) containing a plurality of images (218-1,..., 218-N, 318, 418-1,..., 418-N) , 408) determining that a position of a display (202, 402) coupled to the mobile device changes when one or more of the plurality of images is viewable on the display (202, 402) coupled to the mobile device;assigning a respective perceptual interest to each of the respective plurality of images, wherein each respective perceptual interest is based in part on whether the respective plurality of images are already viewable on the display when the position of the display changes ;andThe respective plurality of images are classified into a plurality of viewing positions based on the assigned respective perceived interests, wherein the plurality of viewing positions are viewable on a display of the mobile device.11.11. 
The medium of claim 10, further comprising instructions executable to:determining when the display is in an initial position and a subsequent position, wherein the change in position of the display comprises the display moving from the initial position to the subsequent position; andInput is received from an image sensor (103, 203, 303) coupled to the mobile device when the display is in the subsequent position.12.11. The medium of claim 11, further comprising instructions executable to generate a new viewing position based on the input received from the image sensor.13.11. The medium of claim 11, further comprising instructions executable to avoid generating a new viewing position based on the input received from the image sensor.14.13. The medium of any one of claims 10-13, further comprising instructions executable to determine when the display is in an initial position and a subsequent position, wherein:The change in position of the display includes moving the display from the initial position to the subsequent position;the plurality of view positions includes discarding view positions; andIn response to the subset of the respective plurality of images having been viewable on the display but the display being in the subsequent position less than a threshold number of times, classifying the subset to the discard viewing location (424-3) middle.15.14. The medium of any one of claims 10-13, further comprising instructions executable to determine when the display is in an initial position and a subsequent position, wherein:the position change of the display includes moving the display from the initial position to the subsequent position;the plurality of viewing positions includes a preferred viewing position (424-2); andThe respective plurality of images classified into the preferred viewing position have been viewable on the display while the display has been in the subsequent position for greater than a threshold number of times.16.A device for image localization based on perceived interest, the device comprising:a memory device (792) coupled to the display (202, 402);an image sensor (103, 203, 303) coupled to the display; andA controller (108, 208, 308, 408) coupled to the memory device, wherein the controller is configured to:selecting from a plurality of images (218-1, . . . , 218-N, 318, 418-1, . . , 418-N) an image to be viewable on the display;receiving input from the image sensor when the image of the plurality of images is viewable on the display;assigning perceptual interest to the image based at least in part on the received input from the image sensor; andThe image is communicated from an initial viewing position (424-1) on the display to a different viewing position on the display in response to the assigned perceived (422) interest.17.17. The device of claim 16, wherein the image sensor is a camera for providing the input in the form of facial recognition input.18.The apparatus of claim 17, wherein the controller is further configured to:generating a new viewing location based on the facial recognition input; andThe device is prompted to confirm the new viewing location.19.19. The device of claim 18, wherein the controller is further configured to group together subsequent images having a common assigned perceptual interest corresponding to the facial recognition input.20.17. The apparatus of claim 16, wherein the controller is further configured to generate a copy of the image and transmit the copy of the image from the initial position to the different viewing position. |
Image localization based on perceived interest and display location

Technical Field

The present disclosure generally relates to apparatuses, non-transitory machine-readable media, and methods for image localization based on perceived interest and display location.

Background

Images can be viewed on a computing device. Computing devices are mechanical or electrical devices that transmit or modify energy to perform or assist in the performance of human tasks. Examples include thin clients, personal computers, printing devices, laptops, mobile devices (e.g., e-readers, tablets, smartphones, etc.), Internet of Things (IoT) enabled devices, game consoles, and the like. An IoT-enabled device may refer to a device embedded with electronics, software, sensors, actuators, and/or network connectivity that enables such a device to connect to a network and/or exchange data. Examples of IoT-enabled devices include mobile phones, smartphones, tablets, phablets, computing devices, implantable devices, vehicles, home appliances, smart home devices, monitoring devices, wearable devices, devices enabled for smart shopping systems, and other cyber-physical systems.

A computing device may include a display for viewing images and/or text. The display may be a touch screen display that serves as an input device. Relevant data may be received by the computing device when the touch screen display is touched by a finger, a digital pen (e.g., a stylus), or another input mechanism. The touch screen display may contain pictures and/or text, as well as other content that the user may touch to interact with the device.

Summary of the Invention

A method is described. In some examples, the method can include assigning, by a processor coupled to a memory device, a perceptual interest to an image of a plurality of images, wherein the perceptual interest is assigned based in part on a change in position of a display coupled to the memory device while the image is viewable on the display; selecting the image from an initial viewing position on the display in response to the assigned perceptual interest; and transferring the image to a different viewing position, wherein the initial viewing position and the different viewing position are viewable on the display.

A non-transitory machine-readable medium is described. In some examples, the non-transitory machine-readable medium may store instructions executable to: determine, by a controller coupled to a mobile device that includes a plurality of images, that a position of a display coupled to the mobile device changes when one or more of the plurality of images is viewable on the display; assign a respective perceptual interest to each of the respective plurality of images, wherein each respective perceptual interest is based in part on whether the respective plurality of images were already viewable on the display when the position of the display changed; and classify the respective plurality of images into a plurality of viewing positions based on the assigned respective perceived interests, wherein the plurality of viewing positions are viewable on a display of the mobile device.

A device is described.
In some examples, the apparatus may include: a memory device coupled to a display; an image sensor coupled to the display; and a controller coupled to the memory device, wherein the controller is configured to: select an image from a plurality of images to be viewable on the display; receive input from the image sensor when the image of the plurality of images is viewable on the display; and assign a perceptual interest to the image based at least in part on the received input from the image sensor.

Description of the Drawings

FIG. 1 is a functional block diagram in the form of an apparatus having a display, an image sensor, a memory device, and a controller in accordance with several embodiments of the present disclosure.

FIG. 2 is a diagram representing an example of a computing device including a display with a visible image, in accordance with several embodiments of the present disclosure.

FIGS. 3A-3B are diagrams representing example displays including visible images in accordance with several embodiments of the present disclosure.

FIGS. 4A-4B are functional diagrams representing an example computing device for image localization based on perceived interest and display position in accordance with several embodiments of the present disclosure.

FIG. 5 is a block diagram of an example of image localization based on perceived interest and display location, in accordance with several embodiments of the present disclosure.

FIG. 6 is a flowchart representing an example method for image localization based on perceived interest and display location, in accordance with several embodiments of the present disclosure.

FIG. 7 is a functional diagram representing a processing resource in communication with a memory resource having instructions written thereon for image localization based on perceived interest and display location, in accordance with several embodiments of the present disclosure.

Detailed Description

Apparatuses, machine-readable media, and methods related to image localization based on perceived interest and display location are provided. A computing device display (e.g., a monitor, a mobile device screen, a laptop screen, etc.) can be used to view images (e.g., still images, video images, and/or text) on the display. The image may be received by the computing device from another device and/or generated by the computing device. A user of a computing device may prefer some images over others and classify those images into various viewing positions (e.g., view locations) on the display. The computing device may organize the images into viewing positions for the convenience of the user. For example, a computing device may contain a controller and a memory device for organizing images based on a user's preferences. The preference may be based on the user's perceived interest in the image. In an example, a method may include assigning, by a controller coupled to a memory device, a perceptual interest to an image of a plurality of images, wherein the perceptual interest is assigned based in part on a change in position of a display coupled to the memory device while the image is viewable on the display; selecting the image from an initial viewing position on the display in response to the assigned perceptual interest; and transferring the image to a different viewing position, wherein the initial viewing position and the different viewing position are viewable on the display.

As used herein, the term "view location" refers to a location that may be visible on a display of a computing device.
The display may be part of a user interface for the computing device, where the user interface allows a user to receive information from and provide input to the computing device. A viewing location may be selected by the user of the computing device. For example, a user may select a viewing position visible on the display to view the images assigned to that viewing position. Images assigned to a particular viewing location may share a common perceptual interest.

As used herein, the term "perceived interest" refers to the level of importance an image is determined to have. For example, the perceived interest in an image may be an assignment corresponding to the user's subjective interest in the image. For example, a user may generate images using a computing device such as a mobile device (e.g., a smartphone) equipped with an image sensor (e.g., a camera). In other instances, the computing device may receive (or otherwise obtain) the image from the Internet, a screen shot, an email, a text message, or another transmission. Additionally, the computing device may generate groups of images based on criteria in an attempt to correlate perceived interest among grouped images.

The computing device can group images without requiring user input. For example, some methods for generating a group of images without input from a user of a computing device include grouping the images by the geolocation (e.g., GPS) at which the images were generated and/or received, by facial recognition of objects in the images (e.g., grouping images based on the people/things contained in them), and/or by time (e.g., time of day, month, year, and/or season).

However, images that are grouped by a computing device using geolocation, facial recognition, and/or time may be grouped inaccurately, and such grouping may fail to capture a user's subjective perceived interest in the images. For example, the grouped images may not represent images that the user subjectively (e.g., actually) perceives as interesting, but may instead include images that are repetitive, of poor quality, uninteresting, or otherwise unwanted. Inaccurate grouping of images can lead to cluttered viewing positions for images on a display of a computing device and to situations in which users frequently search for a particular image. This can lead to frustration and wasted time, resources, and computing power (e.g., battery life).

When the user of the computing device determines that an image is interesting, the user may show the image to another person. In some instances, the act of showing an image to another person on a computing device involves moving a display of the computing device such that the display is at an angle at which the other person can view the image. In other instances, the act of showing the image to another person on the computing device involves the other person being close enough to the display to be at an angle at which the person can view the image. For example, a person may position himself or herself beside or behind the user so that the display of the computing device is visible to both the user and the person.

Examples of the present disclosure may reduce frustration and confusion and conserve resources and/or computing power by grouping together images that share a user's perceived interest.
In an example embodiment, a perceived interest may be assigned to an image based on a change in the position of a display of a computing device (e.g., a smartphone) while the image (generated, received, and/or otherwise obtained) is viewable on the display of the computing device. In other words, if the user positions the image so that the image is visible on the display and moves the display to a suitable angle so that a different person can view the image, the computing device can assign to the image a perceptual interest corresponding to a desired preference.

In another example embodiment, a perceptual interest may be assigned to an image based on input received from an image sensor coupled to the display while the image, generated, received, and/or otherwise obtained by the computing device (e.g., by a camera of a smartphone), is viewable on the display. In other words, the image sensor coupled to the display can transmit facial recognition data if a person other than the user is at an angle at which the image is visible to that person (e.g., the person is standing next to or behind the user). The computing device may assign to the image a perceived interest corresponding to a desired preference. Embodiments described herein include a viewing location to which a computing device transfers (e.g., copies) images of shared perceptual interest on the display so that a user can easily find images that are frequently presented to and/or viewed by others. As used herein, the term "transfer" refers to moving an image and/or creating a copy of an image and moving the copy from an initial viewing position to a different viewing position. In some instances, corresponding viewing locations may include other images that share a common perceived interest.

Further, the computing device may group the images based on received facial recognition input corresponding to the person viewing the images. In other embodiments, undesired images generated by a computing device can be identified and made available on a display so that a user can review and discard the images, thereby eliminating clutter.

For example, images generated by a computing device that are not visible on the display and/or not provided for viewing by another person when the display position changes may be assigned a perceived interest corresponding to an undesired preference (e.g., a lack of perceived interest) and moved to a viewing position that allows the user to review and discard the images. In other words, a user may sometimes capture, receive, and/or otherwise obtain images, duplicate images, etc. on a computing device (e.g., a smartphone) that may not necessarily be important to the user. These infrequently viewed images can be grouped together, and the computing device can prompt the user to discard them.

In the following detailed description of the present disclosure, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration how one or more embodiments of the present disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of the present disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and structural changes may be made.

As used herein, designators such as "N", "M", etc., particularly with respect to reference numerals in the drawings, indicate that several of the particular features so designated may be included.
It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an", and "the" can include both singular and plural referents unless the context clearly dictates otherwise. Additionally, "several", "at least one", and "one or more" (e.g., several memory devices) can refer to one or more memory devices, whereas "plurality" is intended to refer to more than one of such things. Furthermore, the words "can" and "may" are used throughout this application in a permissive sense (i.e., having the potential to, being able to), not in a mandatory sense (i.e., must). The term "include" and its derivatives mean "including, but not limited to". The terms "coupled" and "coupling" mean a direct or indirect connection, as the case may be, either physically or for access to and movement (transmission) of commands and/or data. The terms "data" and "data value" are used interchangeably herein and can have the same meaning, as the case may be.

The figures herein follow a numbering convention in which the first digit or digits correspond to the figure number and the remaining digits identify an element or component in the figure. Similar elements or components between different figures may be identified by the use of similar digits. For example, 222 may reference element "22" in FIG. 2, and a similar element may be referenced as 322 in FIG. 3. As will be appreciated, elements shown in the various embodiments herein may be added, exchanged, and/or eliminated to provide several additional embodiments of the present disclosure. In addition, the proportions and/or relative dimensions of the elements provided in the figures are intended to illustrate certain embodiments of the present disclosure and should not be taken in a limiting sense.

FIG. 1 is a functional block diagram in the form of a computing system including a device 100 having a display 102, a memory device 106, and a controller 108 (e.g., a processor, control circuitry, hardware, firmware, and/or software) in accordance with several embodiments of the present disclosure. In some embodiments, the memory device 106 may comprise a non-transitory machine-readable medium (MRM) and/or may be similar to the memory device 792 described with respect to FIG. 7.

The device 100 may be a computing device; for example, the display 102 may be a touch screen display of a mobile device such as a smartphone. The controller 108 may be communicatively coupled to the memory device 106 and/or the display 102. As used herein, "communicatively coupled" can include coupling via various wired and/or wireless connections between devices such that data can be transferred in various directions between the devices. The coupling need not be a direct connection and, in some instances, can be an indirect connection.

The memory device 106 may include non-volatile or volatile memory.
For example, non-volatile memory can provide persistent data by retaining written data when not powered, and non-volatile memory types can include NAND flash memory, NOR flash memory, read-only memory (ROM), electrically erasable programmable ROM (EEPROM), erasable programmable ROM (EPROM), and storage class memory (SCM), which can include resistance variable memory such as phase change random access memory (PCRAM), three-dimensional cross-point memory (e.g., 3D XPoint™), resistive random access memory (RRAM), ferroelectric random access memory (FeRAM), magnetoresistive random access memory (MRAM), and programmable conductive memory, among other types of memory. Volatile memory can require power to maintain its data and can include random access memory (RAM), dynamic random access memory (DRAM), and static random access memory (SRAM), among others.

In other embodiments, as shown in FIG. 1, the memory device 106 may include one or more memory media types. FIG. 1 shows non-limiting examples of various memory media types in the form of DRAM 112 including control circuitry 113, SCM 114 including control circuitry 115, and NAND 116 including control circuitry 117. Although three memory media types are shown (e.g., DRAM 112, SCM 114, and NAND 116), embodiments are not so limited, and there may be more or fewer than three memory media types. Further, the memory media types are not limited to the three specifically shown in FIG. 1 (e.g., DRAM 112, SCM 114, and/or NAND 116); other types of volatile and/or non-volatile memory media are contemplated. In several embodiments, the controller 108 and the memory media DRAM 112, SCM 114, and/or NAND 116 may be physically located on a single die or within a single package (e.g., a managed memory application). Also, in several embodiments, multiple memory media (e.g., DRAM 112, SCM 114, and NAND 116) may be included on a single memory device.

The computing device may include an image sensor (e.g., a camera) 103. The image sensor 103 may generate images (e.g., video images, text, etc.) that are viewable on the display 102. Additionally, the image sensor 103 may capture and/or receive input from objects, people, items, etc. and transmit the input to the controller 108 for analysis. In some instances, the image sensor 103 is a camera and can provide input to the controller 108 in the form of facial recognition input. For example, the display 102 may be part of a mobile device (e.g., a smartphone) that includes a camera.

Images generated by the image sensor 103 may be written to (e.g., stored on) the memory device 106. The controller 108 may present images on the display 102 in response to selections made by the user on the display 102. For example, a user may make selections to show images viewable on the display 102 through a menu displayed on the display 102 (e.g., a "Settings" menu, an "Images" or "Pictures" menu, etc.). Such menus may give the user options as to which images the user would like to view, and/or the user may manually select images and organize the images into groups. For example, a user may designate a set of images as "favorite images" and may group other "favorite images" together to create albums and/or folders that may be labeled as desired by the user.

Manually selecting images as "favorite images" can be tedious and, as mentioned above, grouping images without user input (e.g., by geolocation, facial recognition, etc.) can be inaccurate and can include undesired repetitive images, leaving the user to manually search for and select a desired image.
Grouping images by assigning a user's perceived interest to the images may increase the grouping accuracy and efficiency of the computing device and/or the memory device 106.

A perceptual interest may be assigned to an image by determining whether the image is viewable on the display when the position of the display changes and/or by receiving input from the image sensor 103 (e.g., facial recognition input). A position change of the display 102 includes a change of the display 102 from an initial position to a subsequent position. An example of a change in position of the display 102 may include rotating the display 102 a certain number of degrees away from the perspective of a user viewing the display 102 so that the display can be viewed by another person, animal, and/or device. Selecting an image to be viewable on the display 102 and changing the position of the display while the image is viewable on the display 102 may indicate that the image is perceived as interesting by the user. In other words, a user viewing an image on the display 102 and turning the display 102 to show another person may indicate that the user has a preference for the image.

In a non-limiting example, the controller 108, coupled to the memory device 106, may be configured to assign a perceptual interest to an image of the plurality of images, wherein the perceptual interest is assigned based in part on a change in the position of the display 102 while the image is viewable on the display. For example, a user may be viewing an image on the smartphone's display 102 and turn the smartphone so that the display 102 is viewable by a different person. In response to the change in position, the controller 108 may assign a perceptual interest to the image viewable on the display 102. The controller 108 may be configured to select the image from an initial viewing position on the display 102 in response to the assigned perceived interest and transfer the image to a different viewing position. In this example, the controller 108 may copy the image from the initial viewing location (e.g., a default album or folder) and transfer the copy to a different viewing location (e.g., a location for images that have been determined to be of perceived interest).

In some instances, the controller 108 may be configured to apply a threshold number of times the position of the display 102 changes while an image is viewable on the display 102. A user-determined threshold may prevent perceptual interest from being assigned to images due to accidental changes in the position of the display 102. For example, the user may use settings on the computing device to set a threshold requiring that the position of the display 102 change three or more times before a perceived interest corresponding to a desired preference is assigned to the image and/or before the computing device (e.g., the user) is prompted to confirm the perceived interest and/or a new viewing position on the display 102. Although the number three is used herein, the threshold number can be more or less than three. Using this approach, the user would need to change the position of the display three or more times while the image is viewable on the display before the computing device assigns the perceived interest corresponding to the desired preference. In some instances, the computing device may assign a perceptual interest based on input received from the image sensor 103.

For example, the apparatus 100 may be a computing device and may include the memory device 106 coupled to the display 102 through the controller 108.
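To make the threshold behavior described above concrete, the following is a minimal sketch in Python of threshold-based assignment of perceived interest. The class and method names (PerceivedInterestTracker, record_position_change) and the desired/undesired labels are illustrative assumptions and are not part of the disclosure.

```python
# Minimal sketch of threshold-based assignment of perceived interest.
# All names are illustrative; the disclosure does not prescribe an API.

DESIRED = "desired"
UNDESIRED = "undesired"


class PerceivedInterestTracker:
    def __init__(self, threshold=3):
        # Number of display position changes required before an image is
        # assigned a perceived interest corresponding to a desired
        # preference (user-configurable, three by default).
        self.threshold = threshold
        self.position_changes = {}  # image_id -> count

    def record_position_change(self, image_id):
        """Call when the display moves from an initial position to a
        subsequent position while image_id is viewable on the display."""
        count = self.position_changes.get(image_id, 0) + 1
        self.position_changes[image_id] = count
        return count

    def perceived_interest(self, image_id):
        """Return the perceived interest currently assigned to the image."""
        if self.position_changes.get(image_id, 0) >= self.threshold:
            return DESIRED
        return UNDESIRED


# Example: the display is turned toward another person three times while
# image "218-3" is viewable, so the image is marked as desired.
tracker = PerceivedInterestTracker(threshold=3)
for _ in range(3):
    tracker.record_position_change("218-3")
assert tracker.perceived_interest("218-3") == DESIRED
```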
The image sensor 103 may be coupled to the display 102 directly or indirectly through the controller 108. In order to group images into viewing locations on the display based on perceived interest, the controller 108 may be configured to select an image from a plurality of images to be viewable on the display 102. Images may be selected from an initial viewing location on the display 102 (e.g., a default album and/or folder), generated by the image sensor 103, received (e.g., via text or email from another computing device), and/or otherwise obtained by the computing device. When the image is viewable on the display, the user may desire to show the image to another person.

The controller 108 may be configured to receive input from the image sensor 103 when an image of the plurality of images is viewable on the display 102. The display 102 may undergo a change in position and/or the display 102 may be within view of another person (e.g., standing near the user). The input received by the controller 108 from the image sensor 103 may be facial recognition input associated with the person viewing the image. The controller 108 may assign a perceptual interest to the image based at least in part on the input received from the image sensor 103. The controller 108 may transfer the image from an initial viewing position on the display to a different viewing position on the display in response to the assigned perceived interest.

In a non-limiting example, the computing device may be a smartphone, the image sensor 103 may be the smartphone's camera, and the user may configure the camera's settings to capture facial recognition input when the camera is positioned such that it can collect facial data of people, animals, etc. In this example, the camera may capture facial recognition data while the image is viewable on the display 102. The controller 108 coupled to the camera (e.g., the image sensor 103) may generate a new viewing position based on the facial recognition input and prompt the smartphone (e.g., the user of the smartphone) to confirm the new viewing position. The controller 108 may be configured to group together subsequent images having a commonly assigned perceptual interest corresponding to the facial recognition input.

For example, if the user selects an image on their smartphone and shows the image to their mother (or any other person), the smartphone's camera may receive facial recognition data from the mother, and the controller 108 may prompt the user to create a new folder labeled "Mother" (e.g., a new viewing location). The user can then select a different image to show their mother, and when the facial data is collected, the controller 108 can add the different picture to the "Mother" folder. This can be done without user input. In other instances, the controller 108 may determine that one or more images have a perceived interest corresponding to the user's dislike, indifference, or undesired preferences.

The controller 108 may assign a perceived interest corresponding to an undesired preference. This may be in response to the display 102 not changing position while the image is viewable on the display 102. Additionally, other images that have not been selected by the user and/or are not viewable on the display 102 when the display changes position may be grouped together by being assigned a perceived interest corresponding to an undesired preference. Grouped images with undesired preferences can be transferred to a folder for the user to review and discard.
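The facial-recognition grouping described above (e.g., the "Mother" folder) could be modeled roughly as follows. The sketch assumes the image sensor/controller already supplies a stable face identifier for the person viewing the display; the function name and the confirmation prompt are invented for illustration only.

```python
# Sketch of grouping images into viewing locations keyed by facial
# recognition input. The face identifiers are assumed to be provided by
# the image sensor / controller; the prompt is a stand-in for a real UI.

viewing_locations = {}  # face_id -> list of image ids
confirmed = set()       # face_ids the user has confirmed as folders


def on_image_shown(image_id, face_id, prompt_user=input):
    """Called when an image is viewable and the sensor reports a face."""
    if face_id not in confirmed:
        answer = prompt_user(
            f"Create a new viewing location for '{face_id}'? (y/n) ")
        if answer.strip().lower() != "y":
            return  # avoid transferring the image to a new location
        confirmed.add(face_id)
    # Transfer (copy) the image into the viewing location for this face;
    # the image also remains in its initial viewing location.
    viewing_locations.setdefault(face_id, []).append(image_id)


# Example: two images shown to the same person end up grouped together.
on_image_shown("img_001", "Mother", prompt_user=lambda _: "y")
on_image_shown("img_002", "Mother", prompt_user=lambda _: "y")
print(viewing_locations)  # {'Mother': ['img_001', 'img_002']}
```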
For example, the controller 108 may transfer the image to a particular viewing location on the display 102. In some instances, the controller 108 may write image data corresponding to images in the viewing positions on the display 102 to various memory types.

In an example embodiment, the controller 108 may be coupled to a variety of memory media types (e.g., DRAM 112, SCM 114, and/or NAND 116), wherein images contained in the initial viewing position may be written to a first memory media type (e.g., DRAM 112) and images contained in the different viewing position may be written to a second memory media type (e.g., NAND 116). For example, the different viewing position on the display 102 may include images written to a memory media type that is more secure and/or better suited for long-term storage on the computing device. As such, viewing locations written to respective memory media types (e.g., DRAM 112, SCM 114, and/or NAND 116) may contain other images that have been selected by the controller 108 based on their respective perceptual interests.

FIG. 2 is a diagram representing an example of a computing device 210 including a display 202 having a visible image 218 in accordance with several embodiments of the present disclosure. FIG. 2 illustrates a computing device 210, such as a mobile device, that includes an image sensor 203 similar to the image sensor 103 of FIG. 1 and a display 202 similar to the display 102 of FIG. 1. The computing device 210 further includes a memory device 206 similar to the memory device 106 of FIG. 1. The memory device 206 may be coupled to a controller 208, which may be similar to the controller 108 of FIG. 1. FIG. 2 shows the display 202 as including a plurality of images 218-1, 218-2, 218-3, 218-4, and 218-N, which may be referred to herein as images 218. FIG. 2 shows a non-limiting example of a particular image 218-3 represented by a star and other images 218-1, 218-2, 218-4, and 218-N represented by circles. Other blocks shown in the display 202 are similar to the images 218 but are not labeled here to avoid obscuring examples of the present disclosure.

The display 202 includes a plurality of images 218. In some instances, the plurality of images 218 may be included in the initial viewing position on the display 202 and presented in chronological order. In other words, the plurality of images 218 may be the contents of the initial viewing location. For example, the plurality of images 218 may be presented to the user in the order in which they were generated by the image sensor 203 (e.g., camera) and/or received, transmitted, or otherwise obtained by the computing device 210. The user may select one or more of the images 218-1, 218-2, 218-3, 218-4, 218-N from the plurality of images 218 using an appendage (e.g., a finger) or a device (e.g., a stylus, a digital pen, etc.). Selecting a particular image 218-3 over the other images 218-1, 218-2, 218-4, and/or 218-N may indicate a perceived interest corresponding to the user's desired preferences.

The controller 208 may assign perceptual interest to the images 218 using a variety of methods. For example, the controller 208 may assign a perceptual interest based on a particular image 218-3 being selected such that the image 218-3 is viewable on the display 202 when the display 202 changes position, as will be described in conjunction with FIGS. 3A-3B. When a particular image 218-3 is selected, the image may be enlarged so that it covers all or most of the display 202.
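The example embodiment in which different viewing positions are backed by different memory media types could be expressed as a simple policy table, as in the hedged sketch below; the tier assignments and the write_image routine are assumptions rather than an interface defined by the disclosure.

```python
# Sketch of a policy mapping viewing positions to memory media types.
# The media names mirror FIG. 1 (DRAM 112, SCM 114, NAND 116); the
# write_image function is a placeholder for a real storage backend.

MEDIA_FOR_LOCATION = {
    "initial":            "DRAM",  # default album/folder, short-lived
    "preferred":          "NAND",  # long-term, non-volatile storage
    "facial_recognition": "NAND",
    "discard":            "SCM",   # staged until the user reviews it
}


def write_image(image_id, viewing_location):
    media = MEDIA_FOR_LOCATION.get(viewing_location, "DRAM")
    # In a real system this would issue a write to the selected media;
    # here we only report the decision.
    return f"image {image_id} written to {media} for '{viewing_location}'"


print(write_image("218-3", "preferred"))  # image 218-3 written to NAND ...
```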
A user may configure the computing device 210 (e.g., the controller 208) to assign, to an image (e.g., the image 218-3) selected from the set of images 218, a perceptual interest corresponding to the user's desired preference when the position of the display 202 changes three or more times while the image is visible on the display. Although three or more times is used as an example herein, the number of times the display needs to change position may be greater or less than three. The computing device 210 may store metadata, including metadata values associated with the image, that may indicate the perceived interest of the image, the positioning of the image on the display, the grouping of the image, and other information that may be included in metadata associated with an image. Eliminating the requirement for a user to manually designate an image as a "favorite image" may reduce clutter and frustration in the user experience of the computing device 210.

In another non-limiting example, the controller 208 may assign a perceptual interest to one or more of the images 218 when the image 218 is shown to another person and the image sensor 203 collects facial recognition data. For example, when a particular image 218-3 is selected and positioned on the display 202 so that another person (and/or animal or device) can view the image, the controller 208 may assign to the image a perceived interest corresponding to the desired preference.

In some instances, the computing device 210 and/or the controller 208 may be configured (e.g., via a setting, etc.) to generate a new viewing position corresponding to the collected facial recognition data without user input. In other examples, the computing device 210 and/or the controller 208 may be configured to prompt the user for confirmation before generating a new viewing position corresponding to the collected facial recognition data.

In a non-limiting example, a user may be positioned in front of the computing device 210 such that the display 202 is visible to the user while the particular image 218-3 is viewable on the display 202. A different person may position themselves beside and/or behind the user so that the person may also view the display 202 and the particular image 218-3. When the image sensor 203 detects that the other person is positioned to view the particular image 218-3, the controller 208 may assign to the image 218-3 a perceived interest corresponding to the user's desired preference. The image sensor 203 may collect facial recognition data from the person, and the controller 208 may generate a new viewing position corresponding to the person to which to transfer the image 218-3.

In another non-limiting example, a user may be positioned in front of the computing device 210 such that the display 202 is visible to the user while the particular image 218-3 is viewable on the display 202. The user can change the position of the display 202 so that a different person can also view the display 202 and the particular image 218-3. When the image sensor 203 detects that the other person is positioned to view the particular image 218-3 and/or when the display 202 changes from an initial position to a subsequent position, the controller 208 may assign to the image 218-3 a perceived interest corresponding to the user's desired preference.
The image sensor 203 can collect facial recognition data from the person, and the controller 208 can generate a new viewing position on the display 202 that corresponds to the person.

In some instances, a perceptual interest may be assigned to an image 218 that was not selected, was not viewed by another person, and/or was not viewable on the display 202 when the position of the display 202 changed from an initial position to a subsequent position. For example, assume that the images 218-1, 218-2, 218-4, and 218-N have not been selected, have not been viewed by another person, and/or have not been viewable on the display 202 when the position of the display 202 changed from an initial position to a subsequent position. In this example, the controller 208 may assign to these images a perceived interest corresponding to an undesired preference of the user. Images with perceived interests that reflect the user's disinterest may be sorted and transferred to a different viewing position on the display 202. In some instances, this viewing position may be used to prompt the user to discard these images to reduce clutter and free memory space on the memory device 206.

In some embodiments, the controller 208 may change the perceived interest of an image 218. For example, the image 218-1 may be assigned a perceived interest corresponding to an undesired preference of the user of the computing device 210. Then, in response to the image 218-1 being selected, viewed by another person, and/or viewable on the display 202 when the position of the display 202 changes from the initial position to the subsequent position, the controller 208 may assign a new perceived interest corresponding to the user's desired preference.

As will be discussed in conjunction with FIGS. 4A and 4B, the controller 208 may classify the plurality of images 218 by grouping the plurality of images 218 based on perceptual interest. This can be done without user input (e.g., the controller 208 can be configured with user preferences when setting up the computing device 210), or the user can choose to be prompted to ask whether sorting and/or grouping is preferred. For example, upon loading an application, the controller 208 may determine that the user may want a perceived interest assigned to a particular image 218 and may prompt the user for confirmation. Alternatively, the controller 208 may determine that the user may want a perceived interest assigned to an image that is not selected, not viewed by another person, and/or not viewable on the display 202 when the position of the display 202 changes from the initial position to the subsequent position.

FIGS. 3A-3B are diagrams representing an example display 302 including a visible image 318 in accordance with several embodiments of the present disclosure. FIGS. 3A-3B each illustrate a display 302 that is similar to the displays 102 and 202 of FIGS. 1 and 2. The display 302 may be part of a computing device (e.g., the computing device 210 of FIG. 2) and coupled to a controller (e.g., the controller 208 of FIG. 2) and a memory device (e.g., the memory device 206 of FIG. 2). FIGS. 3A-3B each contain an image 318 that may be similar to the image 218 of FIG. 2. FIGS. 3A-3B also show a person 321. Although FIGS. 3A-3B are shown as containing a single person, there may be more than one person. Further, although the depictions of FIGS. 3A-3B include representations of humans, any animal or device may be used.

FIG. 3A shows the display 302 including the visible image 318.
In FIG. 3A, the computing device 310 is in an initial position in which a user (not shown) may face the display 302 so that the image 318 is visible to the user. As shown in FIG. 3A, the person 321 is not in a position to view the image 318. FIG. 3B shows an example of the display 302 coupled to the computing device 310 in a subsequent position. In this example, the display 302 has changed position from the initial position shown in FIG. 3A to the subsequent position shown in FIG. 3B. Although the person 321 of FIGS. 3A and 3B is located to the right of the computing device 310, the person 321 may be positioned to the left of the computing device 310, in front of it, and/or anywhere in between.

In the subsequent position shown in FIG. 3B, the image 318 is visible to the person 321. The controller of the computing device 310 may assign a perceived interest corresponding to a desired preference to the image 318 based on the image 318 being viewable on the display 302 when the position of the display 302 changes from the initial position (of FIG. 3A) to the subsequent position (of FIG. 3B). In another example, when the display 302 is in the subsequent position, the controller of the computing device 310 may receive input from the image sensor 303 coupled to the controller and, based on the input received from the image sensor, transfer the image 318 to a new viewing position.

In other words, the subsequent position changes the angle of the display so that the person 321 can view the image 318. The image sensor 303 may collect input (e.g., facial recognition input) and generate a new viewing position to which to transfer the image 318 (and/or a copy of the image 318). In this example, the new viewing position may correspond to the person 321, and other subsequent images showing the person 321 may be transferred to the new viewing position on the display 302. This can be done without user input. For example, upon receipt of a subsequent image, the controller may determine that the facial recognition input corresponds to the person 321 and transfer the subsequent image to the new viewing location without prompting the user, or the user may choose to be prompted to ask whether this is preferred. For example, upon receipt of subsequent images, the controller may determine, based on the facial recognition input corresponding to the person 321, that the user may want to transfer the image to the new viewing location, and may prompt the user for confirmation. In some instances, the controller of the computing device 310 may avoid transferring the image 318 to a new viewing location.

In another non-limiting example, the controller of the computing device 310 may receive input from the image sensor 303 coupled to the controller when the display 302 is in the subsequent position and avoid transferring the image 318 to a new viewing location based on the input received from the image sensor. For example, the image sensor 303 may collect input (e.g., facial recognition input), and the controller 308 may generate a new viewing position to which to transfer the image 318 (and/or a copy of the image 318) based on the input received from the image sensor 303. The controller may prompt the user to confirm the creation of the new viewing location. The person 321 may be unknown to the user (e.g., or rarely encountered, etc.), and the user may not wish to dedicate a new viewing location to the unknown person 321.

In the above example, where the person 321 is unknown to the user, the controller may assign to the image 318 a perceived interest corresponding to an undesired preference.
In this example, the controller may further transmit a prompt to the computing device 310 and/or the user to discard the image 318 based on the perceived interest corresponding to an undesired preference.

FIGS. 4A-4B are functional diagrams representing a computing device 410 for image localization on a display 402 based on perceived interest and display position, in accordance with several embodiments of the present disclosure. FIGS. 4A and 4B each illustrate a display 402 similar to the displays 102, 202, and 302 of FIGS. 1, 2, and 3A-3B and images 418-1 to 418-N, which may be referred to herein as images 418, similar to the images 218 and 318 of FIGS. 2 and 3A-3B. The display 402 may be part of the computing device 410, which may be similar to the computing device 210 of FIG. 2, and coupled to a controller 408, which may be similar to the controllers 108 and 208 of FIGS. 1 and 2, and a memory device 406, which may be similar to the memory devices 106 and 206 of FIGS. 1 and 2.

FIG. 4A shows the images 418-1 through 418-N contained in an initial viewing position 424-1. FIG. 4B illustrates the image viewing positions visible on the display 402. The initial viewing position 424-1 may contain each of the plurality of images 418. The images 418 may be viewable in the initial viewing position 424-1 in chronological order, and/or the initial viewing position 424-1 may be the default image viewing position for images generated, received, or otherwise obtained by the computing device 410. Another viewing position may be a preferred image viewing position 424-2, in which the viewable images may include images that have been assigned (by the controller 408) perceived interests corresponding to the user's desired preferences.

A discard viewing location 424-3 may contain images that have been assigned (by the controller 408) a perceived interest corresponding to the user's undesired preferences. The discard viewing location 424-3 may contain images that the user may not want to keep because they are not frequently viewed or shown to another person. The controller 408 may prompt the user to review the images contained in the discard viewing location 424-3 and discard the images from the computing device 410. Yet another viewing location may include images corresponding to facial recognition input collected by the image sensor 403. A facial recognition viewing location 424-M may contain images that have been viewed by a person (e.g., the person 321 of FIGS. 3A-3B). The images 418 may be grouped and transferred to viewing locations on the display 402 based at least in part on the perceived interests assigned by the controller 408. As described herein, transferring an image 418 may include generating a copy of the image 418 and transferring the copy to a different viewing location 424. In other words, the controller 408 may be further configured to generate a copy of the image 418 and transfer the copy of the image 418 from the initial viewing position 424-1 to a different viewing position 424-2, 424-3, 424-M.

As shown in FIG. 4A, the controller 408 may be configured to assign a perceptual interest to each of the plurality of images 418. For example, the controller 408 may be further configured to determine an assigned perceptual interest for each of the plurality of images 418 and, as shown in FIG. 4A, classify the plurality of images 418 into a plurality of groups based on the assigned perceptual interests. For example, the images 418-1, 418-3, 418-5, 418-8, 418-9, and 418-N represented by stars and triangles may be included in a first group.
The images 418-2, 418-4, 418-6, and 418-7 represented by circles may be included in a second group.

In the above non-limiting example, each image 418-1, 418-3, 418-5, 418-8, 418-9, and 418-N included in the first group of the plurality of groups has an assigned perceived interest corresponding to a desired preference. The images 418-1, 418-3, 418-5, 418-8, 418-9, and 418-N may have been assigned perceived interests corresponding to desired preferences because the images were shown to another person, because the images were viewable on the display 402 as the display 402 changed position from the initial position to the subsequent position, or a combination thereof.

Further, in the above non-limiting example, each image 418-2, 418-4, 418-6, and 418-7 included in the second group of the plurality of groups has an assigned perceptual interest corresponding to an undesired preference. The images 418-2, 418-4, 418-6, and 418-7 may have been assigned a perceived interest corresponding to an undesired preference because the images were not shown to another person, because the images were not viewable on the display 402 when the display changed from the initial position to the subsequent position, or a combination thereof. The controller 408 may be further configured to transmit a prompt to the computing device 410 to discard the second group of images.

As described above, the controller 408 may group and categorize the images 418 based on perceived interest. The controller 408 may further transfer the images to the viewing locations 424 based on the perceived interest assigned at block 422 of FIG. 4A. In some instances, an image 418 may exist in multiple viewing positions 424.

For example, all of the images 418-1 through 418-N are viewable in the initial viewing position 424-1. The controller 408 may assign (at 422) a perceived interest corresponding to a desired preference to the images 418-1, 418-3, 418-5, 418-8, 418-9, and 418-N and transfer the images to the preferred viewing location 424-2 so that the images are viewable in both the initial viewing position 424-1 and the preferred viewing position 424-2.

Further, the images 418-9 and 418-N represented by triangles may correspond to input from the image sensor corresponding to a person who viewed the images 418-9 and 418-N and may be transferred to the facial recognition viewing location 424-M. In this example, the images 418-9 and 418-N represented by triangles may be viewable in the initial viewing position 424-1, the preferred viewing position 424-2, and the facial recognition viewing position 424-M.

The images 418-2, 418-4, 418-6, and 418-7 may have been assigned (at 422) a perceived interest corresponding to an undesired preference because the images were not shown to another person, because the images were not viewable on the display 402 when the display 402 changed position from the initial position to the subsequent position, or a combination thereof. These images are viewable in the initial viewing position 424-1 and the discard viewing position 424-3 so that the user can check the discard viewing position 424-3 and discard the images as desired. In some instances, discarding an image from any of the plurality of viewing positions 424 may discard the image from the computing device 410.

FIG. 5 is a block diagram 539 of an example of image localization based on perceived interest and display location, in accordance with several embodiments of the present disclosure. FIG. 5 depicts a computing device (e.g., the computing device 410 of FIG. 4) equipped with a camera for generating images and a controller (e.g., the controller 108 of FIG.
1) for receiving, transmitting, or otherwise obtaining images. At block 540, the computing device may generate (e.g., or receive, etc.) an image, and the controller may receive the image. The image may be saved to an initial viewing position (e.g., the initial viewing position 424-1 of FIG. 4B). At block 542, the controller may determine that the position of the display of the mobile device has changed.

For example, the controller may determine when the display is in an initial position and a subsequent position, where a change in position of the display includes movement of the display from the initial position to the subsequent position. At block 544, the controller may assign a perceptual interest to the image. If the image is not viewable on the display when the display changes position from the initial position to the subsequent position, the controller may assign a perceived interest corresponding to an undesired preference. If the image is viewable on the display as the display changes position from the initial position to the subsequent position, the controller may assign a perceived interest corresponding to a desired preference.

At block 546, the controller may transfer the image from the initial viewing position on the display (e.g., the initial viewing position 424-1) to a different viewing position on the display (e.g., the preferred viewing position 424-2 or the discard viewing position 424-3 of FIG. 4B). At block 548, the controller may receive facial recognition input from an input sensor (e.g., a camera on the mobile device). The facial recognition input may come from the person to whom the user shows the image as the display, with the image visible on it, changes position from the initial position to the subsequent position.

At block 550, the controller may assign a new perceptual interest to the image. For example, the controller may assign the new perceived interest and/or refrain, at 558, from transferring the image to a viewing location corresponding to the facial recognition input. In this instance, the user may have declined the prompt to generate the viewing location corresponding to the person. In another example, at 556, the controller may transfer the image to a viewing location corresponding to the facial recognition input. Although a "preferred viewing position", a "discard viewing position", and an "initial viewing position" are discussed, additional and/or different viewing positions may be used, such as an "editing viewing position", a "frequently emailed and/or texted viewing position", etc.

In a non-limiting example, the mobile device may be configured by the user to include a threshold. The user may have configured settings on the mobile device to set a threshold requiring that the change of the display from an initial position (FIG. 3A) to a subsequent position (FIG. 3B) occur three or more times before a perceived interest corresponding to the user's desired preference is assigned.

In a non-limiting example, the controller may determine (at 542) when the display is in an initial position and a subsequent position, wherein the change in position of the display includes moving the display from the initial position to the subsequent position and the plurality of viewing positions includes a discard viewing position, and, responsive to a subset of the respective plurality of images (e.g., the images 418 represented by the circles of FIG.
4) having been viewable on the display while the display was in the subsequent position less than a threshold number of times, the subset is classified into the discard viewing location.

In another non-limiting example, the controller may determine (at 542) when the display is in an initial position and a subsequent position, wherein the change in position of the display includes movement of the display from the initial position to the subsequent position and the plurality of viewing positions includes a preferred viewing position, and the respective plurality of images classified into the preferred viewing position have been viewable on the display while the display was in the subsequent position greater than a threshold number of times.

FIG. 6 is a flowchart representing an example method 680 for image localization based on perceived interest and display location, in accordance with several embodiments of the present disclosure. At 682, the method 680 includes assigning, by a processor coupled to a memory device, a perceptual interest to an image of a plurality of images, wherein the perceptual interest is assigned based in part on the image being viewable on a display coupled to the memory device when the position of the display changes.

For example, a change in the position of the display includes moving the display from an initial position to a subsequent position. In other examples, the perceptual interest may be assigned based on input received by the computing device through the image sensor.

At 684, the method 680 includes selecting the image from an initial viewing position on the display in response to the assigned perceived interest. The perceived interest may correspond to an undesired preference of the user of the computing device, and the image may be transferred from the initial viewing position to the discard viewing position.
In other instances, the perceived interest may correspond to a desired preference of the user of the computing device, and the image may be transferred from the initial viewing position to the preferred viewing position. In other words, at 686, the method 680 can include transferring the image to a different viewing position, wherein the initial viewing position and the different viewing position are viewable on the display.

In several embodiments, methods in accordance with the present disclosure may include: identifying data for an image displayed via a user interface; determining a relative position of the user interface, or input from a sensor, or both, when the image is displayed on the user interface; and writing metadata associated with the data of the image to a memory coupled to the user interface based at least in part on the relative position of the user interface or the input from the sensor.

Embodiments of the present disclosure may also include reading the metadata from the memory and displaying the image at a location on the user interface or for a certain duration, or both, based at least in part on the value of the metadata. Embodiments of the present disclosure may also include reading the metadata from the memory and writing the data for the image to a different address in the memory or in external storage based at least in part on the value of the metadata. Embodiments of the present disclosure may also include reading the metadata from the memory and modifying the data of the image based at least in part on the value of the metadata.

FIG. 7 is a functional diagram representing a processing resource 791 in communication with a memory resource 792 having instructions 794, 796, and 798 written thereon. In some embodiments, the memory resource 792 may be similar to the memory device 106 described with respect to FIG. 1. In some instances, the processing resource 791 may be similar to the controller 108 described with respect to FIG. 1.

The system 790 may be a server or a computing device (among others) and may include the processing resource 791. The system 790 may further include the memory resource 792 (e.g., a non-transitory MRM) on which instructions, such as the instructions 794, 796, and 798, may be stored. Although the following description refers to a processing resource and a memory resource, the description may also apply to a system with multiple processing resources and multiple memory resources. In such examples, the instructions may be distributed (e.g., stored) across multiple memory resources, and the instructions may be distributed across (e.g., executed by) multiple processing resources.

The memory resource 792 may be an electronic, magnetic, optical, or other physical storage device that stores executable instructions. Thus, the memory resource 792 may be, for example, a non-transitory MRM comprising random access memory (RAM), electrically erasable programmable ROM (EEPROM), a storage drive, an optical disc, and the like. The memory resource 792 may be disposed within a controller and/or a computing device. In this example, the executable instructions 794, 796, and 798 may be "installed" on the device. Additionally and/or alternatively, the memory resource 792 may be, for example, a portable, external, or remote storage medium that allows the system 790 to download the instructions 794, 796, and 798 from the portable/external/remote storage medium. In this situation, the executable instructions may be part of an "installation package".
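A rough sketch of the metadata-oriented embodiment described above (identify the image data, determine the relative position of the user interface and/or the sensor input, then write metadata) is given below; the metadata fields and function name are invented for illustration and are not prescribed by the disclosure.

```python
import time

# Sketch of writing perceived-interest metadata for a displayed image.
# The metadata fields (interest, display_moved, face_seen, location)
# are illustrative; the disclosure does not fix a schema.

metadata_store = {}  # image_id -> metadata dict


def write_image_metadata(image_id, display_moved, face_seen):
    interest = "desired" if (display_moved or face_seen) else "undesired"
    metadata_store[image_id] = {
        "interest": interest,
        "display_moved": display_moved,
        "face_seen": face_seen,
        "location": "preferred" if interest == "desired" else "discard",
        "written_at": time.time(),
    }
    return metadata_store[image_id]


# Example: the display changed position while the image was visible, so
# the image is tagged as desired and placed in the preferred location.
print(write_image_metadata("img_007", display_moved=True, face_seen=False))
```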
As described herein, the memory resource 792 may be encoded with executable instructions for image localization based on perceived interest.

The instructions 794, when executed by a processing resource such as the processing resource 791, may include instructions to determine, by a controller coupled to a mobile device that includes a plurality of images, that a position of a display coupled to the mobile device changes when one or more of the plurality of images is viewable on the display. In some examples mentioned herein, a computing device may be configured by a user to include a threshold. In a non-limiting example, the user may have configured settings on the computing device to set a threshold that requires the change of the display from the initial position (FIG. 3A) to the subsequent position (FIG. 3B) to occur three or more times before a perceived interest corresponding to the user's desired preference is assigned.

The instructions 796, when executed by a processing resource such as the processing resource 791, may include instructions to assign a respective perceptual interest to each of the respective plurality of images, wherein each respective perceptual interest is based in part on whether the respective plurality of images were already viewable on the display when the position of the display changed. The plurality of images may be assigned different perceptual interests. In some instances, one or more of the images may correspond to a person who has viewed the images (e.g., via facial recognition data received by the computing device).

The instructions 798, when executed by a processing resource such as the processing resource 791, may include instructions to classify the respective plurality of images into a plurality of viewing positions based on the assigned respective perceived interests, wherein the plurality of viewing positions are viewable on the display of the mobile device. The plurality of viewing positions may include a discard viewing position, a preferred viewing position, and/or a facial recognition viewing position.

Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.

In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure must use more features than are expressly recited in each claim.
Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
The invention relates to virtualizing physical memory in a virtual machine system and discloses a processor including a virtualization system of the processor with a memory virtualization support system to map a reference to guest-physical memory made by guest software executable on a virtual machine which in turn is executable on a host machine in which the processor is operable to a reference to host-physical memory of the host machine. |
1. A processor for executing host software on a host, the host software to control operation of guest software executing on a virtual machine, the processor comprising: a first control register to store a first pointer to a first plurality of page tables for translating a first address to a second address, the first address being in a linear virtual address space of the guest and the second address being in a physical address space of the virtual machine; and a second control register to store a second pointer to a second plurality of page tables for translating the second address to a third address, the third address being in a physical address space of the host.2. The processor of claim 1, wherein the host software is a virtual machine monitor.3. The processor of claim 1, wherein the host software is to control the operation of the guest software based on data stored in a virtual machine control structure.4. The processor of claim 3, further including logic to determine, based on an indicator stored in the virtual machine control structure, whether the processor is to use the second plurality of page tables to translate the second address to the third address.5. The processor of claim 3, wherein the second control register is loaded with the second pointer from the virtual machine control structure.6. The processor of claim 1, wherein the third address is a base address of a page in memory accessible by the processor.7. The processor of claim 6, wherein the second plurality of page tables includes permission information.8. The processor of claim 7, wherein the permission information indicates whether the page is present.9. The processor of claim 7, wherein the permission information indicates whether the page is writable.10. The processor of claim 7, wherein the permission information indicates whether the page is executable.11. A method for executing host software on a host, the host software to control operation of guest software executing on a virtual machine, the method comprising: during execution of the guest software on the virtual machine, using a first plurality of page tables pointed to by a first control register in a processor to translate a first address to a second address, the first address being in a linear virtual address space of the guest software and the second address being in a physical address space of the virtual machine; and during execution of the guest software on the virtual machine, using a second plurality of page tables pointed to by a second control register in the processor to translate the second address to a third address, the third address being in a physical address space of the host.12. The method of claim 11, wherein the host software is a virtual machine monitor.13. The method of claim 11, wherein the host software is to control the operation of the guest software according to data stored in a virtual machine control structure.14. The method of claim 13, further comprising executing, with the processor, an instruction to transfer control of the processor from the host software to the guest software and to load the second control register from the virtual machine control structure.15. The method of claim 11, further comprising walking the second plurality of page tables to translate the second address to the third address.16. The method of claim 11, wherein the third address is a base address of a page in a memory accessible by the processor, and further comprising transferring control of the processor from the guest software to the host software in response to a page fault.17. A system for executing host software on a host, the host software to control operation of guest software executing on a virtual machine, the system comprising: a memory; and a processor including: a first control register to store a first pointer to a first plurality of page tables in the memory for translating a first address to a second address, the first address being in a linear virtual address space of the guest software and the second address being in a physical address space of the virtual machine; and a second control register to store a second pointer to a second plurality of page tables in the memory for translating the second address to a third address, the third address being in a physical address space of the host.18. The system of claim 17, wherein the host software is to control the operation of the guest software according to data stored in a virtual machine control structure in the memory.19. The system of claim 18, wherein the second control register is loaded with the second pointer from the virtual machine control structure.20. The system of claim 17, wherein the third address is a base address of a page in the memory. |
Physical storage in virtualized virtual machine systemThis application is a divisional application, the application number of the parent application is 200610004027.3, the application date is January 13, 2006, and the name of the invention is "physical storage in a virtualized virtual machine system".Background techniqueVirtualization enables a single host with hardware and software support for virtualization to present multiple abstractions of the host, so that the underlying hardware of the host appears to be one or more independently running virtual machines. Therefore, each virtual machine can function as an independent and complete platform. Frequently, virtualization technology is used to co-exist multi-client operating systems and / or other guest software (guestsoftware) and apparently execute simultaneously and apparently independently on multiple virtual machines, but in fact on the same hardware platform Physically executed. Virtual machines can mimic the hardware of the host or alternately present completely different hardware abstractions.The virtualization system may include a virtual machine monitor (VMM) that controls the host. The VMM provides a set of resources (eg, processor, memory, IO device) to the client software running on the virtual machine. The VMM can map some or all components of the physical host to a virtual machine, and can create all virtual components, which are simulated in software in the VMM and included in the virtual machine (eg, virtual IO device). Therefore, it can be said that VMM provides a "virtual bare metal" interface to the client software. VMM uses devices in a hardware virtualization architecture to provide services to virtual machines and provide protection between multiple virtual machines executing on the host.When the client software is executed on the virtual machine, if the client software is executed directly on the hardware platform, then certain instructions executed by the client software (for example, instructions to access peripheral devices) usually directly access the hardware. In a virtualization system supported by VMM, these instructions will cause a conversion to VMM, which is referred to herein as a virtual machine exit. The VMM processes these instructions in the software in a manner suitable for the host hardware and host peripherals, which is consistent with the virtual machine executing the client software. Similarly, certain interrupts and exceptions generated in the host need to be interrupted and managed by the VMM or adapted to the client software through the VMM before being delivered to the client software for service. Then, VMM transfers control to the client software and the virtual machine restarts. The conversion from VMM to client software is referred to here as virtual machine entry (virtual machine entry).It is well known that on most operating systems, programs executed on machines can use virtual address spaces, which are abstractions of the underlying physical memory system. It is well known in the art that when the term virtual is used in the context of memory management such as "virtual address", "virtual address space", "virtual memory address" or "virtual memory space", it refers to the well-known of processor-based systems Technology, usually combined with an operating system, presents the abstraction of the underlying physical memory to processes executed on processor-based systems. 
For example, a process can access virtual, continuous, and linear address space abstractions, and the underlying operating system maps the address space abstractions to non-linear and non-contiguous physical memory. This use of virtualization is different from the use of the same terminology in the context of virtualization. In the latter case, virtualization usually refers to abstractions that simulate physical machines, such as "virtual machine", "virtual bare metal", "virtual hardware" , "Virtual Processor" or "Virtual Network Interface". Based on the context of the term used herein, the intended meaning of the term will be clear to those skilled in the art.Figure 1 shows a process performed on a processor-based system that includes a processor and a memory communicatively coupled to the processor through a bus. Referring to FIG. 1, when the process 105 references the storage unit 110 (process virtual storage space) in its virtual address space 115, the actual address in the physical memory 145 (machine physical memory) of the machine 125 is generated through the memory management 130 140. Memory management can be implemented in hardware (sometimes incorporated into the processor 120) and software (usually in the operating system of the machine). Among other functions, the memory management 130 maps locations in the virtual address space to locations in the physical memory of the machine. As shown in FIG. 1, the process may have a different memory view than the memory actually available in the physical machine. In the example described in Figure 1, the process runs in a virtual address space from 0 to 1MB, which is actually part of the physical memory mapped by the memory management hardware and software, which itself has from 10 to 11MB Address space; calculate the physical address from the process space address and add the displacement 135 to the process virtual address. A more complex mapping from process virtual memory space to physical memory is possible, for example, the physical memory corresponding to process virtual memory can be divided into parts such as pages and interleaved with pages from other processes in physical memory .The memory can generally be divided into pages, and each page contains a known amount of data, which varies with the implementation, for example, a page may contain 4096 bytes of memory. When the storage units are referenced by the execution process, they are converted into page references. In a typical machine, memory management maps references to pages in process virtual memory to pages in machine physical memory. In general, memory management can use page tables to specify physical page locations corresponding to process space page locations.One aspect of managing client software in a virtual machine environment is memory management. Handling memory management by guest software executing in virtual machines creates complexity for controlling systems such as virtual machine monitors. Consider, for example, a system that executes two virtual machines via virtualization on a host on a 32-bit IA-32 Intelarchitecture platform (IA-32). The platform is described in IA-32Architecture Software Developer ’s Manual (IASoftware Developer ’s Manual) (IA-32 document). The IA-32 platform may contain IA-32 page tables implemented as part of the IA-32 processor. Further, assume that each virtual machine itself presents the abstraction of the IA-32 machine to the client software executing on it. 
The guest software executing on each virtual machine can reference guest-process virtual memory addresses, which are translated into guest-physical memory addresses by the guest's memory management system. The guest-physical memory itself, however, is realized by a further mapping into host-physical memory performed by the virtualization subsystem in the VMM and in the hardware of the host processor. Therefore, references to guest memory by a guest process or by the guest operating system, including, for example, references to the guest IA-32 page-table control registers, must be intercepted by the VMM and cannot be passed directly to the host's IA-32 page tables without further processing, because guest-physical storage does not directly correspond to host-physical storage but is further remapped by the host's virtualization system.BRIEF DESCRIPTION OF THE DRAWINGS: Figure 1 depicts the relationship between a process and physical memory (prior art). Figure 2 abstractly illustrates the relationship between a virtual machine and a host in an embodiment. Figure 3 depicts a high-level structure of a virtual machine environment in an embodiment. Figures 4a and 4b illustrate processing in a virtual machine environment in an embodiment. Figure 5 depicts the use of extended paging tables for address calculation in an embodiment. Figure 6 illustrates the use of hierarchical extended paging tables for address calculation in an embodiment. Figure 7 depicts the extended-page-table base address pointer in one embodiment. Figure 8 depicts an extended-paging-table entry in an embodiment.DETAILED DESCRIPTION: Figure 2: Figure 2 depicts the relationship between one or more virtual machines executing on a host, particularly with regard to guest memory mapping, in an embodiment. Figure 2 illustrates how guest-physical memory is remapped through the host's virtualization system. Each virtual machine, virtual machine A 242 and virtual machine B 257, presents a virtual processor 245 and 255, respectively, to the guest software running on it. Each machine provides an abstraction of physical memory to the guest operating system or other guest software, namely guest-physical memory 240 and 250, respectively. As the guest software executes on virtual machines 242 and 257, it is actually executed by the host 267 on the host processor 265 using host-physical memory 260.As shown in FIG. 2, in this embodiment the guest-physical memory 240, which appears as a physical memory space starting at address 0 in virtual machine A 242, is mapped to a contiguous region 270 of the host-physical memory 260. Similarly, the guest-physical memory 250 in virtual machine B 257 is mapped to a different portion 275 of the host-physical memory 260. As shown in Figure 2, the host might have 1024 MB of host-physical memory. If each virtual machine 242 and 257 is allocated 256 MB of memory, one possible mapping would be to allocate the range 128-384 MB to virtual machine A 242 and the range 512-768 MB to virtual machine B 257. Virtual machines 242 and 257 both reference a 0-256 MB guest-physical address space, and only the VMM knows that each virtual machine's address space maps to a different part of the host-physical address space.The virtual machine and memory map shown in FIG. 2 are representative of only one embodiment.
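For the simple contiguous allocation just described, the guest-physical to host-physical remapping that only the VMM knows about amounts to adding a per-virtual-machine base offset and checking bounds. The short C sketch below illustrates only that idea; the names (vm_region, gpa_to_hpa) and the error convention are assumptions for illustration and are not part of the embodiment, which in the general case may use discontiguous, page-granular mappings as noted next.

```c
#include <stdint.h>

#define MB (1024ull * 1024ull)

/* One contiguous guest-physical region and where it lives in host-physical
   memory. Names and layout are illustrative only. */
struct vm_region {
    uint64_t guest_base;  /* guest-physical start, 0 for both VMs in FIG. 2 */
    uint64_t size;        /* 256 MB in the example                          */
    uint64_t host_base;   /* 128 MB for VM A, 512 MB for VM B in the example */
};

/* Translate a guest-physical address to a host-physical address for the
   simple contiguous case. Returns (uint64_t)-1 for an out-of-range address. */
uint64_t gpa_to_hpa(const struct vm_region *r, uint64_t gpa)
{
    if (gpa < r->guest_base || gpa >= r->guest_base + r->size)
        return (uint64_t)-1;
    return r->host_base + (gpa - r->guest_base);
}

/* Example: VM A from FIG. 2 -- guest-physical 0-256 MB mapped at host 128 MB,
   so gpa_to_hpa(&vm_a, 0x1000) yields 128 MB + 0x1000. */
static const struct vm_region vm_a = { 0, 256 * MB, 128 * MB };
```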
In other embodiments, the actual number of virtual machines executing on the host may vary from one to many, and the actual memory size may vary from virtual machine to virtual machine. This example describes a simple, contiguous memory allocation to each virtual machine. In the more general case, the physical memory pages allocated to a virtual machine may be discontiguous and may be distributed throughout host-physical memory, interleaved with each other and with pages belonging to the VMM and to other host processes.A processor-based system that is presented as a virtual machine in a system such as that depicted in FIG. 2 may implement a virtual machine in its full complexity. Thus, for example, a virtual machine may present a full view of guest-physical memory to the guest OS and perform memory management for the guest software executing on the virtual machine using the memory management provided by the guest OS and by the virtual processor or other virtual hardware of the virtual machine. In one exemplary embodiment, the virtual machine may present an IA-32 platform, including IA-32 hardware support such as page tables for memory management, to the guest OS, while itself actually executing on a host platform that is also an IA-32 platform including IA-32 hardware for memory management. Without an additional mechanism, the virtualization system in this embodiment must implement a physical-memory virtualization algorithm in the VMM using, as one possible solution, IA-32 page table shadowing to remap, partition, and protect physical memory. Thus, for example, when the guest software attempts to access the virtual machine's IA-32 page tables, the VMM must overlay the functions required for virtualization (for example, remapping of physical addresses) onto the functions required by the guest OS.To do this, the VMM must trap various events surrounding the guest software's use of the paging mechanism. As outlined in the IA-32 documentation, these include writes to the control registers of the IA-32 memory management system (for example, CR0, CR3, and CR4), writes to the model-specific registers (MSRs) associated with paging and memory access (for example, the memory type range registers (MTRRs)), and the handling of certain exceptions (e.g., page faults). This use of the IA-32 page tables to virtualize physical memory is complex and incurs a substantial performance overhead.Figure 3: Figure 3 illustrates one embodiment of a virtual machine environment 300. In this embodiment, a processor-based platform 316 may execute a VMM 312. The VMM, though typically implemented in software, may emulate and export a virtual bare-metal interface to higher-level software. Such higher-level software may comprise a standard OS, a real-time OS, or a stripped-down environment with limited operating system functionality that, in some embodiments, may not include OS facilities typically available in a standard OS. Alternatively, for example, the VMM 312 may be run within, or using the services of, another VMM. A VMM may be implemented, for example, in hardware, software, firmware, or a combination of various techniques in some embodiments.Platform hardware 316 may be a personal computer (PC), mainframe, handheld device such as a personal digital assistant (PDA) or "smart" mobile phone, portable computer, set-top box, or other processor-based system. The platform hardware 316 includes at least one processor 318 and memory 320.
The processor 318 may be any type of processor capable of executing programs, such as a microprocessor, a digital signal processor, a microcontroller, or the like. In an embodiment, the processor may include microcode, programmable logic, or hard-coded logic for execution. Although FIG. 3 shows only one such processor 318, there may be one or more processors in the system of the embodiment. In addition, the processor 318 may include multiple cores, support multi-threading, and so on. The memory 320 may include a hard disk, a floppy disk, a random access memory (RAM), a read only memory (ROM), a flash memory, any combination of the above devices, or any other type of machine media that can be read by the processor 318 in various embodiments. The memory 320 may store instructions and / or data for program execution and other method embodiments.VMM312 presents one or more virtual machine abstractions to client software, which can provide the same or different abstractions to various clients. Figure 3 shows two virtual machines, 302 and 314. The client software such as the client software 303 and 313 running on each virtual machine may include a client OS such as the client OS 304 or 306 and various client software applications 308 and 310. The client software 303 and 313 can access physical resources (eg, processor registers, memory, and I / O devices) within the virtual machine on which the client software 303 and 313 runs and performs other functions. For example, depending on the processor architecture and platform presented in the virtual machines 302 and 314, the client software 303 and 313 are expected to access all registers, caches, structures, I / O devices, memory, and so on.In one embodiment, the processor 318 controls the operation of the virtual machines 302 and 314 in accordance with the data stored in the virtual machine control structure (VMCS) 324. VMCS324 is a structure that can contain the status of client software 303 and 313, MM312 status, execution control information indicating how VMM312 wants to control the operation of client software 303 and 313, information to control the conversion between VMM 312 and virtual machine, and so on. The processor 318 reads information from the VMCS 324 to determine the execution environment of the virtual machine and restrict its behavior. In one embodiment, VMCS 324 is stored in the memory 320. In some embodiments, multiple VMCS structures are used to support multiple virtual machines.VMM 312 may need to manage physical memory accessible by guest software running in virtual machines 302 and 314. In one embodiment, to support physical memory management, the processor 318 provides an extended page table (EPT) mechanism. In this embodiment, the VMM 312 may include a physical memory management module 326 that provides values for the domain associated with the physical memory virtualization that needs to be provided before switching control to the virtual machine 302 or 314. These fields are collectively called EPT control. The EPT control may include, for example, an EPT activation indicator that specifies whether the EPT mechanism should be activated and one or more EPT table configuration controls indicating the form and semantics of the physical memory virtualization mechanism. These will be discussed in detail below. 
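As a rough illustration of the EPT controls just introduced — an activation indicator plus configuration fields such as a pointer to the EPT tables — the C sketch below shows one possible shape of such fields as a VMM might populate them before transferring control to a virtual machine. The structure and field names are assumptions chosen for illustration; the actual layout of the VMCS and of the EPT controls is architecture-specific and is discussed further below.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative-only view of the EPT controls described above: an activation
   indicator and configuration fields, populated by the VMM's physical memory
   management module before a VM entry. Field names are hypothetical. */
struct ept_controls {
    bool     ept_enabled;   /* EPT activation indicator                              */
    uint64_t eptp;          /* base of the first-level EPT table (host-physical)     */
    uint32_t table_levels;  /* example configuration control: hierarchy depth        */
};

/* Sketch of the VMM populating the controls for one virtual machine. */
void configure_ept(struct ept_controls *ctl, uint64_t ept_table_base)
{
    ctl->ept_enabled  = true;
    ctl->eptp         = ept_table_base; /* host-physical address of the EPT tables */
    ctl->table_levels = 3;              /* e.g., a three-level hierarchy as in FIG. 6 */
}
```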
In addition, in one embodiment, the EPT table 328 indicates physical address translation and protection semantics, and the VMM 312 may place them on the client software 303 and 313.In one embodiment, the EPT control is stored in VMCS324. Alternatively, EPT control may be present in the processor 318, the memory 320, and the processor 318 in combination or any other storage unit. In one embodiment, separate EPT control is maintained for each virtual machine 302 and 314. Alternatively, the same EPT control is maintained for both virtual machines and updated by VMM 312 before each virtual machine logs in.In one embodiment, the EPT table 328 is stored in the memory 320. Alternatively, the EPT table 328 may exist in the processor 318, the combination of the memory 320 and the processor 318, or any other storage unit. A separate EPT table 328 is maintained for each virtual machine 302 and 314. Alternatively, the same EPT table 328 is maintained for the two virtual machines 302 and 314 and updated by the VMM 312 before each virtual machine logs in.In one embodiment, the processor 318 includes EPT access logic 322, which is responsible for deciding whether to activate the EPT mechanism according to the EPT activation indicator. If the EPT mechanism is activated, the processor translates the client-physical address into a main-physical address based on the EPT controller and EPT table 328.In an embodiment, where the system 300 includes multiple processors or multiple thread processors, each logical processor is associated with a separate EPT access logic 322, and the VMM 312 configures an EPT table 328 for each logical processor And EPT control.Resources accessed through client software (eg, 303, including client OS 304 and application 308) may be classified as "privileged" or "non-privileged". For priority resources, VMM312 promotes the functionality required by client software while retaining ultimate control over these priority resources. Further, each client software 303 and 313 is expected to handle various platform events such as exceptions (eg, page faults, general protection faults, etc.), interrupts (eg, hardware interrupts, software interrupts) and platform events (eg, initialization (INIT) And System Management Interrupt (SMI)). Some of these platform events are "priority" because they must be handled by VMM 312 to ensure the correct operation of virtual machines 302 and 314 and provide protection in the client software. Both guest operating systems and guest applications attempt to access priority resources and can both cause or experience priority events. Priority platform events and attempts to access priority resources are collectively referred to herein as "priority events" or "virtualization events."Figures 4a and 4b: In one embodiment, the operation of the virtual machine environment such as described previously and described in Figure 3 is described by the processes shown in Figures 4a and 4b. 4a depicts the operation of a VM environment that processes priority events that occur in client software in one embodiment; and the operation of an embodiment that processes non-priority events through client software. Figure 4b describes the operation of the VM environment in an embodiment that is particularly relevant to extended paging tables, in particular to the client software's access to client-physical memory and the EPT mechanism in the VMM management hardware in this embodiment. 
FIGS. 4a and 4b do not, for example, depict all of the components or all of the operations that may occur in an environment such as that of FIG. 3; this is solely for clarity of presentation. Although only a small set of components and a few specific operations are shown in FIGS. 4a and 4b, a VM environment in an embodiment may comprise many other components, and many other operations may take place in such an embodiment.Consider first FIG. 4a. FIG. 4a depicts one exemplary set of operations of the guest software 303 executing on the virtual machine abstraction 302, and of the platform hardware 316 previously described in FIG. 3. The operations are depicted within blocks indicating where in the system they occur (e.g., in the VMM 312, in the guest software 303, etc.). In addition to the other components of the VM environment previously described, the VM abstraction 302 may store virtual machine state and other state information for the guest software 303 at 412, and may also provide other resources such as a virtual network connection or a set of general registers, to name just two of many possible examples. Of course, the platform hardware 316 on which the VM executes provides the actual physical resources that realize the VM state, the guest state, and the other VM resources. The platform hardware includes memory 320, VMCS 324, and processor 318.At 440, the guest software 303 accesses a non-privileged resource 442. Non-privileged resources do not need to be controlled by the VMM 312 and can be accessed directly by the guest software, which continues without invoking the VMM 312, allowing the guest to continue operation at 445 after accessing the non-privileged resource 442. Non-privileged platform events are likewise handled without VMM 312 intervention (not shown in FIG. 4a).At 405, the guest software 303 attempts to access a privileged resource and/or experiences a privileged platform event. When such a privileged event occurs at 405, control may be transferred 407 to the VMM 312. The transfer of control 407 from the guest software to the VMM 312 is referred to herein as a virtual machine exit. After facilitating the resource access or otherwise handling the privileged event appropriately, the VMM 312 may return control to the guest software at 432, which then resumes operation at 435. The transfer of control 432 from the VMM 312 to the guest software is referred to as a virtual machine entry. In one embodiment, the VMM 312 initiates a virtual machine entry by executing an instruction specially designed to trigger the transition (referred to herein as a virtual machine entry instruction), 430.In one embodiment, when a virtual machine exit occurs, components of the processor state used by the guest software are saved, 410, components of the processor state required by the VMM 312 are loaded, and execution resumes in the VMM 312 at 420. In one embodiment, the processor state components used by the guest software are stored in a guest-state area of the VMCS 324, and the processor state components required by the VMM 312 are stored in a monitor-state area of the VMCS 324. In one embodiment, when the transfer from the VMM 312 to the guest software occurs, the saved processor state components are restored at 425 and control is returned to the virtual machine 302 or 314 at 430.
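The exit/entry sequence just described — save the guest's processor state to the guest-state area, load the monitor's state, service the privileged event in the VMM, then restore the guest state at virtual machine entry — can be summarized in a short C sketch. The sketch is illustrative only; the types and function names (cpu_state, vmcs, handle_privileged_event, virtual_machine_entry) are assumptions and do not correspond to actual hardware interfaces, whose state areas are far richer than shown.

```c
#include <string.h>

/* Illustrative-only types: real guest-state and monitor-state areas of a
   virtual machine control structure are architecture-defined. */
struct cpu_state { unsigned long regs[16]; unsigned long ip; };

struct vmcs {
    struct cpu_state guest_state;    /* guest-state area, saved at VM exit (410)    */
    struct cpu_state monitor_state;  /* monitor-state area, loaded at VM exit (420) */
};

static struct cpu_state current;     /* stand-in for the live processor state */

static void save_state(struct cpu_state *dst)       { memcpy(dst, &current, sizeof current); }
static void load_state(const struct cpu_state *src) { memcpy(&current, src, sizeof current); }

/* Stand-ins for VMM work and for resuming the guest; bodies are placeholders. */
static void handle_privileged_event(struct vmcs *v) { (void)v; /* facilitate access, etc. */ }
static void virtual_machine_entry(void)             { /* 430/432: control returns to guest */ }

/* One exit/entry round trip corresponding to 405-435 of FIG. 4a. */
void vm_exit_entry_cycle(struct vmcs *v)
{
    save_state(&v->guest_state);     /* 410: save processor state used by the guest   */
    load_state(&v->monitor_state);   /* 420: execution resumes in the VMM             */
    handle_privileged_event(v);      /* VMM services the privileged event             */
    load_state(&v->guest_state);     /* 425: restore the saved guest state            */
    virtual_machine_entry();         /* 430: control returned to the virtual machine  */
}
```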
Next, consider FIG. 4b. As mentioned previously, FIG. 4b depicts those operations of the VM environment described above and depicted in FIG. 4 that relate specifically to the extended paging tables, to guest-software access to guest-physical memory, and, in one embodiment, to the management of the EPT mechanism in hardware. As noted above, for clarity, FIG. 4b does not depict all of the components or all of the operations that may be present in a VM environment in an embodiment. Although only a small set of components and a few specific operations are shown in FIGS. 4a and 4b, a VM environment in an embodiment may comprise many other components, and many other operations may occur in such an embodiment.The components of the VM environment of the embodiment depicted in FIG. 4b are the guest software 303, the VM 302, the VMM 312 with its physical memory management module 326, and the platform hardware or physical machine 316. The platform hardware further includes a memory 320, which in this embodiment includes a set of EPT tables 328 and the VMCS 324, and a processor 318 with EPT access logic 322. In general, as shown in FIG. 4b at 450, use of the EPT facilities in the platform hardware may be initiated by the guest software, for example when the guest software 303 accesses guest-physical memory. The guest-physical memory access refers to the abstraction of memory 451 provided by the VM 302, which in turn is realized by the physical machine 316. If the EPT mechanism is enabled, the platform hardware 316 may process the reference to guest-physical memory using the EPT access logic 322 and the EPT tables 328 to translate the access to guest-physical memory into an access to host-physical memory 320. The details of EPT operation are discussed below with reference to FIGS. 5 and 6.The EPT mechanism itself is configured by the VMM 312, which configures the EPT tables 328 and the EPT controls, which may be stored in the VMCS 324. In this embodiment, configuration of the EPT mechanism may be performed by the VMM 312 as part of the operation of the physical memory management module 326, after the processing of a privileged event 405 in the VMM 312 and before the VM entry 430. In configuring the EPT mechanism, the VMM 312 may update the EPT tables 328 and the EPT controls in order to enable, disable, or otherwise control the EPT mechanism, 460.Of course, many other forms of processing are possible for the use of extended paging tables in conjunction with a VM environment, for example different locations for the EPT controls and the EPT tables 328 as previously discussed with reference to FIG. 3, multiple processors, multiple threads, multiple guests, and combinations of these variations, among others.Figure 5: Figure 5 shows one example of processing using the extended paging tables introduced above to ultimately compute a host-physical address when guest software in a virtual machine references a guest virtual address. The example depicted shows guest software running on an IA-32 platform using simple 32-bit virtual addressing and a simple page-table format. One skilled in the art will easily be able to extend this example to comprehend, for example, other paging modes (e.g., 64-bit addressing in the guest software), other instruction set architectures (e.g., the Intel Itanium architecture, described, for example, in the Intel Itanium Architecture Software Developer's Manual available from Intel Corporation), or other configurations.In FIG. 5, a reference to a guest virtual address 510 is executed by guest software executing in the virtual machine.
A memory management mechanism active in the guest (i.e., configured by the guest operating system) is used to translate that virtual address into a guest-physical address. Each guest-physical address used in the translation, as well as the resulting guest-physical address, is then translated to a host-physical address through the EPT before host-physical memory is accessed. This process is detailed in the following discussion.In this example, the appropriate bits 502 of the CR3 register 520 point to the base of the guest page directory table 560 in guest-physical memory. This value 502 is combined with the upper bits of the guest virtual address 510 (appropriately adjusted, according to IA-32 semantics, by multiplying by 4 because each entry in the table is 4 bytes in this example) to form the guest-physical address 512 of a page directory entry (PDE) in the guest's PD table 560. This value 512 is translated through the EPT tables 555 to form the host-physical address 504 of the page directory entry, and the processor uses the host-physical address 504 to access the PDE.Information from the PDE includes the base address 522 of the guest page table 570. This guest-physical address 522 is combined with bits 21:12 of the guest virtual address 510, appropriately adjusted, to form the guest-physical address 532 of a page table entry in the guest page table 570. The guest-physical address 532 is translated through the EPT tables 565 to form the host-physical address 514 of the guest's page table entry (PTE), and the processor uses the host-physical address 514 to access the PTE.Information from the PTE includes the base address 542 of the page being accessed in guest-physical memory. This value is combined with the low-order bits (11:0) of the guest virtual address 510 to form the guest-physical address 552 of the memory being accessed. This value 552 is translated through the EPT tables 575 to form the host-physical address 524 of the memory being accessed.Each time the EPT tables are used to translate a guest-physical address to a host-physical address, the processor also validates that the access is permitted according to controls in the EPT tables, as described below. Additionally, it must be understood that, though they are labeled distinctly in FIG. 5, the EPT tables 555, 565, and 575 may, in one embodiment, be the same set of EPT tables (i.e., a single set of EPT tables is used for all guest-physical to host-physical address translations).Figure 6: Figure 6 depicts another example of processing using the extended paging tables introduced earlier, in which a multi-level EPT table hierarchy is used to perform the final translation from guest-physical address to host-physical address. In the exemplary embodiment shown in FIG. 6, the appropriate bits 602 of the EPT base pointer (EPTP) 620 indicate the host-physical base address of the first-level EPT table 650, which in this embodiment is stored in host-physical memory. The EPTP is discussed in detail below with reference to FIG. 7. In this example, each entry in an EPT table is 8 bytes. Bits 38:30 of the guest-physical address 610 (601) are appropriately adjusted by multiplying by 8 (for example, by shifting the value left by 3 bits) to obtain the adjusted upper guest-physical address bits 603.
The EPT table base address value 602 is combined (added) with the adjusted upper guest-physical address bits 603 to form the host-physical address 604 of an EPT table entry 651 in the first-level EPT table 650. Exemplary formats of entries such as 651 in the first-level EPT table 650, and of entries in the other EPT tables 660 and 670, are discussed below with reference to FIG. 8.Part of the EPT table entry 651 is the base address 612 of the next-level EPT table 660. A second adjusted address portion 613 is formed from bits 29:21 (611) of the guest-physical address 610. The adjusted value 613 is combined (added) with the base address 612 to form the host-physical address 614 of an EPT table entry 661 in the next-level EPT table 660. The processor uses the host-physical address 614 to access the EPT table entry 661.Information from the EPT table entry 661 includes the base address 622 of the final EPT table 670. The base address 622 is combined with bits 20:12 (623) of the guest-physical address 610, after appropriate adjustment, to form the host-physical address 624 of an EPT table entry 671 in the final EPT table 670. The processor uses this host-physical address 624 to access the EPT table entry.Information from the EPT table entry 671 includes the base address 632 of the accessed page in host-physical memory 690. This page base address 632 is combined with the low-order bits (11:0) of the guest-physical address 610 to form the final host-physical address 634 of the memory being accessed.In the exemplary embodiment shown in FIG. 6, the EPT tables are hierarchical, and in form they are much like conventional multi-level page tables. Moreover, in this example, each EPT table entry in each EPT table is 8 bytes in size; although this size may differ in other embodiments, one skilled in the art will understand how the table accesses would change accordingly. In this example, the size of each EPT table is 4KB. In other embodiments, different table sizes may be used; moreover, all of the tables in a hierarchy such as that depicted in FIG. 6 need not be the same size. A change in table size changes the number of bits of the guest-physical address used to index into the EPT table at the next level. It will be obvious to one skilled in the art that many other EPT table configurations are possible.The hierarchical arrangement depicted in the figure shows three hierarchical levels, in which two of the EPT tables, 650 and 660, serve as indices to the lower-level EPT tables 660 and 670, respectively. In other embodiments there may be fewer levels, such as two, or more levels, such as four or more, in such a hierarchy. In general, the number of hierarchical levels may vary depending at least in part on one or more of the number of bits in the guest-physical address, the size of each table, and the number of bytes in each table entry. The guest-physical address in the example of FIG. 6 is 39 bits in size. In other embodiments the guest-physical address may be of a different size, and this change in size may require a change in the number of EPT table levels needed for the translation. For example, if the guest-physical address is 48 bits, a 4-level EPT table hierarchy is required for the translation (assuming a 4KB EPT table at each level and 8-byte EPT table entries in each EPT table).In the embodiment shown in FIG. 6, the EPT controls include a single field, the EPT pointer (EPTP), which contains the base address of the first-level EPT table. In this example, each EPT table is 4KB in size.
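The three-level walk just described can be expressed compactly in code. The C sketch below is an illustration only, assuming the example parameters above (4KB tables, 8-byte entries, index fields at bits 38:30, 29:21, and 20:12, and a 4KB final page); the function and helper names are placeholders, and present/permission checks and super pages, which are covered with reference to FIGS. 7 and 8 below, are omitted.

```c
#include <stdint.h>

/* Illustrative sketch of the three-level walk of FIG. 6. During guest
   execution, a translation like this would be applied to every guest-physical
   address produced by the guest's own page-table walk (as in FIG. 5). */

#define EPT_INDEX_MASK   0x1FFull     /* 9 index bits per level (512 entries per 4KB table) */
#define PAGE_OFFSET_MASK 0xFFFull     /* bits 11:0, offset into the 4KB page                */
#define EPT_ADDR_MASK    (~0xFFFull)  /* 4KB-aligned base address carried in an entry       */

/* read_entry is a caller-supplied hook that returns the 8-byte EPT entry
   stored at a given host-physical address. */
uint64_t ept_translate(uint64_t (*read_entry)(uint64_t host_phys),
                       uint64_t eptp_base, uint64_t guest_phys)
{
    static const unsigned shift[3] = { 30, 21, 12 };  /* bits 38:30, 29:21, 20:12 */
    uint64_t table = eptp_base;                       /* first-level table from the EPTP */

    for (int level = 0; level < 3; level++) {
        uint64_t index = (guest_phys >> shift[level]) & EPT_INDEX_MASK;
        uint64_t entry = read_entry(table + index * 8); /* "multiply by 8": 8-byte entries */
        table = entry & EPT_ADDR_MASK;  /* next-level table base, or final page base */
    }
    /* `table` now holds the base of the accessed page in host-physical memory. */
    return table | (guest_phys & PAGE_OFFSET_MASK);
}
```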
Figure 7: As shown in the exemplary embodiment depicted in FIG. 7, the EPT base address pointer (EPTP) includes the bits used to form the base address (in host-physical memory) of the first-level EPT table described above with reference to FIG. 6. In the example depicted in Figure 7, bits 59:12 form the base address, and bits 11:0 and 63:60 are assumed to be 0. Of course, the widths of the various bit fields may vary in other embodiments; for example, the base address field will vary depending on the number of address bits in a particular architecture or implementation. The remaining bits in the EPTP register may be used for other purposes in other embodiments. In one embodiment, the EPTP register may be accessed only via a virtual machine entry or a virtual machine exit. In this embodiment, the EPTP register in the processor is loaded from an EPTP field in the VMCS at virtual machine entry, and the EPT mechanism is active while the guest software runs. As indicated above, this activation (and the loading of the EPTP field) may be controlled by other control bits in the VMCS or elsewhere.Figure 8: This figure depicts an exemplary embodiment of the format of the entries in the EPT tables. In this example, each entry in an EPT table is 8 bytes in size. In a practical example, each EPT table is 4KB in size, meaning there are 512 EPT table entries per EPT table page. As shown in the example of FIG. 8, each EPT table entry contains the base host-physical address (ADDR) of the next-level EPT table or of a page in memory, along with permission and other configuration information. As mentioned above, the widths of the various bit fields may differ in other embodiments; for example, the width of ADDR may change depending on the number of address bits in a particular architecture or implementation. Figure 8 depicts only two permission bits, Present and Writable. In other embodiments, other permission and configuration information may be present in each EPT table entry. For example, in one embodiment, a permission bit indicates whether the page of memory is executable (i.e., whether the contents of the page may be fetched and interpreted as instructions by the processor).The EPT tables may take a variety of formats. For example, they may be implemented as simple hierarchical tables as shown in FIG. 6. Alternatively, they may be single-level page tables (where the size of the first-level EPT table dictates the maximum size of the guest-physical address space), or they may be some form of hashed data structure. Myriad other possible configurations in other embodiments will be apparent to one skilled in the art.The EPT tables may support one or more page sizes in host-physical memory. In one embodiment, each entry in each EPT table includes a super-page bit that indicates that the EPT table walk should stop at that point and that the address information in that entry, together with the remaining bits of the guest-physical address, forms the host-physical memory address. In the example shown in FIG. 6, for instance, if the super-page bit is set in the EPT table 660, the resulting page in host-physical memory is 2MB in size, and the final host-physical address is formed by combining bits 20:0 of the guest-physical address 610 with the address bits from the EPT table 660.In some embodiments, the extended paging tables and the EPT address-translation mechanism may be enabled by a virtual machine entry and disabled and invalidated by a virtual machine exit.
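The bit layouts described above with reference to FIGS. 7 and 8 lend themselves to simple mask definitions. The C fragment below is illustrative only, using the example field positions given in the text (EPTP base address in bits 59:12; an 8-byte entry carrying an address field plus Present and Writable permission bits); the exact positions of the permission and super-page bits are not specified above, so the positions chosen here are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

/* EPTP of FIG. 7: bits 59:12 hold the base address of the first-level EPT
   table; bits 11:0 and 63:60 are assumed zero in the example. */
#define EPTP_BASE_MASK  0x0FFFFFFFFFFFF000ull
static inline uint64_t eptp_base(uint64_t eptp) { return eptp & EPTP_BASE_MASK; }

/* EPT entry of FIG. 8: an address field (ADDR) plus permission/configuration
   bits. The text names Present and Writable (and, in some embodiments, an
   execute permission and a super-page bit); the bit positions below are
   assumptions chosen only for this sketch. */
#define EPT_PRESENT    (1ull << 0)
#define EPT_WRITABLE   (1ull << 1)
#define EPT_SUPERPAGE  (1ull << 7)   /* assumed position */
#define EPT_ADDR_MASK  0x0FFFFFFFFFFFF000ull

static inline bool     ept_present(uint64_t e)   { return (e & EPT_PRESENT)   != 0; }
static inline bool     ept_writable(uint64_t e)  { return (e & EPT_WRITABLE)  != 0; }
static inline bool     ept_superpage(uint64_t e) { return (e & EPT_SUPERPAGE) != 0; }
static inline uint64_t ept_addr(uint64_t e)      { return e & EPT_ADDR_MASK; }
```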
Therefore, as a result, the EPT mechanism may not be used by client software or VMM software to manage its own address translation. Moreover, in these embodiments, the EPT mechanism may be different from and independent of other conventional memory page management mechanisms available to client or host software, such as the IA-32 paging table in the IA-32 embodiment, although EPT operation may utilize conventional Features of the page management mechanism. Therefore, in contrast to the execution of client software using virtualization and EPT mechanisms of the host, the organization and operation of the EPT table may be completely different from other page conversion tools provided by the processor for conventional program execution and running directly on the host . In one embodiment, the EPT mechanism may utilize a table in the same format as that used by the conventional page management mechanism of this embodiment, which table is available to clients and VMM software. However, the tables that control the EPT mechanism may still be different from those that control the translation from client-virtual addresses to guest-physical addresses and those that control the translation from master-virtual addresses to master-physical addresses.Although the examples provided may describe providing support for physical memory virtualization in a virtual machine system in the context of execution units and logical circuits, other embodiments may be implemented using software methods. Some embodiments are provided as software program products or software that may include a machine or machine-readable medium on which instructions have been stored, and the processes of this embodiment are performed when the instructions are accessed by the machine. In other embodiments, the process may be performed by specific hardware components that contain hard-wired logic for performing the process, or by any combination of programming components and custom hardware components.In the previous description, for the sake of explanation, many specific details were clarified in order to provide a complete understanding of the described embodiments, however, those skilled in the art will realize that many other embodiments can be implemented without these specific details.Some parts described in detail above are given in terms of algorithms and symbolic representations of operations on data bits in processor-based systems. These algorithm descriptions and representations are the means used by those skilled in the art to most effectively pass on their work to other technicians in the art. The operations are those operations that require physical manipulations of physical quantities. These physical quantities may use electrical, magnetic, optical, or other physical signals that can be stored, transmitted, combined, compared, and otherwise manipulated. For reasons of common usage, it has often proved convenient to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, etc.However, it should be borne in mind that all these and similar terms are associated with appropriate physical quantities and are merely convenient labels applied to these quantities. 
Unless specifically stated otherwise, for clarity of description, terms such as "execution", "processing", "calculation", "calculation" or "decision" refer to the operation and processing of processor-based systems or similar electronic computing devices, Operate in the memory of a processor-based system and convert data represented as a physical quantity into data similar to other data or other information storage, transmission, or display devices.In the description of this embodiment, reference will be made to the corresponding drawings. In the drawings, the same numerals are used to describe substantially similar components throughout the several views. Other embodiments may be used and structural, logical, and electrical changes may be made. Moreover, it should be understood that the various embodiments, although different, are not necessarily mutually exclusive. For example, specific characteristics, structures, or features described in one embodiment may also be included in other embodiments.Further, the design of the embodiments implemented in the processor can go through various stages, from creation to simulation to production. The data representing the design can represent the design in many methods. First, because it is useful in simulations, hardware description languages or other functional description languages can be used to represent hardware. In addition, circuit-level models with logic and / or transistor gates can be fabricated at some stages of the design process. Moreover, at some stages, most designs reach the level where the data represents the physical placement of various devices in the hardware model. In the case of using conventional semiconductor manufacturing techniques, the data representing the hardware model may be data specifying the presence or absence of various characteristics on different mask layers of the mask used to manufacture the integrated circuit. In any representation of the design, data can be stored in any form of machine-readable medium. Modulating or generating light or electric waves for transmitting such information, a memory or a magnetic or optical memory such as a disk may be a machine-readable medium. Any of these media may "carry" or "indicate" design or software information. A new copy is made when transmitting an instruction or carrying a code or designed electrical carrier to the extent that the electrical signal is copied, buffered, or retransmitted. Therefore, the communication provider or the network provider can make a copy (carrier wave) of the article constituting or representing the embodiment.Embodiments provided as program products may include machine-readable media having data stored thereon, which when accessed by the machine may cause the machine to perform processes according to the claimed subject matter. The machine-readable medium includes, but is not limited to, floppy disk, optical disk, DVD-ROM disk, DVD-RAM disk, DVD-RW disk, DVD + RW disk, CD-R disk, CD-RW disk, CD-ROM disk and magnetic -CD-ROM, ROM, RAM, EPROM, EEPROM, magnetic or optical card, flash memory or other types of media / machine-readable media suitable for storing electronic instructions. 
Moreover, the embodiment can also be downloaded as a program product, in which the program can be transferred from the remote data source to the desired program by embedding the data signal in the carrier wave or other propagation media via a communication link (eg, modem or network connection) device.Many methods are described in the most basic way, but steps can be added or deleted from any method and information can be added or deleted from any description data without departing from the basic scope of the claimed subject matter. Many further modifications and adjustments will be apparent to those skilled in the art. Special embodiments are provided not to limit the claimed subject matter but to illustrate it. The claimed subject matter is not determined by the specific examples provided above but only by the claims that follow. |
Technologies for establishing and managing a high-performance memory region of a solid state drive include reserving a region of a volatile memory of the solid state drive for storage of host data. Memory accesses received from a host may be directed toward the reserved region of the volatile memory or toward a non-volatile memory of the solid state drive. Due to the structure of the volatile memory, memory accesses to the reserved region may exhibit lower access timing relative to memory accesses to the non-volatile memory. As such, the reserved region may be utilized as storage space for journaling and logging of data and/or other applications. Upon shutdown or a power failure event, data stored in the reserved region of the volatile memory is copied to the non-volatile memory and subsequently reinstated to the volatile memory upon the next initialization event. |
WHAT IS CLAIMED IS:1. A solid state drive for managing a high-performance memory region, the solid state drive comprising:a non- olatile memory;a volatile memory; anda drive controller to (i) reserve a region of the volatile memory for storage of host data, (ii) receive a storage access request from a host of a computing system, (iii) determine whether the storage access request is directed to the reserved region of the volatile memory, and (iv) access the reserved region of the volatile memory in response to a determination that the storage access request is directed to the reserved region.2. The solid state drive of claim 1 wherein the drive controller is further to expose the reserved memory region to the host of the computing system.3. The solid state drive of claim 2, wherein to expose the reserved memory region comprises to inform the host of a namespace corresponding to the reserved memory region of the volatile memory.4. The solid state drive of claim 2, wherein to expose the reserved memory region comprises to inform the host of a logical block addressed region corresponding to the reserved region of the volatile memory.5. The solid state drive of claim 2, wherein to expose the reserved memory region comprises to memory map the reserved memory region of the volatile memory for use by the host.6. The solid state drive of claim 1, wherein to determine whether the storage access request is directed to the reserved region of the volatile memory comprises to determine whether the storage access request includes an address that indicates the storage access request is for the reserved region of the volatile memory.7. The solid state drive of claim 1, wherein to determine whether the storage access request is directed to the reserved region of the volatile memory comprises to determine whether the storage access request is directed to a logical block addressed region corresponding to the reserved region of the volatile memory.8. The solid state drive of claim 1, wherein the drive controller is further to reinstate data stored in the non- volatile memory to the reserved region of the volatile memory during an initialization procedure of the solid state drive.9. The solid state drive of claim 8, wherein to reinstate data stored in the nonvolatile memory to the reserved region of the volatile memory comprises to:retrieve data stored in the non- volatile memory;store the data retrieved from the non-volatile memory to the reserved region of the volatile memory; andupdate a logical-to-physical indirection table of the solid state drive based on the storage of the data to the reserved region of the volatile memory.10. The solid state drive of claim 1, wherein the drive controller is further to pre- erase a storage region of the non-volatile memory based on a storage capacity of the reserved region of the volatile memory.11. The solid state drive of claim 10, wherein the drive controller is further to:receive a shutdown request for the solid state drive;retrieve data stored in the reserved region of the volatile memory in response to the shutdown request; andstore the data retrieved from the reserved region of the volatile memory to the storage region of the non- olatile memory of the solid state drive.12. 
The solid state drive of claim 10, further comprising a power fail response circuit to (i) detect a power failure event of the solid state drive, (ii) provide power to the drive controller, the volatile memory, and the non- volatile memory in response to the detection of the power failure event for a period of time, (iii) retrieve, during the period of time, data stored in the reserved region of the volatile memory in response to the detection of the power failure event, and (iv) store, during the period of time, the data retrieved from the reserved region of the volatile memory to the storage region of the non- volatile memory of the solid state drive.13. A method for managing a high-performance memory region of a solid state drive, the method comprising:reserving, by a drive controller of the solid state drive, a region of a volatile memory of the solid state drive for storage of host data;receiving, by the drive controller, a storage access request from a host of a computing system;determining, by the drive controller, whether the storage access request is directed to the reserved region of the volatile memory; andaccessing, by the drive controller, the reserved region of the volatile memory in response to a determination that the storage access request is directed to the reserved region.14. The method of claim 13 further comprising exposing the reserved memory region to the host of the computing system.15. The method of claim 14, wherein exposing the reserved memory region comprises informing, by the drive controller, the host of a namespace or a logical block addressed region corresponding to the reserved memory region of the volatile memory.16. The method of claim 14, wherein to exposing the reserved memory region comprises memory mapping the reserved memory region of the volatile memory for use by the host.17. The method of claim 13, wherein determining whether the storage access request is directed to the reserved region of the volatile memory comprises determining whether the storage access request includes an address that indicates the storage access request is for the reserved region of the volatile memory.18. The method of claim 13, wherein determining whether the storage access request is directed to the reserved region of the volatile memory comprises determining whether the storage access request is directed to a logical block addressed region corresponding to the reserved region of the volatile memory.19. The method of claim 13, further comprising reinstating data stored in a nonvolatile memory of the solid state drive to the reserved region of the volatile memory during an initialization procedure of the solid state drive.20. The method of claim 19, wherein reinstating data stored in the non-volatile memory to the reserved region of the volatile memory comprises:retrieving, by the drive controller, data stored in the non-volatile memory;storing, by the drive controller, the data retrieved from the non- volatile memory to the reserved region of the volatile memory; andupdating, by the drive controller, a logical-to-physical indirection table of the solid state drive based on the storage of the data to the reserved region of the volatile memory.21. The method of claim 13, further comprising pre-erasing a storage region of a non-volatile memory of the solid state drive based on a storage capacity of the reserved region of the volatile memory.22. 
The method of claim 21, further comprising: receiving, by the drive controller, a shutdown request for the solid state drive; retrieving, by the drive controller, data stored in the reserved region of the volatile memory in response to the shutdown request; and storing, by the drive controller, the data retrieved from the reserved region of the volatile memory to a non-volatile memory of the solid state drive.23. The method of claim 21, further comprising: detecting, by a power fail response circuit of the solid state drive, a power failure event of the solid state drive; providing, by the power fail response circuit, power to the drive controller, the volatile memory, and a non-volatile memory of the solid state drive in response to the detection of the power failure event for a period of time; retrieving, by the drive controller and during the period of time, data stored in the reserved region of the volatile memory in response to the detection of the power failure event; and storing, by the drive controller and during the period of time, the data retrieved from the reserved region of the volatile memory to a non-volatile memory of the solid state drive.24. One or more machine-readable storage media comprising a plurality of instructions stored thereon that, when executed, cause a solid state drive to perform the method of any of claims 13-23.25. A solid state drive for managing a high-performance memory region, the solid state drive comprising means for performing the method of any of claims 13-23. |
TECHNOLOGIES FOR MANAGING A RESERVED HIGH-PERFORMANCE MEMORY REGION OF A SOLID STATE DRIVE CROSS-REFERENCE TO RELATED APPLICATION[0001] The present application claims priority to U.S. Utility Patent Application Serial No. 14/843,581, entitled "TECHNOLOGIES FOR MANAGING A RESERVED HIGH-PERFORMANCE MEMORY REGION OF A SOLID STATE DRIVE," which was filed on September 2, 2015.BACKGROUND[0002] Many software applications utilize some form of data journaling or logging to ensure system reliability, data redundancy, catastrophe recovery, and/or to improve the overall functionality of the software. For example, typical database systems often store multiple copies of the managed data, along with associated metadata, to facilitate a rollback to a previous context point in the event of a system failure or unrecoverable error during a write cycle. Similarly, many redundant array of independent disks (RAID) systems buffer writes, along with associated metadata, to improve performance of the RAID system and facilitate recovery of data should an error occur. Such applications typically duplicate the data by saving it to a non-volatile memory, such as a battery-backed volatile memory or non-volatile dual in-line memory module (NVDIMM), or other non-volatile memory solution. However, such solutions are typically expensive relative to the system and/or are unreliable over long periods of time.[0003] Solid state drives (SSDs) are data storage devices that rely on memory integrated circuits to store data in a non-volatile or persistent manner. Unlike hard disk drives, solid state drives do not include moving, mechanical parts, such as a movable drive head and/or drive spindle. As such, solid state drives are generally more durable to physical contact (e.g., bumping) during operation and operate more quietly than traditional disk drives. Due to the reliance on solid state memory devices to store data, solid state drives generally exhibit lower access time relative to typical disk drives.[0004] A typical solid state drive includes a large amount of non-volatile memory, which is oftentimes based on NAND flash memory technology, although NOR flash memory may be used in some implementations. The majority of data stored on a solid state drive is stored in the non-volatile memory for long-term storage. To further improve the access times, some solid state drives may also include a small amount of volatile memory, which is generally embodied as dynamic random-access memory (DRAM) and has faster access times than the relatively slower NAND flash memory. During operation, the solid state drive may utilize the volatile memory as a cache to store data waiting to be written to the non-volatile memory or read from the solid state drive. Additionally, the volatile memory may be used to store a working copy of the metadata used by the solid state drive to control the operations thereof, such as an indirection table, wear leveling information, error correction tables, and so forth.BRIEF DESCRIPTION OF THE DRAWINGS[0005] The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.[0006] FIG. 
1 is a simplified block diagram of at least one embodiment of a solid state drive configured to reserve and manage a high-performance memory region;[0007] FIG. 2 is a simplified block diagram of at least one embodiment of a computing system including the solid state drive of FIG. 1 ;[0008] FIG. 3 is a simplified block diagram of at least one embodiment of an environment that may be established by the solid state drive of FIG. 1;[0009] FIG. 4 is a simplified block diagram of at least one embodiment of a method for initialization that may be executed by the solid state drive of FIG. 1;[0010] FIG. 5 is a simplified block diagram of at least one embodiment of a method for managing storage access requests that may be executed by the solid state drive of FIG. 1;[0011] FIG. 6 is a simplified block diagram of at least one embodiment of a method for handling a power failure event that may be executed by the solid state drive of FIG. 1 ; and[0012] FIG. 7 is a simplified block diagram of at least one embodiment of a method for accessing a reserved high-performance memory region that may be executed by a host of the computing system of FIG. 2.DETAILED DESCRIPTION OF THE DRAWINGS[0013] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.[0014] References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).[0015] The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine- readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).[0016] In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. 
However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.[0017] Referring now to FIG. 1, an illustrative solid state drive 100 includes a drive controller 102, a non- volatile memory 110, and a volatile memory 120. As discussed in more detail below, in use, the drive controller 102 is configured to reserve a region of the volatile memory 120 as a high-performance memory region, which may be used by a host to store important and/or often-accessed data (e.g., during journaling or logging procedures). Because the reserved region is established in the volatile memory 120, the memory accesses to the reserved region are typically of a higher speed and endurance relative to memory accesses of data stored in the non-volatile memory 110.[0018] As discussed in more detail below, the drive controller 102 is configured to expose the reserved region of the volatile memory 120 to host applications, and the host application may access the reserved region by directing memory accesses to the reserved region. Because the data stored in the reserved region of the volatile memory 120 may be of an important nature, the drive controller 102 is also configured to copy any data presently stored in the reserved region of the volatile memory 120 to the non-volatile memory 110 in response to any shutdown requests. Additionally, to protect against unforeseen power interruptions or failures, the solid state drive 100 includes a power fail response circuit 130, which is configured to supply power to components of the solid state drive 100 in the event of a power failure to allow the drive controller 102 to copy data stored in the reserved region of the volatile memory 120 to the non- volatile memory 110 in the event of such unforeseen power interruptions.[0019] The drive controller 102 of the solid state drive 100 may be embodied as any type of control device, circuitry, or collection of hardware devices capable of establishing and managing the reserved region of the volatile memory 120 and performing the functions described herein. In the illustrative embodiment, the drive controller 102 includes a processor or processing circuitry 104, a non-volatile memory controller 106, and a host interface 108. Of course, the drive controller 102 may include additional devices, circuits, and/or components commonly found in a drive controller of a solid state drive in other embodiments.[0020] The processor 104 may be embodied as any type of hardware processor or processing circuitry capable of performing the functions described herein. For example, the processor 104 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. In the illustrative embodiment, the processor 104 controls and manages operation of other components of the drive controller 102.[0021] Similar to the processor 104, the non-volatile memory controller 106 may be embodied as any type of hardware processor, processing circuitry, or collection of devices capable of managing the non-volatile memory 110. 
In use, the non-volatile memory controller 106 manages read and write access to the non-volatile memory 110. Additionally, the non-volatile memory controller 106 may manage various metadata associated with the non-volatile memory 110 including, but not limited to, a logical-to-physical indirection table, which may be temporarily stored in the volatile memory 120 during operation of the solid state drive 100. [0022] In some embodiments, the processor 104 and the non-volatile memory controller 106 may be embodied as the same hardware processor, processing circuitry, and/or collection of devices. Additionally, in some embodiments, the processor 104 and the non-volatile memory controller 106 may form a portion of a System-on-a-Chip (SoC) and be incorporated, along with other components of the drive controller 102, onto a single integrated circuit chip.[0023] The host interface 108 may also be embodied as any type of hardware processor, processing circuitry, input/output circuitry, and/or collection of components capable of facilitating communication of the solid state drive 100 with a host device or service (e.g., a host application). That is, the host interface 108 embodies or establishes an interface for accessing data stored on the solid state drive 100 (e.g., stored in the non-volatile memory 110 or the volatile memory 120). To do so, the host interface 108 may be configured to utilize any suitable communication protocol and/or technology to facilitate communications with the solid state drive 100. For example, the host interface 108 may be configured to communicate with a host device or service using Serial Advanced Technology Attachment (SATA), Peripheral Component Interconnect express (PCIe), Serial Attached SCSI (SAS), Universal Serial Bus (USB), and/or other communication protocols and/or technologies.[0024] The non-volatile memory 110 may be embodied as any type of non-volatile memory capable of storing data in a persistent manner. In the illustrative embodiment, the non-volatile memory 110 is embodied as NAND flash memory, but other types of non-volatile memory may be used in other embodiments including, but not limited to, NOR flash memory, phase change memory (PCM), electrically erasable programmable read-only memory (EEPROM), resistive memory, nanowire memory, three-dimensional cross point memory arrays, ferro-electric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM), spin transfer torque MRAM, and/or other non-volatile memory. It should be appreciated that the non-volatile memory 110 may be formed from multiple, discrete memory devices (e.g., multiple NAND circuit chips or dies), which may be managed and accessed by the non-volatile memory controller 106 in a parallel manner to increase the memory access speed of the solid state drive 100. In such embodiments, a memory band of physical memory may stretch across multiple, discrete memory devices. Additionally, virtual memory blocks may be located on multiple, discrete memory devices of the non-volatile memory 110.[0025] The volatile memory 120 may be embodied as any type of volatile memory capable of storing data while the solid state drive 100 is operational. In the illustrative embodiment, the volatile memory 120 is embodied as dynamic random access memory (DRAM), but may be embodied as other types of volatile memory in other embodiments. 
In the illustrative solid state drive 100, the volatile memory 120 may have an increased capacity relative to typical solid state drives to accommodate the reserved, high-performance memory region. In some embodiments, for example, the size of the reserved memory region of the volatile memory 120 may be substantially similar to the size of the non-volatile memory 110 as described in more detail below. It should be appreciated that due to the type of memory of the volatile memory 120 (e.g., DRAM memory), memory accesses to the volatile memory 120, such as to the reserved memory region, may be faster and exhibit higher endurance than those to the non-volatile memory 110. In addition to the reserved, high-performance memory region, the volatile memory 120 may also store various metadata associated with the data stored in the non-volatile memory 110, such as a logical-to-physical indirection table 322 (see FIG. 3).[0026] As discussed above, the illustrative solid state drive 100 also includes the power fail response circuit 130, which is configured to provide backup power to certain components of the solid state drive 100 for a period of time in the event that power to the solid state drive 100 is unexpectedly lost or interrupted. To do so, the power fail response circuit 130 includes an energy storage 132, which may be embodied as any type of energy storage device or devices capable of providing power to components of the solid state drive 100 for a period of time. In the illustrative embodiment, the energy storage 132 is embodied as a bank of capacitors, which are charged during operation and from which energy can be extracted in the event of a power interruption. In other embodiments, the energy storage 132 may be embodied as, or otherwise include, other types of energy storage devices such as backup batteries. In the illustrative solid state drive 100, the size and/or available power of the energy storage 132 may be greater than backup circuits of typical solid state drives due to the increased amount of data potentially stored on the volatile memory 120 in the reserved region, which may require additional time and/or power to move to the non-volatile memory 110 in the event of a power interruption.[0027] Referring now to FIG. 2, in some embodiments, the solid state drive 100 may form a portion of a computing system 200. For example, the solid state drive 100 may be incorporated into a computing device 202 and/or a remote storage device 204, which may be coupled to the computing device 202. The computing device 202 may be embodied as any type of computing device capable of communicating with the solid state drive 100 and/or the remote storage device 204 to access data stored on the solid state drive 100. For example, the computing device 202 may be embodied as a desktop computer, a mobile computing device, a notebook computer, a laptop computer, an enterprise computing system, a server, a server controller, a router, a switch, a smart appliance, a distributed computing system, a multiprocessor system, and/or any other computing device. As shown in FIG. 2, the illustrative computing device 202 includes a processor 210, an I/O subsystem 212, and memory 214. In some embodiments, the computing device 202 may include the solid state drive 100 as a component of the device 202. Additionally, the computing device 202 may include additional peripheral devices 220 in some embodiments. 
Of course, the computing device 202 may include other or additional components, such as those commonly found in a computer (e.g., various input/output devices), in other embodiments. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, the memory 214, or portions thereof, may be incorporated in the processor 210 in some embodiments.[0028] The processor 210 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 210 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. Similarly, the memory 214 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. The memory 214 is communicatively coupled to the processor 210 via the I/O subsystem 212, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 210, the memory 214, the solid state drive 100 (in embodiments in which the solid state drive 100 forms a portion of the computing device 202), and other components of the computing device 202. For example, the I/O subsystem 212 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations.[0029] The remote storage device 204 may be embodied as any type of data storage device capable of operating remotely from the computing device 202. For example, the remote storage device 204 may be embodied as a remote data server, remote computing device, and/or other electronic device capable of managing access requests to the local solid state drive 100. In some embodiments, for example, the remote storage device 204 may be embodied as a remote data server with which the computing device 202 communicates to access data stored on the solid state drive 100 of the remote storage device 204.[0030] Referring now to FIG. 3, in use, the solid state drive 100 may establish an environment 300. The illustrative environment 300 includes a reserved high-performance region management module 302, a reserved high-performance region notification module 304, a shutdown module 306, and a power failure management module 308. Each of the modules and other components of the environment 300 may be embodied as firmware, software, hardware, or a combination thereof. For example, the various modules, logic, and other components of the environment 300 may form a portion of, or otherwise be established by, the drive controller 102 or other hardware components of the solid state drive 100. As such, in some embodiments, any one or more of the modules of the environment 300 may be embodied as a circuit or collection of electrical devices (e.g., a reserved high-performance region management circuitry, a reserved high-performance region notification circuitry, a shutdown circuitry, a power failure management circuitry, etc.).[0031] The reserved high-performance region management module 302 is configured to manage the establishment and access of the reserved region in the volatile memory 120. 
To do so, the reserved high-performance region management module 302 includes a reservation module 310 and an access management module 312. The reservation module 310 is configured to establish a reserved memory region 320 in the volatile memory 120 upon power up of the solid state drive 100. To do so, the reservation module 310 may identify the region of the volatile memory 120 corresponding to the reserved region 320. For example, the reservation module 310 may identify the logical block addressing (LBA) range corresponding to the reserved region 320 and/or a namespace assigned to the reserved region 320. Upon establishment of the reserved region 320, the reservation module 310 may update a logical-to-physical indirection table 322, which may also be stored in the volatile memory 120 during operation of the solid state drive 100. In some embodiments, the reservation module 310 may also perform some pre-initialization of the reserved region 320. For example, the reservation module 310 may pre-erase the reserved region 320. Additionally, the reservation module 310 may reinstate data to the reserved region 320 that was previously copied to the non-volatile memory 110 in response to a power-down request or a power failure as discussed in more detail below.[0032] The access management module 312 is configured to manage access to the reserved region 320 of the volatile memory 120. For example, a host 350 may request read or write access to the reserved region 320, which is handled by the access management module 312. Read and write requests that are not specifically addressed to the reserved region 320 are directed to the non-volatile memory 110 in the normal manner.[0033] The reserved high-performance region notification module 304 is configured to provide notification to a host 350 (e.g., host applications or devices) that the reserved region 320 of the volatile memory 120 is available for use. Such notification may be embodied as any type of notification or data capable of informing a recipient of the existence of the reserved region 320 and providing access thereto. For example, in some embodiments, the reserved high-performance region notification module 304 may provide the assigned namespace associated with the reserved region 320 or the LBA range corresponding to the reserved region 320.[0034] The shutdown module 306 is configured to respond to shutdown requests received by the solid state drive 100 from a host 350. To do so, the shutdown module 306 is configured to copy data presently saved in the reserved region 320 of the volatile memory 120 to the non-volatile memory 110 upon receipt of the shutdown request. Additionally, the shutdown module 306 may update the logical-to-physical indirection table 322 in response to moving the saved data to the non-volatile memory 110 and/or respond to the shutdown request upon successfully moving the data to the non-volatile memory 110.[0035] The power failure management module 308 is configured to monitor for unexpected power failures or interruptions and provide backup power to components of the solid state drive 100 for a period of time in the event of a power failure or interruption. Additionally, the power failure management module 308 and/or the shutdown module 306 may be configured to move data presently saved in the reserved region 320 to the non-volatile memory 110 in response to the power failure and while the backup power is supplied in order to properly save the data. 
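As a purely illustrative aid, the C sketch below models the bookkeeping just described: the namespace and LBA range that the reservation module 310 could record for the reserved region 320, and a check the access management module 312 could apply to classify a request. The struct layout, field names, and the helper request_targets_reserved_region() are assumptions for illustration only and are not drawn from the disclosure.

```c
/* Hypothetical model of the reserved region 320 as tracked by the drive
 * controller: the namespace and LBA range exposed to the host, plus a
 * helper to decide whether a request falls inside that range. */
#include <stdbool.h>
#include <stdint.h>

struct reserved_region {
    uint32_t namespace_id;  /* namespace assigned to the reserved region, if used */
    uint64_t lba_start;     /* first logical block of the reserved region */
    uint64_t lba_count;     /* number of logical blocks backed by volatile memory */
};

/* Returns true if a storage access request lies entirely inside the reserved
 * region and should therefore be served from the volatile memory 120. */
static bool request_targets_reserved_region(const struct reserved_region *rr,
                                            uint32_t ns, uint64_t lba,
                                            uint64_t nblocks)
{
    return ns == rr->namespace_id &&
           lba >= rr->lba_start &&
           lba + nblocks <= rr->lba_start + rr->lba_count;
}
```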
As discussed above, because an increased amount of data may be saved in the reserved region 320, the power failure management module 308 may be configured to provide an increased amount of power and/or provide power for a longer period of time relative to typical solid state drives to allow the drive controller 102 to fully move the data from the reserved region 320 to the non-volatile memory 110.[0036] As discussed above, the reserved high-performance region management module 302 is configured to respond to storage access requests received from a host 350. The host 350 may be embodied as any type of device or service requesting a read or write access to the solid state drive 100. For example, in some embodiments, the host 350 may be embodied as a software application executed by the computing device 202.[0037] Referring now to FIG. 4, in use, the drive controller 102 of the solid state drive 100 may execute a method 400 for initializing the volatile memory 120. The method 400 begins with block 402 in which the drive controller 102 determines whether the solid state drive 100 has been powered up. If so, the method 400 advances to block 404 in which the drive controller 102 reserves the high-performance region 320 in the volatile memory 120. To do so, as discussed above, the drive controller 102 may determine or identify the region of the volatile memory 120 corresponding to the reserved region 320. For example, the drive controller 102 may identify the LBA range or a namespace corresponding to the reserved region 320. In some embodiments, the drive controller 102 may also update the logical-to-physical indirection table 322 in block 406 to indicate the presence of the reserved region 320 of the volatile memory 120. Additionally, in block 408, the drive controller 102 may pre-erase the physical memory of the volatile memory 120 corresponding to the reserved region 320.[0038] After the reserved region 320 has been established in block 404, the method 400 advances to block 410 in which the drive controller 102 determines whether to reinstate data to the reserved region 320 that had been previously moved from the reserved region 320 to the non-volatile memory 110 in response to a shutdown request or a power failure. If so, the method 400 advances to block 412 in which the drive controller 102 reinstates the previously moved data to the reserved region 320 of the volatile memory 120. To do so, in block 414, the drive controller 102 retrieves the relevant data from the non-volatile memory 110. The drive controller 102 may be configured to identify the relevant data based on entries in the logical-to-physical indirection table 322, based on metadata saved in association with the relevant data, based on whether the data is stored in pre-defined locations of the non-volatile memory 110, and/or other criteria. Subsequently, in block 416, the drive controller 102 saves the retrieved data in the reserved region 320 of the volatile memory 120. Additionally, in some embodiments, the drive controller 102 may update the logical-to-physical indirection table 322 in block 418. Further, in some embodiments, the drive controller 102 may pre-erase remaining sections of the reserved region 320 (i.e., memory regions unused by the moved data) in block 420.[0039] After the previously moved data has been reinstated in block 412 or if no reinstatement is needed, the method 400 advances to block 422. In block 422, the drive controller 102 exposes the reserved region 320 to a host 350 (or multiple hosts). 
To do so, the drive controller 102 may utilize any suitable methodology to expose the reserved region 320 for use by a host 350. For example, in block 424, the drive controller 102 may provide a notification of, or otherwise expose, a namespace corresponding to the reserved region 320. Additionally or alternatively, in block 426, the drive controller 102 may provide a notification of, or otherwise expose, an LBA range corresponding to the reserved region 320.[0040] Referring now to FIG. 5, in use, the drive controller 102 may also execute a method 500 for managing storage access requests received by the solid state drive 100. The method 500 begins with block 502 in which the drive controller 102 determines whether a storage access request has been received. If so, the method 500 advances to block 504 in which the drive controller 102 determines whether the storage access request is directed to the reserved region 320 of the volatile memory 120. If so, the method 500 advances to block 506 in which the drive controller 102 directs the memory access to the reserved region 320. For example, in block 508, the drive controller 102 may write data included in a write request to the reserved region 320 of the volatile memory 120. Alternatively, in block 510, the drive controller 102 may read data requested in a read request from the reserved region 320 of the volatile memory 120.[0041] Referring back to block 504, if the received storage access request is not directed to the reserved region 320 of the volatile memory 120, the drive controller 102 handles the storage access request to the non-volatile memory 110 as normal in block 512. For example, in block 514, the drive controller 102 may write data included in a write request to the non-volatile memory 110. Alternatively, in block 516, the drive controller 102 may read data requested in a read request from the non-volatile memory 110.[0042] After the drive controller 102 has responded to the storage access request of the reserved region 320 in block 506 or the non-volatile memory 110 in block 512, the method 500 loops back to block 502 in which the drive controller 102 continues to monitor for storage access requests. If no storage access request is received in block 502, the method 500 advances to block 518 in which the drive controller 102 determines whether a shutdown request has been received. If so, the method 500 advances to block 520. In block 520, the drive controller 102 moves data presently stored in the reserved region 320 of the volatile memory 120 to the non-volatile memory 110. To do so, the drive controller 102 retrieves the data stored in the reserved region 320 in block 522 and stores the retrieved data in the non-volatile memory 110 in block 524. In some embodiments, in block 528, the drive controller 102 may write the retrieved data to pre-erased blocks of memory of the non-volatile memory 110 configured in single-level cell (SLC) mode, which exhibits faster memory access and greater reliability relative to memory regions configured in multi-level cell (MLC) or triple-level cell (TLC) mode. Additionally, in some embodiments, the drive controller 102 may update the logical-to-physical indirection table 322 in block 528 based on the movement of the data from the reserved region 320 to the non-volatile memory 110. 
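To make the flow of blocks 502-528 concrete, the following sketch models the drive as two plain byte arrays (one standing in for the reserved DRAM region, one for a pre-erased NAND backup area) and routes a request either to the reserved region or to the normal non-volatile path. It reuses the reserved_region type from the earlier sketch; the ssd_model and storage_request layouts, the 512-byte block size, and the helper names are illustrative assumptions rather than details taken from the disclosure.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define RESERVED_BYTES 4096   /* toy capacity of the reserved region 320 */
#define BLOCK_SIZE      512   /* assumed logical block size */

struct ssd_model {
    unsigned char dram_reserved[RESERVED_BYTES];  /* reserved region 320 (volatile) */
    unsigned char nand_backup[RESERVED_BYTES];    /* pre-erased backup area (non-volatile) */
    bool backup_valid;                            /* set once the region has been flushed */
};

struct storage_request {
    uint32_t ns;          /* namespace targeted by the request */
    uint64_t lba;         /* starting logical block */
    uint64_t nblocks;     /* number of logical blocks */
    bool is_write;
    unsigned char *data;  /* host buffer of nblocks * BLOCK_SIZE bytes */
};

/* Blocks 504-516: route the request to the reserved region or the normal path. */
static void handle_request(struct ssd_model *ssd, const struct reserved_region *rr,
                           const struct storage_request *req)
{
    if (request_targets_reserved_region(rr, req->ns, req->lba, req->nblocks)) {
        size_t off = (size_t)((req->lba - rr->lba_start) * BLOCK_SIZE);
        size_t len = (size_t)(req->nblocks * BLOCK_SIZE);
        if (off + len > RESERVED_BYTES)
            return;                                           /* outside the toy model */
        if (req->is_write)
            memcpy(ssd->dram_reserved + off, req->data, len); /* block 508 */
        else
            memcpy(req->data, ssd->dram_reserved + off, len); /* block 510 */
    } else {
        /* Blocks 512-516: normal access to the non-volatile memory (not modeled here). */
    }
}

/* Blocks 520-528: on shutdown, copy the reserved region to the backup area and
 * record the move (standing in for the indirection table update). */
static void flush_reserved_region(struct ssd_model *ssd)
{
    memcpy(ssd->nand_backup, ssd->dram_reserved, sizeof ssd->nand_backup);
    ssd->backup_valid = true;
}
```

The same flush_reserved_region() step would also serve the power failure path of FIG. 6 described below, simply running while the energy storage 132 keeps the controller and memories powered.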
Regardless, after the data presently stored in the reserved region 320 has been successfully moved to the non-volatile memory 110, the drive controller 102 may shut down the solid state drive 100 in block 530.[0043] Referring now to FIG. 6, in use, the power fail response circuit 130 of the solid state drive 100 may execute a method 600 for handling and responding to an unexpected power failure or interruption. The method 600 begins with block 602 in which the power fail response circuit 130 determines whether a power failure or interruption has been detected. If so, the method 600 advances to block 604 in which the power fail response circuit 130 provides backup power to components of the solid state drive 100, such as the drive controller 102, the non-volatile memory 110, and the volatile memory 120. For example, in block 606, the power fail response circuit 130 may supply power to such components from the energy storage 132, which may be embodied as a bank of capacitors or batteries as discussed above.[0044] In response to the backup power, the drive controller 102 is configured to move data presently stored in the reserved region 320 of the volatile memory 120 to the non-volatile memory 110 in block 608. To do so, the drive controller 102 retrieves the data stored in the reserved region 320 in block 610 and stores the retrieved data in the non-volatile memory 110 in block 612. As discussed above in regard to block 528 of the method 500 (see FIG. 5), the drive controller 102 may write the retrieved data to pre-erased blocks of memory of the non-volatile memory 110 configured in single-level cell (SLC) mode. Additionally, in some embodiments, the drive controller 102 may update the logical-to-physical indirection table 322 in block 614 based on the movement of the data from the reserved region 320 to the non-volatile memory 110. Regardless, after the data presently stored in the reserved region 320 has been successfully moved to the non-volatile memory 110, the power fail response circuit 130 may power down the solid state drive 100 by removing the backup power from the powered components of the drive 100 in block 616.[0045] Referring now to FIG. 7, in use, the host 350 may execute a method 700 to access the reserved high-performance memory region 320 of the volatile memory 120 of the solid state drive 100. The method 700 begins with block 702 in which the host 350 receives a notification from the solid state drive 100 informing it of the establishment of the reserved region 320 in the volatile memory 120. As discussed above, such notification may include or otherwise identify the namespace assigned to the reserved region 320 in block 704 and/or the LBA range corresponding to the reserved region 320 in block 706.[0046] Subsequently, in block 708, the host 350 determines whether access to the memory of the solid state drive 100 (i.e., to the non-volatile memory 110 or the reserved region 320 of the volatile memory 120) is desired. If so, the method 700 advances to block 710 in which the host 350 determines whether access to the reserved region 320 is required. As discussed above in detail, the host 350 may utilize the reserved region 320 for important and/or time-critical memory storage activities including, for example, journaling or logging of data. If access to the reserved region 320 is not required, the method 700 advances to block 712 in which the host 350 directs the storage access request to the non-volatile memory 110 as normal. 
However, if access to the reserved region 320 is required, the method 700 advances to block 714 in which the host 350 directs the storage access request to the reserved region 320 of the volatile memory 120. For example, the host 350 may direct the storage access request using the namespace assigned to the reserved region 320 in block 716 and/or the LBA range corresponding to the reserved region 320 in block 718. It should be appreciated that such storage access requests may be embodied as read and/or write requests. Regardless, after the host 350 has directed the storage access request to the reserved region 320 in block 714 or to the non-volatile memory 110 in block 712, the method 700 loops back to block 708 in which the host 350 determines whether additional storage access requests are desired.[0047] Although access to the reserved region 320 of the volatile memory 120 by the host 350 has been described above in regard to use of an assigned namespace or corresponding LBA range, it should be appreciated that other technologies and methodologies may be used to expose the reserved region 320 to the host(s) 350. For example, in some embodiments, the reserved region 320 may be memory mapped using Peripheral Component Interconnect express (PCIe) Base Address Registers (BAR). Such embodiments support saving data to the reserved region 320 with small granularity (e.g., per-byte) by a host 350 (e.g., a software application).[0048] Additionally, although the non-volatile memory 110 has been described herein as supporting direct memory access to store and retrieve data therefrom, the non-volatile memory 110 may be used only for saving data from the reserved region 320 in response to a shutdown request or a power failure event in some embodiments. In such embodiments, the volatile memory 120 may have a substantially similar data capacity as the non-volatile memory 110 (e.g., the non-volatile memory 110 may be reduced to the size of the volatile memory 120 or vice-versa). 
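From the host's point of view, the choice made in blocks 710-718 amounts to addressing a request either inside or outside the exposed namespace and LBA range. The helper below is a minimal sketch of that decision, reusing the reserved_region and storage_request types from the earlier sketches; the default-namespace value and the function name are assumptions made for illustration.

```c
/* Build a host-side storage request, directing it at the reserved region 320
 * for high-performance (e.g., journaling or logging) traffic and at the
 * ordinary NAND-backed LBA space otherwise. */
static struct storage_request make_host_request(const struct reserved_region *rr,
                                                bool needs_high_performance,
                                                uint64_t offset_blocks,
                                                uint64_t nblocks,
                                                unsigned char *buf, bool is_write)
{
    struct storage_request req = {0};
    req.nblocks  = nblocks;
    req.is_write = is_write;
    req.data     = buf;
    if (needs_high_performance) {
        /* Blocks 714-718: target the exposed namespace / LBA range. */
        req.ns  = rr->namespace_id;
        req.lba = rr->lba_start + offset_blocks;
    } else {
        /* Block 712: target the normal non-volatile storage. */
        req.ns  = 0;   /* assumed default namespace */
        req.lba = offset_blocks;
    }
    return req;
}
```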
Additionally, in such embodiments, all storage access requests to the solid state drive 100 would be directed to reserved region 320 of the volatile memory 120 as it would be the only memory region of the solid state drive 100 exposed to the host(s) 350.EXAMPLES[0049] Illustrative examples of the technologies disclosed herein are provided below.An embodiment of the technologies may include any one or more, and any combination of, the examples described below.[0050] Example 1 includes a solid state drive for managing a high-performance memory region, the solid state drive comprising a non-volatile memory; a volatile memory; and a drive controller to (i) reserve a region of the volatile memory for storage of host data, (ii) receive a storage access request from a host of a computing system, (iii) determine whether the storage access request is directed to the reserved region of the volatile memory, and (iv) access the reserved region of the volatile memory in response to a determination that the storage access request is directed to the reserved region.[0051] Example 2 includes the subject matter of Example 1, and wherein the drive controller is further to expose the reserved memory region to the host of the computing system.[0052] Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to expose the reserved memory region comprises to inform the host of a namespace corresponding to the reserved memory region of the volatile memory.[0053] Example 4 includes the subject matter of any of Examples 1-3, and wherein to expose the reserved memory region comprises to inform the host of a logical block addressed region corresponding to the reserved region of the volatile memory.[0054] Example 5 includes the subject matter of any of Examples 1-4, and wherein to expose the reserved memory region comprises to memory map the reserved memory region of the volatile memory for use by the host.[0055] Example 6 includes the subject matter of any of Examples 1-5, and wherein to reserve the region of the volatile memory comprises to reserve a region of a dynamic random- access memory (DRAM) of the solid state drive.[0056] Example 7 includes the subject matter of any of Examples 1-6, and wherein to reserve the region of the volatile memory comprises to update a logical-to-physical indirection table of the solid state drive based on the reserved region of the volatile memory.[0057] Example 8 includes the subject matter of any of Examples 1-7, and wherein to determine whether the storage access request is directed to the reserved region of the volatile memory comprises to determine whether the storage access request includes an address that indicates the storage access request is for the reserved region of the volatile memory.[0058] Example 9 includes the subject matter of any of Examples 1-8, and wherein to determine whether the storage access request is directed to the reserved region of the volatile memory comprises to determine whether the storage access request is directed to a logical block addressed region corresponding to the reserved region of the volatile memory.[0059] Example 10 includes the subject matter of any of Examples 1-9, and wherein to access the reserved region of the volatile memory comprises to write data to or read data from the reserved region of the volatile memory in response to a determination that the memory access is directed to the reserved region.[0060] Example 11 includes the subject matter of any of Examples 1-10, and wherein to access the reserved region of 
the volatile memory comprises to write data to one or more pre- erased blocks of memory of the reserved region of the volatile memory in a single-level cell mode in response to a determination that the memory access is directed to the reserved region.[0061] Example 12 includes the subject matter of any of Examples 1-11, and wherein the drive controller is further to access the non-volatile memory in response to a determination that the storage access request is not directed to the reserved region of the non- volatile memory.[0062] Example 13 includes the subject matter of any of Examples 1-12, and wherein the drive controller is further to reinstate data stored in the non- volatile memory to the reserved region of the volatile memory during an initialization procedure of the solid state drive.[0063] Example 14 includes the subject matter of any of Examples 1-13, and wherein to reinstate data stored in the non- volatile memory to the reserved region of the volatile memory comprises to retrieve data stored in the non-volatile memory; store the data retrieved from the non-volatile memory to the reserved region of the volatile memory; and update a logical-to- physical indirection table of the solid state drive based on the storage of the data to the reserved region of the volatile memory.[0064] Example 15 includes the subject matter of any of Examples 1-14, and wherein the drive controller is further to pre-erase a storage region of the non- volatile memory based on a storage capacity of the reserved region of the volatile memory.[0065] Example 16 includes the subject matter of any of Examples 1-15, and wherein the drive controller is further to receive a shutdown request for the solid state drive; retrieve data stored in the reserved region of the volatile memory in response to the shutdown request; and store the data retrieved from the reserved region of the volatile memory to the storage region of the non- volatile memory of the solid state drive.[0066] Example 17 includes the subject matter of any of Examples 1- 16, and further including a power fail response circuit to (i) detect a power failure event of the solid state drive, (ii) provide power to the drive controller, the volatile memory, and the non- volatile memory in response to the detection of the power failure event for a period of time, (iii) retrieve, during the period of time, data stored in the reserved region of the volatile memory in response to the detection of the power failure event, and (iv) store, during the period of time, the data retrieved from the reserved region of the volatile memory to the storage region of the non-volatile memory of the solid state drive.[0067] Example 18 includes a method for managing a high-performance memory region of a solid state drive, the method comprising reserving, by a drive controller of the solid state drive, a region of a volatile memory of the solid state drive for storage of host data; receiving, by the drive controller, a storage access request from a host of a computing system; determining, by the drive controller, whether the storage access request is directed to the reserved region of the volatile memory; and accessing, by the drive controller, the reserved region of the volatile memory in response to a determination that the storage access request is directed to the reserved region.[0068] Example 19 includes the subject matter of Example 18, and further comprising exposing the reserved memory region to the host of the computing system.[0069] Example 20 includes the subject 
matter of any of Examples 18 and 19, and wherein exposing the reserved memory region comprises informing, by the drive controller, the host of a namespace corresponding to the reserved memory region of the volatile memory.[0070] Example 21 includes the subject matter of any of Examples 18-20, and wherein exposing the reserved memory region comprises informing, by the drive controller, the host of a logical block addressed region corresponding to the reserved region of the volatile memory.[0071] Example 22 includes the subject matter of any of Examples 18-21, and wherein exposing the reserved memory region comprises memory mapping the reserved memory region of the volatile memory for use by the host.[0072] Example 23 includes the subject matter of any of Examples 18-22, and wherein reserving the region of the volatile memory comprises reserving a region of a dynamic random-access memory (DRAM) of the solid state drive.[0073] Example 24 includes the subject matter of any of Examples 18-23, and wherein reserving the region of the volatile memory comprises updating a logical-to-physical indirection table of the solid state drive based on the reserved region of the volatile memory.[0074] Example 25 includes the subject matter of any of Examples 18-24, and wherein determining whether the storage access request is directed to the reserved region of the volatile memory comprises determining whether the storage access request includes an address that indicates the storage access request is for the reserved region of the volatile memory.[0075] Example 26 includes the subject matter of any of Examples 18-25, and wherein determining whether the storage access request is directed to the reserved region of the volatile memory comprises determining whether the storage access request is directed to a logical block addressed region corresponding to the reserved region of the volatile memory.[0076] Example 27 includes the subject matter of any of Examples 18-26, and wherein accessing the reserved region of the volatile memory comprises writing data to or reading data from the reserved region of the volatile memory in response to a determination that the memory access is directed to the reserved region. 
[0077] Example 28 includes the subject matter of any of Examples 18-27, and wherein accessing the reserved region of the volatile memory comprises writing data to one or more pre- erased blocks of memory of the reserved region of the volatile memory in a single-level cell mode in response to a determination that the memory access is directed to the reserved region.[0078] Example 29 includes the subject matter of any of Examples 18-28, and further including accessing, by the drive controller, non-volatile memory of the solid state drive in response to a determination that the storage access request is not directed to the reserved region of the non- volatile memory.[0079] Example 30 includes the subject matter of any of Examples 18-29, and further including reinstating data stored in a non-volatile memory of the solid state drive to the reserved region of the volatile memory during an initialization procedure of the solid state drive.[0080] Example 31 includes the subject matter of any of Examples 18-30, and wherein reinstating data stored in the non-volatile memory to the reserved region of the volatile memory comprises retrieving, by the drive controller, data stored in the non-volatile memory; storing, by the drive controller, the data retrieved from the non-volatile memory to the reserved region of the volatile memory; and updating, by the drive controller, a logical-to-physical indirection table of the solid state drive based on the storage of the data to the reserved region of the volatile memory.[0081] Example 32 includes the subject matter of any of Examples 18-31, and further including pre-erasing a storage region of a non- volatile memory of the solid state drive based on a storage capacity of the reserved region of the volatile memory.[0082] Example 33 includes the subject matter of any of Examples 18-32, and further including receiving, by the drive controller, a shutdown request for the solid state drive; retrieving, by the drive controller, data stored in the reserved region of the volatile memory in response to the shutdown request; and storing, by the drive controller, the data retrieved from the reserved region of the volatile memory to a non-volatile memory of the solid state drive.[0083] Example 34 includes the subject matter of any of Examples 18-33, and further including detecting, by a power fail response circuit of the solid state drive, a power failure event of the solid state drive; providing, by the power fail response circuit, power to the drive controller, the volatile memory, and a non-volatile memory of the solid state drive in response to the detection of the power failure event for a period of time; retrieving, by the drive controller and during the period of time, data stored in the reserved region of the volatile memory in response to the detection of the power failure event; and storing, by the drive controller and during the period of time, the data retrieved from the reserved region of the volatile memory to a non- volatile memory of the solid state drive.[0084] Example 35 includes one or more machine -readable storage media comprising a plurality of instructions stored thereon that, when executed, cause a solid state drive to perform the method of any of Examples 18-34.[0085] Example 36 includes a solid state drive for managing a high-performance memory region, the solid state drive comprising means for reserving a region of a volatile memory of the solid state drive for storage of host data; means for receiving a storage access request from a host of a 
computing system; means for determining whether the storage access request is directed to the reserved region of the volatile memory; and means for accessing the reserved region of the volatile memory in response to a determination that the storage access request is directed to the reserved region.[0086] Example 37 includes the subject matter of Example 36, and further including means for exposing the reserved memory region to the host of the computing system.[0087] Example 38 includes the subject matter of any of Examples 36 and 37, and wherein the means for exposing the reserved memory region comprises means for informing the host of a namespace corresponding to the reserved memory region of the volatile memory.[0088] Example 39 includes the subject matter of any of Examples 36-38, and wherein the means for exposing the reserved memory region comprises means for informing the host of a logical block addressed region corresponding to the reserved region of the volatile memory.[0089] Example 40 includes the subject matter of any of Examples 36-39, and wherein the means for exposing the reserved memory region comprises means for memory mapping the reserved memory region of the volatile memory for use by the host.[0090] Example 41 includes the subject matter of any of Examples 36-40, and wherein the means for reserving the region of the volatile memory comprises means for reserving a region of a dynamic random-access memory (DRAM) of the solid state drive.[0091] Example 42 includes the subject matter of any of Examples 36-41, and wherein the means for reserving the region of the volatile memory comprises means for updating a logical-to-physical indirection table of the solid state drive based on the reserved region of the volatile memory.[0092] Example 43 includes the subject matter of any of Examples 36-42, and wherein the means for determining whether the storage access request is directed to the reserved region of the volatile memory comprises means for determining whether the storage access request includes an address that indicates the storage access request is for the reserved region of the volatile memory.[0093] Example 44 includes the subject matter of any of Examples 36-43, and wherein the means for determining whether the storage access request is directed to the reserved region of the volatile memory comprises means for determining whether the storage access request is directed to a logical block addressed region corresponding to the reserved region of the volatile memory.[0094] Example 45 includes the subject matter of any of Examples 36-44, and wherein the means for accessing the reserved region of the volatile memory comprises means for writing data to or reading data from the reserved region of the volatile memory in response to a determination that the memory access is directed to the reserved region.[0095] Example 46 includes the subject matter of any of Examples 36-45, and wherein the means for accessing the reserved region of the volatile memory comprises means for writing data to one or more pre-erased blocks of memory of the reserved region of the volatile memory in a single-level cell mode in response to a determination that the memory access is directed to the reserved region.[0096] Example 47 includes the subject matter of any of Examples 36-46, and further including means for accessing non-volatile memory of the solid state drive in response to a determination that the storage access request is not directed to the reserved region of the non-volatile memory.[0097] 
Example 48 includes the subject matter of any of Examples 36-47, and further including means for reinstating data stored in a non-volatile memory of the solid state drive to the reserved region of the volatile memory during an initialization procedure of the solid state drive.[0098] Example 49 includes the subject matter of any of Examples 36-48, and wherein the means for reinstating data stored in the non-volatile memory to the reserved region of the volatile memory comprises means for retrieving data stored in the non-volatile memory; means for storing the data retrieved from the non- volatile memory to the reserved region of the volatile memory; and means for updating a logical-to-physical indirection table of the solid state drive based on the storage of the data to the reserved region of the volatile memory.[0099] Example 50 includes the subject matter of any of Examples 36-49, and further including means for pre-erasing a storage region of a non-volatile memory of the solid state drive based on a storage capacity of the reserved region of the volatile memory. [00100] Example 51 includes the subject matter of any of Examples 36-50, and further including means for receiving a shutdown request for the solid state drive; means for retrieving data stored in the reserved region of the volatile memory in response to the shutdown request; and means for storing the data retrieved from the reserved region of the volatile memory to a non- volatile memory of the solid state drive.[00101] Example 52 includes the subject matter of any of Examples 36-51, and further including means for detecting a power failure event of the solid state drive; means for providing power to the drive controller, the volatile memory, and a non-volatile memory of the solid state drive in response to the detection of the power failure event for a period of time; means for retrieving, during the period of time, data stored in the reserved region of the volatile memory in response to the detection of the power failure event; and means for storing the data retrieved from the reserved region of the volatile memory to a non-volatile memory of the solid state drive. |
Technologies for executing a serial data processing algorithm on a single variable-length data buffer include padding data segments of the buffer, streaming the data segments into a data register, and executing the serial data processing algorithm on each of the segments in parallel. |
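The abstract above is terse, so here is a minimal, non-SIMD C illustration of the idea it describes: the buffer's words are dealt into interleaved segments, each segment is padded to a block boundary, and a toy per-lane fold stands in for the serial hash rounds that an actual implementation would run across all lanes inside one wide data register. The lane count, block size, and placeholder round function are assumptions made purely for illustration.

```c
#include <stddef.h>
#include <stdint.h>

#define NLANES      4     /* number of data paths in the (assumed) wide register */
#define BLOCK_WORDS 16    /* block size of the (assumed) serial algorithm, in words */
#define MAX_SEG     1024  /* per-lane capacity of this toy model, a multiple of BLOCK_WORDS */

/* Deal the buffer's 32-bit words into NLANES interleaved segments, pad each
 * segment to a block boundary, then fold each lane with a toy round function,
 * producing one result per lane. */
void hash_lanes(const uint32_t *buf, size_t nwords, uint32_t result[NLANES])
{
    /* static to keep the toy working set off the stack; not thread-safe */
    static uint32_t seg[NLANES][MAX_SEG];
    size_t len[NLANES] = {0};

    /* Interleave: word i of the buffer goes to segment i % NLANES. */
    for (size_t i = 0; i < nwords; i++) {
        size_t lane = i % NLANES;
        if (len[lane] < MAX_SEG)
            seg[lane][len[lane]++] = buf[i];
    }

    for (size_t lane = 0; lane < NLANES; lane++) {
        /* Pad with a fixed pattern so each segment is a whole number of blocks. */
        while (len[lane] % BLOCK_WORDS != 0 && len[lane] < MAX_SEG)
            seg[lane][len[lane]++] = 0x80000000u;

        /* Toy stand-in for the per-lane serial rounds (rotate-and-xor fold). */
        uint32_t acc = 0;
        for (size_t j = 0; j < len[lane]; j++)
            acc = (acc << 1 | acc >> 31) ^ seg[lane][j];
        result[lane] = acc;
    }
    /* A final pass (not shown) could concatenate the lane results and process
     * them again to form a single digest, as the claims that follow describe. */
}
```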
WHAT IS CLAIMED IS: 1. A computing device for processing a data buffer, the computing device comprising: a data buffer processing module to access an arbitrary- length data buffer having a buffer length and a plurality of data segments, each data segment having a segment length greater than zero and less than the buffer length; pad each data segment in accordance with a serial data processing algorithm; directly read each of the padded data segments into a data register, the data register having a plurality of data paths, each padded data segment being read directly into a different data path; and perform a serial data processing algorithm on each of the data paths substantially in parallel to produce a result for each data path. 2. The computing device of claim 1, wherein the data buffer has an arbitrary length. 3. The computing device of claim 1 or claim 2, wherein the data buffer processing module comprises a data buffer processing module to directly read each of the padded data segments into a different data path of the data register. 4. The computing device of any of claims 1-3, wherein the data buffer processing module comprises a data buffer processing module to pad each of the data segments in accordance with the serial data processing algorithm. 5. The computing device of any of claims 1-4, wherein the data buffer processing module is embodied as an extension to a cryptographic hash algorithm. 6. The computing device of any of claims 1-5, wherein the data buffer processing module comprises a data buffer processing module to execute on a single core of a microprocessor of the computing device. 7. The computing device of claim 6, wherein the data buffer processing module comprises a data buffer processing module to execute on a single thread of the single core. 8. The computing device of any of claims 1-7, wherein the data buffer processing module comprises a data buffer processing module to execute on a single instruction, multiple data-capable processor of the computing device. 9. The computing device of any of claims 1-8, the data buffer processing module comprises a data buffer processing module to execute with a single thread software application. 10. One or more machine readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a computing device to: define an arbitrary- length data buffer of the computing device as a plurality of data segments, each data segment having a segment length greater than zero and less than the length of the data buffer; pad each data segment in accordance with a serial data processing algorithm; stream the padded data segments into a data register, the data register having a plurality of data path execution units, each padded data segment being streamed into a different data path execution unit using a single data pointer; and execute a serial data processing algorithm in each of the data path execution units substantially in parallel to produce a result for each data path execution unit. 11. The machine readable storage media of claim 10, wherein the plurality of instructions further cause the computing device to define the segment length based on the width of the data register and a word size specified by the serial data processing algorithm. 12. The machine readable storage media of claim 10 or claim 11, wherein to define the data buffer as a plurality of data segments comprises to divide the data buffer into the plurality of data segments in an interleaved fashion. 13. 
The machine readable storage media of claim 12, wherein the data buffer comprises a plurality of data words, and wherein to divide the data buffer into the plurality of data segments in an interleaved fashion comprises to assign each data word in the data buffer to a different data segment, so that each data segment comprises an array of data words. 14. The machine readable storage media of any of claims 10-13, wherein each result comprises a plurality of data words, and wherein the plurality of instructions further cause the computing device to interleave the results by the data words. 15. The machine readable storage media of any of claims 10-14, wherein to execute a serial data processing algorithm comprises to execute a cryptographic hash function. 16. The machine readable storage media of claim 15, wherein the plurality of instructions further cause the computing device to generate a hash digest for each of the padded data segments. 17. The machine readable storage media of claim 16, wherein the plurality of instructions further cause the computing device to combine the hash digests to form a new data buffer and execute the cryptographic hash function on the new data buffer. 18. The machine readable storage media of claim 17, wherein to combine the hash digests comprises to concatenate the results and execute the serial data processing algorithm on the concatenated results. 19. The machine readable storage media of any of claims 10-18, wherein the plurality of instructions further cause the computing device to determine a block size associated with the serial data processing algorithm and to pad each of the data segments so that the length of each of the padded data segments is a multiple of the block size. 20. The machine readable storage media of claim 19, wherein the plurality of instructions further cause the computing device to append a fixed pattern of data bits to each of the data segments. 21. The machine readable storage media of any of claims 10-20, wherein the plurality of instructions further cause the computing device to determine the number of data segments based on a characteristic of a microprocessor of the computing device. 22. The machine readable storage media of any of claims 10-21, wherein the plurality of instructions further cause the computing device to determine the number of data segments based on a characteristic of the serial data processing algorithm. 23. A method for processing an arbitrary- length data buffer, the method comprising: defining the data buffer as a plurality of data segments, each data segment having a segment length greater than zero and less than the length of the data buffer; padding each data segment in accordance with a serial data processing algorithm; streaming the padded data segments into a data register, the data register having a plurality of data path execution units, each padded data segment being streamed into a different data path execution unit using a single data pointer; and executing a serial data processing algorithm in each of the data path execution units substantially in parallel to produce a result for each data path execution unit. 24. 
The method of claim 23, wherein the data buffer comprises a plurality of data words, and wherein: defining the data buffer as a plurality of data segments comprises dividing the data buffer into the plurality of data segments in an interleaved fashion, and dividing the data buffer into the plurality of data segments in an interleaved fashion comprises assigning each data word in the data buffer to a different data segment, so that each data segment comprises an array of data words. 25. The method of any of claim 23 or claim 24, further comprising determining a block size associated with the serial data processing algorithm and padding each of the data segments so that the length of each of the padded data segments is a multiple of the block size. |
PARALLEL PROCESSING OF A SINGLE DATA BUFFER CROSS-REFERENCE TO RELATED APPLICATIONS This present application claims priority under 35 U.S. C. § 119(e) to U.S. Provisional Application Serial No. 61/670,472, filed July 11, 2012 and U.S. Patent Application Serial No. 13/631,763, filed September 28, 2012. BACKGROUND Software for verifying the security of data files and computer programs is prevalent in many different contexts, such as operating system boot sequences, loading of program code or data files, web browsing, data communication, and data storage. Serial data processing algorithms such as those used for authentication and/or encryption can operate in a chained dependent fashion on a single buffer of data. Those algorithms can be constrained by serial chaining in that the output resulting from the processing of one block of data in the buffer is often required for the processing of a subsequent block. For example, cryptographic hash functions such as MD5 (Message-Digest Algorithm) and SHA1, SHA256 and SHA512 (Secure Hash Algorithms) can be expensive in terms of computation on general-purpose processors. Such hash functions work sequentially on single buffers of data, updating a hash digest state with the computations derived from each data block and using a number of rounds of processing that are dependent on each other. The sequential processing of the blocks of the single buffer limits the performance on modern processors. Methods such as multi-buffer processing using vector Single Instruction Multiple Data (SIMD) units have been proposed for better performance in applications where it is possible to work on multiple independent data buffers; however, those methods are not applicable to applications involving the hashing of a single buffer. Tree hashing is another technique that has been used, albeit across multiple cores or engines. BRIEF DESCRIPTION The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. FIG. 1 is a simplified block diagram of at least one embodiment of a computing device in connection with which the disclosed methods may be implemented; FIG. 2 is a simplified module diagram of at least one embodiment of a system for parallel processing of a single data buffer; and FIG. 3 is a simplified flow diagram of at least one embodiment of a method for parallel processing of a single data buffer. DETAILED DESCRIPTION While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims. References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. 
Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine- readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device). In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features. Referring now to FIG. 1, a data buffer processing module 130 is embodied in an illustrative computing device 100. In use, the data buffer processing module 130 takes as input a single data buffer 132 (e.g., a string or "message" of arbitrary length). The data buffer processing module 130 determines a level of parallelism for the single data buffer 132; that is, a number of "segments" of the single data buffer 132 that can be processed in parallel by a serial data processing algorithm 128 (e.g., a cryptographic hash function). The data buffer processing module 130 manages the parallel processing of the segments by the algorithm 128. Although different, the output of the algorithm 128 after such parallel processing has a security strength that is comparable to the results normally achieved by executing the algorithm 128 on a single data buffer in a traditional way (e.g., sequentially). Further, performance gains can be achieved as a result of the segmenting and parallel processing of the single data buffer 132. In this way, the data buffer processing module 130 can perform a serial data processing algorithm on a single data buffer of any arbitrary length, even though the underlying algorithm works on blocks of a specific size (e.g. 64 bytes). The illustrative computing device 100 includes at least one processor 110, a memory 120, an input/output (I/O) subsystem 122, a storage device 124, and one or more peripheral devices 140. The computing device 100 may be embodied in or as any type of computing device, such as, for example, a desktop computer system, a laptop or tablet computer system, a server, an enterprise computer system, a network of computers, a handheld or otherwise mobile computing device, or other electronic device, depending on the particular application. 
The illustrative processor 110 includes one or more processor cores or logical sections of a single core, e.g., processor cores 112, 114, 116, which are referred to herein simply as "cores" for ease of description. In some embodiments, one or more of the cores 112, 114, 116 is configured to process single-threaded computer programs (such as, in some embodiments, the data buffer processing module 130) using a SIMD (Single Instruction, Multiple Data) instruction set or similar set of computer instructions. More specifically, in some embodiments, at least one of the cores 112, 114, 116 is configured with an instruction set that includes one or more streaming extensions, such as the Streaming SIMD Extensions (SSE) or later versions (e.g., SSE/? or AVX (Advanced Vector Extensions)). The core or cores 112, 114, 116 include or are communicatively coupled to one or more data registers 118. The registers 118 may be utilized to temporarily store data and/or instructions during operation of the serial data processing algorithm 128, the data buffer processing module 130, and/or other components of the computing device 100. Each register 118 has a register size or "width," which is the amount of data the register 118 can store at a given time. At least one of the data registers 118 is configured for data- level parallelism. For example, in some embodiments, at least one data register 118 is configured for SIMD or similar data-level parallel processing; that is, it can be divided into multiple functional units (e.g., "lanes," "data paths," or "execution units") that can perform the same operation on multiple data at the same time or substantially the same time. For example, in a SIMD or similar register having a width of 128 bits, computer instructions can specify a number of lanes or data paths of the register 118 that can each process a portion of the 128 bits of data in parallel, so that the algorithm 128 can be executed on each of the data paths at the same time, independently of the other data paths. The illustrative cores 112, 114, 116 also include or are communicatively coupled to one or more cache memory (not shown). The cache memory may be utilized to temporarily store data and/or instructions during operation of the serial data processing algorithm 128, the data buffer processing module 130, and/or other components of the computing device 100. In addition to the cache memory and the registers 118, the processor 110 and/or its cores 112, 114, 116 include, or are otherwise communicatively coupled to, the memory 120. Portions of the memory 120 may be embodied as any type of suitable memory device, such as a dynamic random access memory device (DRAM), synchronous dynamic random access memory device (SDRAM), double-data rate dynamic random access memory device (DDR SDRAM) and/or other volatile memory devices. The processor 110 is also communicatively coupled to the I/O subsystem 122. Although not specifically shown, the I/O subsystem 122 typically includes a memory controller (e.g., a memory controller subsystem or northbridge), an input/output controller (e.g., an input/output controller subsystem or southbridge), and a firmware device. Of course, in other embodiments, I/O subsystems having other configurations may be used. For example, in some embodiments, the I/O subsystem 122 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 110 and other components of the computing device 100, on a single integrated circuit chip. 
As such, it will be appreciated that each component of the I/O subsystem 122 may be located on a common integrated circuit chip in some embodiments. The illustrative I/O subsystem 122 is communicatively coupled to one or more storage devices 124. Portions of the storage 124 may be embodied as any suitable device for storing data and/or instructions, such as disk storage (e.g. hard disks), memory cards, memory sticks, and/or others. In some embodiments, the serial data processing algorithm 128, the data buffer processing module 130, and/or the single data buffer 132 are at least temporarily embodied in the storage device 124. During execution, portions of the serial data processing algorithm 128, the data buffer processing module 130 and/or the single data buffer 132 may be loaded into the memory 120, cache memory, and/or the registers 118, for faster processing or other reasons. In other embodiments, the serial data processing algorithm 128 and/or the data buffer processing module 130 may be embodied as circuitry, machine-executable logic units, or the like. That is, the serial data processing algorithm 128 and/or the data buffer processing module 130 may each be embodied as software, firmware, hardware, and/or a combination thereof, in various embodiments. Further, the data buffer processing module 130 may be embodied as a sub-module or "extension" of the serial data processing algorithm 128, or as a function, procedure, or library object callable by the serial data processing algorithm 128 and/or other software (e.g., an operating system, a security application, and/or others). For example, the buffer processing module 130 may be embodied as one or more software extensions to an existing or future cryptographic hash algorithm, such as a Secure Hash Algorithm. The I/O subsystem 122 may be communicatively coupled to one or more peripheral devices 140. The peripheral device(s) 140 may include one or more network interfaces, graphics and/or video adaptors, keyboard, touchscreens, displays, printers, data storage devices, and/or other peripheral devices, depending upon, for example, the intended use of the computing device 100. Further, it should be appreciated that the computing device 100 may include other components, sub-components, and devices not illustrated in FIG. 1 for clarity of the description. In general, the components of the computing device 100 are communicatively coupled as shown in FIG. 1, by one or more signal paths, which are represented schematically as double-headed arrows. Such signal paths may be embodied as any type of wired or wireless signal paths capable of facilitating communication between the respective devices. For example, the signal paths may be embodied as any number of wires, printed circuit board traces, via, bus, point-to-point interconnects, intervening devices, and/or the like. Referring now to FIG. 2, an illustrative system 200 in which the buffer processing module 130 manages parallel execution of the serial data processing algorithm 128 across an input data buffer 210, is shown. The illustrative input data buffer 210 is a string of data characters (e.g., a data file or "message") having an arbitrary size or length L (as measured in, e.g., bits or bytes). 
As described in more detail below, the buffer processing module 130 divides the contents of the input data buffer 210 into a number of segments S, where the number of segments is a positive integer representing the level or degree of parallelism across the input data buffer 210 that is desired or which is possible given the requirements of a particular design or implementation of the system 200. In the illustrative embodiments, each segment may be padded to a specified length in accordance with requirements of the serial data processing algorithm 128. In other words, some segments may be padded while others are not padded, depending on the segment's length before padding and the serial data processing algorithm 128's specifications. The buffer processing module 130 streams the contents of the input data buffer 210 (e.g., the segments, padded as needed) into the data register 118 so that each segment is assigned to a different lane or data path of the register 118. The buffer processing module 130 initiates execution of the algorithm 128 on each lane or data path of the register 118, in parallel, so that each segment is processed by the serial data processing algorithm 128 concurrently. The algorithm 128 processes, in parallel, each of the segments (padded, as needed) of the data buffer 210, serially in data blocks of a specified size B (as measured in, e.g., bits or bytes), where each data block is made up of a number of data words of size W (as measured in, e.g., bits or bytes), such that B is a multiple of W. The algorithm 128 generates an output (or "message digest," or "hash digest" in some embodiments) for each segment, which may be at least temporarily stored in an output data buffer 212. The contents of each of the output data buffers 212 (1)...(S) (where S is the number of segments) has a fixed length D (as measured in, e.g., bits or bytes). Both the input data buffer 210 and the output data buffers 212(1)...212(S) may be embodied as the single data buffer 132, or in one or more temporary storage buffers, in the various embodiments. For instance, the contents of the single data buffer 132 may initially correspond to the contents of the input data buffer 210, and may be updated as the execution of the buffer processing module 130 and/or the serial data processing algorithm 128 proceeds. In some embodiments, the algorithm 128 is a cryptographic hash function such as MD5, SHA1, SHA256, or SHA512, and the data buffer processing module 130 uses as parameters certain specifications of the cryptographic hash function (as defined, e.g., in the relevant Federal Information Processing Standards Publication or FIPS PUB) in determining the number of segments S. As an example, the standards for the SHA256 secure hash function specify that B=512 bits, W=32 bits, and D=256 bits. The standard SHA256 hash function breaks the contents of an arbitrary- length input buffer into blocks of size B, and executes a number of computational rounds on each block using, in each round, a word of size W from the block. Each round updates the buffer, such that the output of one round is an input to the subsequent round. Traditionally, the SHA256 hash function processes the blocks of the contents of the input buffer sequentially, such that the hash digest produced for one block is used as the initial hash digest for the processing of the next block, and so on, until each block of data in the input buffer has been processed. 
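By way of a non-limiting illustration only (this sketch is not part of the original disclosure; it uses Python's standard hashlib library as a stand-in for the hash function), the traditional sequential single-buffer behavior described above can be modeled as follows, with the block-by-block chaining performed inside the library's update calls:

    import hashlib

    # A minimal sketch of conventional single-buffer hashing: the message is
    # consumed block-by-block, and each block updates the running digest state.
    def sequential_sha256(buf: bytes, block_size: int = 64) -> bytes:
        h = hashlib.sha256()
        for i in range(0, len(buf), block_size):
            h.update(buf[i:i + block_size])  # each block depends on the prior state
        return h.digest()                    # identical to hashlib.sha256(buf).digest()

Because each block's processing depends on the state produced by the previous block, the blocks of a single buffer cannot be processed in parallel in this conventional arrangement.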
In contrast, the buffer processing module 130 defines multiple segments across a single data buffer, where each segment includes one or more blocks of data, and applies the algorithm 128 to each of the segments of the data buffer in parallel. For example, if a data register has a width of 256 bits, then the buffer processing module 130 can divide the contents of the input data buffer 210 into (register width)/W or 256/32 = 8 segments and execute the algorithm 128 on each of the 8 segments in parallel. Referring now to FIG. 3, an illustrative method 300 executable as computerized programs, routines, logic and/or instructions by the buffer processing module 130 and/or other modules or components of the computing device 100, for parallel processing of a single data buffer, is shown. At block 310, the method 300 determines the number of segments S in which to divide the contents of the input data buffer 210, and creates the determined number of segments by dividing the contents of the input buffer 210, accordingly. In some embodiments, the number of segments may be pre-determined and simply accessed as a parameter, argument, or stored value (e.g., from a look-up table or database). In other embodiments, the number of segments may be determined at load time or runtime. In some embodiments, the number of segments may be a function of the width of the register 118, the parameters or specifications of the serial data processing algorithm 128 (e.g., block size, word size, output length, etc.), and/or the length of the input data buffer 210. As an example, where an SHA256 hash function is used as the algorithm 128, S=8, W=4 bytes, and B=64 bytes. Still at block 310, each of the segments is defined as being comprised of data words having a particular width (e.g., 32 bits). In some embodiments, the segment word width corresponds to the word width W specified by the algorithm 128. The segments are each created using every Sth word of the input data buffer 210, such that the length of the segment is evenly divisible by the block size B. The length L of the input data buffer 210 is divided by the segment block size (S multiplied by B, or SB) to determine how much of the contents of the input data buffer 210 can be processed in segments of the same size. Where the length L of the input data buffer is not evenly divisible by SB, one or more of the segments may be padded or a final segment comprising the remaining data may be created. In the SHA256 example, SB = 8*64 = 512 bytes. Since there are 8 segments, each segment is formed using every 8th data word (32 bits, or 4 bytes) in the input data buffer 210, up to 512*N bytes, where N is a positive integer and 512*N is less than L. At block 312, the method 300 performs any necessary padding of each of the segments, either as part of a pre-processing routine or "on the fly" as needed. For example, in the case of cryptographic hash functions, each segment may be padded as needed by appending (e.g., by concatenation) a number of data bits plus an indication of the buffer length to the end of the message so that the segment is of a specified length for processing by the selected algorithm 128. In some embodiments, the padding includes a "1" bit followed by the necessary number of "0" bits followed by the buffer length. In other embodiments, other combinations or patterns of "0" and "1" bits may be used in the padding of each segment. The standards or specifications that define the underlying algorithm 128 specify the padding scheme.
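By way of a non-limiting illustration only (the function and variable names below are assumptions for clarity and are not part of the original disclosure), the segment-creation step of block 310 can be sketched in Python as follows:

    # A minimal sketch of block 310: choosing the number of segments S from the
    # register width and word size, and forming S word-interleaved segments
    # from a single buffer.
    def make_segments(buf: bytes, S: int, W: int) -> list:
        segments = [bytearray() for _ in range(S)]
        n_words = len(buf) // W          # whole W-byte words in the buffer
        for i in range(n_words):
            # every S-th word goes to the same segment (word-level interleaving)
            segments[i % S] += buf[i * W:(i + 1) * W]
        # any tail bytes and the per-segment padding are handled at block 312
        return [bytes(s) for s in segments]

    # e.g., a 256-bit register and 32-bit (4-byte) words give S = 256 // 32 = 8
    register_width_bits, W = 256, 4
    S = register_width_bits // (W * 8)
    segments = make_segments(b"example single data buffer", S, W)

With a 256-bit register and 32-bit words this yields S = 8, matching the SHA256 example above; the per-segment padding itself follows the padding scheme of the selected algorithm, as discussed next.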
In some embodiments, each of the segments is extended by a number of bits sufficient to make the padded buffer the smallest multiple of the block size. For example, each segment of the buffer 210 may be padded to its nearest multiple of B bytes, and then processed with S-way SIMD processing applied to the algorithm 128 to generate S digests. In this case, the per-segment padding is done according to the algorithm 128's standard padding scheme. In some cases (such as in the case of a remainder segment), a segment may have a different padded length than other padded segments. For instance, padding may result in a segment having an additional block when the amount of data in the segment plus the requisite padding exceeds the block size. At block 314, the method 300 streams or otherwise reads the segments into the data paths of the register 118, so that each segment is read into a different data path (using, e.g., interleaving). In some embodiments, this is accomplished by using a single data pointer that is incremented up to SB; that is, until all of the evenly-sized segments have been processed. In the SHA256 example, eight 32-bit words are read into 8 data paths of the register at a time. As another example, executing SHA-1 on a SIMD-capable microprocessor with 128-bit registers would have the following parameter settings: B=64 Bytes, W=4 Bytes, S=4, D=20 Bytes. At block 316, the serial data processing algorithm 128 is executed on each of the padded data segments in parallel. That is, for each padded segment, the algorithm 128 sequentially processes the blocks of that segment, at the same time as the other segments are being similarly processed by the algorithm 128. Thus, an intermediate result (e.g., a hash digest) is created for each padded segment. In the SHA256 example, the SHA256 algorithm is executed on each data path/32-bit word substantially simultaneously, and then the next 8 words are read into the register data paths and processed in parallel by the SHA256 algorithm, and so on, up to the block size B. Due to the fact that each data segment is padded and processed according to the algorithm 128's specifications, in some embodiments it is not necessary for the individual segment results to be combined. Thus, the segment results may be stored in separate buffers or together in one buffer (e.g., if concatenated). Optionally, at block 318, the individual S digests may be combined to form a single result, e.g., the final output of the algorithm 128. For example, the set of S digests may be treated as another data buffer of length S*D, and then a final hash of size D may be generated in a single buffer fashion. The segment results can be combined in a number of different ways, including using an exclusive-or (XOR) or addition (ADD) function, or by concatenating the segment results and then executing the algorithm 128 again. Using the SHA256 example, each of the 8 hash digests may be combined into one 256 bit hash digest. It should be appreciated by those skilled in the art that the method 300 can be easily adapted to other processor configurations and serial data processing algorithms. For example, registers having other register widths can be used. As an example, using the AVX3, which has a width of 512 bits, the number of segments S could be 16 rather than 8, and each segment could be made up of every 16 th(32-bit) word. 
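By way of a non-limiting illustration only (a Python sketch using the standard hashlib library as a stand-in for algorithm 128; the per-segment hashes are written as a loop here, whereas on SIMD hardware they would be computed in parallel, one segment per data path), blocks 310 through 318 can be modeled end to end as follows:

    import hashlib

    # A minimal end-to-end sketch of the multi-hash scheme for SHA256.
    def multi_hash_sha256(buf: bytes, S: int = 8, W: int = 4) -> bytes:
        # Block 310: form S word-interleaved segments (whole words only; any
        # trailing partial word is omitted here for brevity).
        words = [buf[i:i + W] for i in range(0, len(buf) - len(buf) % W, W)]
        segments = [b"".join(words[k::S]) for k in range(S)]
        # Blocks 312/316: hashlib applies SHA-256's standard padding to each
        # segment, so each per-segment digest corresponds to D_k = H(Seg_k).
        digests = [hashlib.sha256(seg).digest() for seg in segments]
        # Block 318 (optional): interleave the S digests word-by-word into a
        # new message and hash it once more for a single fixed-length result.
        D = len(digests[0])
        m1 = b"".join(d[i:i + W] for i in range(0, D, W) for d in digests)
        return hashlib.sha256(m1).digest()

Note that the final digest produced this way differs from a direct SHA256 of the original buffer, consistent with the observation above that the output of the parallelized scheme is different from, but of comparable strength to, the conventional result.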
In some embodiments, the data segments are analogous to interleaved independent buffers, where a number of independent hash digests are generated for those segments in parallel as discussed above. In some embodiments, the number of interleaved segments is a power of 2. In creating the segments, some embodiments of the method 300 interleave the data at a finer granularity (e.g., data words), rather than breaking the buffer 210 down into block- or greater-sized processing portions. Referring again to FIG. 3, an illustrative embodiment of the method 300 uses a hash algorithm H, which is defined to work on an integral number of blocks of size B bytes each. The below embodiment hashes a message M0 of length L with a given level of parallelism S (where the || symbol denotes concatenation). After the segments are created, the padding function associated with H extends each segment of the message with a pre-determined pattern and a concatenation of the segment length to the smallest length that is a multiple of B bytes. Referring to block 310 of FIG. 3, the message M0 is divided into S segments each of length L/S. The message M0 may be divided in an interleaved fashion such that every word (of size W bits) of M0 is assigned to a different segment. Each segment may be represented as an array of W-bit words: Seg0 = M0[0] || M0[S] || M0[2S] || ...; Seg1 = M0[1] || M0[S+1] || M0[2S+1] || ...; and Seg(S-1) = M0[S-1] || M0[2S-1] || M0[3S-1] || ..., where each M0[n] is a word-size-W index into the message. Referring to block 312 of FIG. 3, the padding function specified by the algorithm 128 is applied to each segment of the message, generating individual segments each having a padded length. The padded length of each segment is the smallest length to which the respective segment can be extended that is a multiple of B bytes. As mentioned above, some segments may have a different padded length than other segments. Referring to block 316 of FIG. 3, S leaf-level digests Dk are generated on the padded segments as Dk = H(Segk) for k = 0, ..., (S-1). Referring to block 318 of FIG. 3 (optionally), a new message M1 may be created by interleaving the resultant digests from block 316 by every word of size W bits. If M1 = D0[0] || D1[0] || ... || D(S-1)[0] || D0[1] || ... || D(S-1)[(D/W)-1], then each Dk[n] may be a word-size-W index into a segment's digest. The hash algorithm H may then be applied to M1 (e.g., H(M1)). In some embodiments, the contents of the data buffer 210 aligned in memory is read (e.g., "streamed") directly into SIMD registers without the need for transposing. In some embodiments, the method 300 allows the data being streamed (e.g., from a network connection) to be fed directly into the register 118 without the need to know the length of the buffer 210 at start time. Accordingly, single-thread applications do not have to be modified (other than at the hash algorithm level) to take advantage of the performance benefits of the disclosed parallel processing. In some embodiments, the algorithm 128 can be selected or ordered based on computation and/or security considerations, and the current (possibly ordered) list of cryptographic hash algorithms in various protocols/standards can be augmented with parallelized versions as disclosed herein (e.g., SHA1x4, SHA1x8, SHA256x4, SHA256x8, etc.). In some embodiments, e.g., in applications involving verifying signatures of files that are securely loaded, the signing entity replaces the existing cryptographic hashing algorithm of the chosen security (e.g.
SHA256) with a version of the method 300 that is most efficient to compute for verification. For instance, if the verifying entity has a 128-bit SIMD data-path execution unit in its processor core, and if an SHA256-strength digest is desired, the SHA256x4 algorithm may be desired (as the SHA256 algorithm is 32-bit based, a 128-bit SIMD execution unit can process 128/32 = 4 segments, in parallel). Thus, instead of using one of the currently used 32-bit algorithms (e.g., MD5, SHAl, SHA256), the verifying entity would use a corresponding MD5 x8, SHAl x4, SHA256 x4 parallelized algorithm. In some embodiments, additional parallelism may be desired with MD5 due to the algorithm's constrained data-dependency chain, even though only 4 segments are needed from a 128-bit SIMD perspective. In embodiments where there may be many verifying devices of different computation strengths, the signing entity may need to determine the level of parallelism that works for the majority of its verifying devices. The disclosed embodiments do not require the server to estimate this very accurately, as a larger level of parallelism can be created during signing, and the verifying agents can perform a multi-pass approach during verification, if their SIMD or hardware capability cannot process as many segments as specified, all at once. For example, a signer can use an x4 scheme while a verifying agent could perform two passes of an x2 scheme. In some embodiments, some loss of efficiency could result if too many passes are needed (due, e.g., to managing multiple state variables of the digests), however, data can still be brought in efficiently in a streaming manner just once. In this case, the application will need to cycle through the sets of state variables. For instance, in some cases, a client device may not have a SIMD unit at all, and needs to perform simple scalar operations to process a SHA256x4 hash. In this case, instead of working with 1 set of SHA256 state variables (32 Bytes), it will simultaneously work on 4 such copies of state variables (128 Bytes), cycling through them as it processes words from the data buffer. This increase in state size is very small. However, the working-set size increase associated with message schedules for a block (e.g., for SHA) may be undesirable in some cases. If the increase in working-set size is problematic, one could choose to store four blocks of data and strictly work on one interleaved block at a time. Many other variations are possible, and various embodiments can permit any device to process a parallel hash signature efficiently without undue burden. However, if a fixed hardware engine is designed to perform the entire hash function, including padding, on a given buffer/length input, then the padding can be designed to be the same as the hardware to achieve the same result. If the hardware engine works on a per block basis or has a mode that does not include padding, then it can be used to perform the disclosed multi-hash methods. Although the disclosed embodiments are capable of large degrees of parallelism (e.g., x32 or x64), it may be desirable in some embodiments to configure the method 300 in accordance with the capabilities of existing devices or reasonably anticipated future devices (e.g., x4 or x8). In some embodiments, an SHA256x4 version of the method 300 has been shown to provide an approximately 2.6x performance gain over the best SHA256 algorithm computation on a reasonably sized 1KB data buffer. 
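By way of a non-limiting illustration only (the helper name and lane count below are assumptions, not part of the original disclosure), the multi-pass approach described above can be sketched as follows: a verifier whose hardware can only process a smaller number of segments at once still produces the same set of S per-segment digests by working through the segments in batches.

    import hashlib

    # A hedged sketch of multi-pass verification: process S segments in
    # ceil(S / lanes) passes, one batch of `lanes` segments per pass.
    def hash_segments_in_passes(segments, lanes=2):
        digests = []
        for start in range(0, len(segments), lanes):
            batch = segments[start:start + lanes]
            # each batch would be handed to a lanes-wide SIMD routine; plain
            # per-segment hashing stands in for that here
            digests.extend(hashlib.sha256(seg).digest() for seg in batch)
        return digests

    # e.g., an x4-signed scheme verified on a 2-lane device in two passes:
    # digests = hash_segments_in_passes(make_segments(data, S=4, W=4), lanes=2)

The per-segment digests are independent of how many passes are used, which is why a signer's chosen level of parallelism need not match the verifier's hardware width exactly.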
In some embodiments, an MD5 x8 version of the method 300 has been shown to result in an approximately 4 Ax performance gain over the standard MD5 algorithm. The multi-hash performance should scale in proportion to increasing data-path widths of future processors. Further, using the disclosed embodiments, the resulting digest should be at least as secure and collision-resistant as the digest obtained by a direct application of the underlying hash function. In addition to the most commonly used hash functions today, the disclosed embodiments can be adapted for the new SHA3 candidates. EXAMPLES Illustrative examples of the devices, systems, and methods disclosed herein are provided below. An embodiment of the devices, systems, and methods may include any one or more, and any combination of, the examples described below. Example 1 includes a computing device for processing a data buffer. The computing device includes a data buffer processing module to access an arbitrary- length data buffer having a buffer length and a plurality of data segments, each data segment having a segment length greater than zero and less than the buffer length; pad each data segment in accordance with a serial data processing algorithm; directly read each of the padded data segments into a data register, the data register having a plurality of data paths, each padded data segment being read directly into a different data path; and perform a serial data processing algorithm on each of the data paths substantially in parallel to produce a result for each data path. Example 2 includes the subject matter of Example 1, and wherein the data buffer has an arbitrary length. Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the data buffer processing module comprises a data buffer processing module to directly read each of the padded data segments into a different data path of the data register. Example 4 includes the subject matter of any of Examples 1-3, and wherein the data buffer processing module comprises a data buffer processing module to pad each of the data segments in accordance with the serial data processing algorithm. Example 5 includes the subject matter of any of Examples 1-4, and wherein the data buffer processing module is embodied as an extension to a cryptographic hash algorithm. Example 6 includes the subject matter of any of Examples 1-5, and wherein the data buffer processing module comprises a data buffer processing module to execute on a single core of a microprocessor of the computing device. Example 7 includes the subject matter of any of Examples 1-6, and wherein the data buffer processing module comprises a data buffer processing module to execute on a single thread of the single core. Example 8 includes the subject matter of any of Examples 1-7, and wherein the data buffer processing module comprises a data buffer processing module to execute on a single instruction, multiple data-capable processor of the computing device. Example 9 includes the subject matter of any of Examples 1-8, and wherein the data buffer processing module comprises a data buffer processing module to execute with a single thread software application. Example 10 includes a method for processing an arbitrary- length data buffer. 
The method includes defining the data buffer as a plurality of data segments, each data segment having a segment length greater than zero and less than the length of the data buffer; padding each data segment in accordance with a serial data processing algorithm; streaming the padded data segments into a data register, the data register having a plurality of data path execution units, each padded data segment being streamed into a different data path execution unit using a single data pointer; and executing a serial data processing algorithm in each of the data path execution units substantially in parallel to produce a result for each data path execution unit. Example 11 includes the subject matter of Example 10, and further includes defining the segment length based on the width of the data register and a word size specified by the serial data processing algorithm. Example 12 includes the subject matter of any of Examples 10 and 11, and wherein defining the data buffer as a plurality of data segments comprises dividing the data buffer into the plurality of data segments in an interleaved fashion. Example 13 includes the subject matter of any of Examples 10-12, and wherein the data buffer comprises a plurality of data words, and dividing the data buffer into the plurality of data segments in an interleaved fashion comprises assigning each data word in the data buffer to a different data segment, so that each data segment comprises an array of data words. Example 14 includes the subject matter of any of Examples 10-13, and wherein each result comprises a plurality of data words, and further comprising interleaving the results by the data words. Example 15 includes the subject matter of any of Examples 10-14, and wherein executing a serial data processing algorithm comprises executing a cryptographic hash function. Example 16 includes the subject matter of any of Examples 10-15, and further includes generating a hash digest for each of the padded data segments. Example 17 includes the subject matter of any of Examples 10-16, and further includes combining the hash digests to form a new data buffer and executing the cryptographic hash function on the new data buffer. Example 18 includes the subject matter of any of Examples 10-17, and wherein the combining comprises concatenating the results and executing the serial data processing algorithm on the concatenated results. Example 19 includes the subject matter of any of Examples 10-18, and further includes determining a block size associated with the serial data processing algorithm and padding each of the data segments so that the length of each of the padded data segments is a multiple of the block size. Example 20 includes the subject matter of any of Examples 10-19, and further includes appending a fixed pattern of data bits to each of the data segments. Example 21 includes the subject matter of any of Examples 10-20, and further includes determining the number of data segments based on a characteristic of a microprocessor of the computing device. Example 22 includes the subject matter of any of Examples 10-21, and further includes determining the number of data segments based on a characteristic of the serial data processing algorithm. Example 23 includes a computing device having a processor and a memory having stored therein a plurality of instructions that when executed by the processor cause the computing device to perform the method of any of Examples 10-22. 
Example 24 includes one or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device performing the method of any of Examples 10-22. Example 25 includes one or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device dividing the data buffer into a plurality of data segments, each data segment having a segment length greater than zero and less than the length of the data buffer; padding each data segment according to a serial data processing algorithm; reading each padded data segment directly into a different data path execution unit of a data register of the computing device; and executing a cryptographic hash algorithm on each of the data path execution units substantially in parallel to produce a result for each data path execution unit. Example 26 includes the subject matter of Example 25, and further includes combining the results produced at the data path execution units. Example 27 includes the subject matter of any of Examples 25 and 26, and further includes executing the cryptographic hash algorithm on the combined results. Example 28 includes the subject matter of any of Examples 25-27, and wherein the cryptographic hash algorithm comprises a Secure Hash Algorithm or an MD5 Algorithm. Example 29 includes the subject matter of any of Examples 25-28, and further includes defining the segment length based on the width of the data register and a word size specified by the cryptographic hash algorithm. Example 30 includes the subject matter of any of Examples 25-29, and wherein defining the data buffer as a plurality of data segments comprises dividing the data buffer into the plurality of data segments in an interleaved fashion. Example 31 includes the subject matter of any of Examples 25-30, and wherein the data buffer comprises a plurality of data words, each data word comprising a plurality of data bits, and dividing the data buffer into the plurality of data segments in an interleaved fashion comprises assigning each data word in the data buffer to a different data segment, so that each data segment comprises an array of data words. Example 32 includes the subject matter of any of Examples 25-31, and wherein each result comprises a plurality of data words, and the method comprises interleaving the results by the data words. Example 33 includes the subject matter of any of Examples 25-32, and further includes determining the number of data segments based on one or more of a characteristic of a microprocessor of the computing device and a characteristic of the cryptographic hash algorithm. |
Apparatuses and methods for memory array accessibility can include an apparatus with an array of memory cells. The array can include a first portion accessible by a controller of the array and inaccessible to devices external to the apparatus. The array can include a second portion accessible to the devices external to the apparatus. The array can include a number of registers that store row addresses that indicate which portion of the array is the first portion. The apparatus can include the controller configured to access the number of registers to allow access to the second portion by the devices external to the apparatus based on the stored row addresses. |
1.A device, including:An array of memory cells, wherein the array includes:The first part, which is accessible by the controller of the array and not accessible by devices external to the equipment;The second part, which can be accessed by the device outside the device; andSeveral registers configured to store a row address indicating which part of the array is the first part; andThe controller is configured to access the several registers to allow the device external to the device to access the second portion based on the stored row address.2.The apparatus of claim 1, wherein the plurality of registers include access control registers.3.The apparatus of claim 2, wherein the access control register comprises a row access control register.4.The apparatus of claim 1, wherein the controller is configured to:Allow execution of instructions stored in the first part to write to or read from the second part; andIt is not allowed to execute instructions from the second part.5.The apparatus of claim 1, wherein the plurality of registers can only be written by instructions executed from the first part.6.The apparatus of claim 1, wherein the controller is configured to allow data to be copied into the first part only by executing the instructions executed from the first part.7.The apparatus of claim 1, wherein the controller is configured to prevent external DRAMACT commands from being executed inside the first part in response to the register being loaded.8.The apparatus of claim 1, wherein the controller is configured to prevent internal PIM instruction commands from being executed outside the first part in response to the register being loaded.9.The apparatus of claim 1, wherein the controller is configured to allow execution of the instruction from the second part only in response to the instruction having no target in the first part.10.The apparatus of claim 1, wherein the controller is configured to prevent data from being copied from the first part to the second part in response to execution of instructions not stored in the first part.11.The apparatus of claim 1, wherein the controller is configured to prevent data from being copied from the second part to the first part in response to execution of instructions not stored in the first part.12.The device according to any one of claims 1 to 11, wherein the device is a memory-built processor-type PIM device, which includes:The array of memory cells; andThe controller;The array and the controller are on the same die.13.The device of claim 12, wherein a host is coupled to the PIM device and is one of the devices external to the device.14.The apparatus of any one of claims 1 to 11, wherein the plurality of registers are configured to each store an address associated with a row of the array.15.The apparatus of claim 14, wherein the portion of the array between the addresses stored in each of the plurality of registers is the first portion.16.The apparatus of claim 14, wherein the first portion of the array is modified in response to the address associated with each of the plurality of registers being modified.17.A device, including:An array of memory cells, including:The first part of the memory unit, which is not accessible by devices external to the device and is configured to store the first data set and instruction set;A second part of the memory unit, which is accessible by the device external to the device and configured to store a second data set; andA set of address registers, wherein the set of address registers indicates which part of the 
array is the inaccessible first part; andA controller configured to execute the first instruction set in the inaccessible first part.18.The apparatus of claim 17, wherein the accessible second portion includes execution by the controller without reading or writing data into the first inaccessible portion of the memory cell Another instruction set.19.The apparatus of claim 17, wherein the controller is configured to transfer data from the inaccessible first part to the accessible second part.20.The apparatus of any one of claims 17 to 19, wherein the controller is configured to recognize data written to the accessible second part as an operand and not recognize the data For instructions.21.The apparatus of any one of claims 17 to 19, wherein the controller is configured to recognize data written to the inaccessible first part as one of an operand and the instruction set.22.The apparatus of claim 21, wherein the controller is configured to execute the instruction set in the inaccessible first part.23.The apparatus of claim 20, wherein the accessible second portion of the memory unit is configured to store a second instruction set, wherein the second instruction set does not require reading or writing data to the memory unit It is executed in the case of the first inaccessible part.24.A method, including:Read the addresses in several registers;Identify the first part of the memory cell array indicated by the read address;Execute the instructions stored in the first part in the memory unit of the first part;Prevent access to the first part; andAllows access to the second part of the array.25.The method of claim 24, which includes modifying the addresses in the plurality of registers, which modifies which memory cells of the array make up the first portion.26.The method according to any one of claims 24 to 25, wherein the first address of the first one of the plurality of registers indicates the first unit row of the first part.27.The method of claim 26, wherein the second address of the second register of the plurality of registers indicates the last cell row of the first part.28.The method of claim 26, comprising:Decrypt the data when the data is initially read and written into the first part; andKeep the data unencrypted in the first part.29.A method according to any one of claims 24 to 25, which includes clearing data from the first portion in response to performing a system reset.30.The method of claim 29, comprising clearing the plurality of registers in response to performing the system reset. |
Memory array accessibility TECHNICAL FIELD The present disclosure relates generally to semiconductor memory devices and methods, and more particularly, to devices and methods related to memory array accessibility. BACKGROUND Memory devices are often provided as internal semiconductor integrated circuits in computers or other electronic systems. There are many different types of memory, including volatile and non-volatile memory. Volatile memory may require power to maintain its data (such as host data, error data, etc.), and includes random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), synchronous dynamic random access memory (SDRAM), and thyristor random access memory (TRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered, and can include NAND flash memory, NOR flash memory, and variable resistance memory, such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), such as spin torque transfer random access memory (STT RAM), among others. Electronic systems typically include several processing resources (such as one or more processors) that can retrieve and execute instructions and store the results of the executed instructions to a suitable location. The processor may include several functional units (referred to herein as functional unit circuits), such as arithmetic logic unit (ALU) circuits, floating point unit (FPU) circuits, and/or combinational logic blocks, that can execute instructions to perform logical operations (e.g., AND, OR, NOT, NAND, NOR, and XOR logical operations) on data (e.g., one or more operands). Several components in an electronic system may be involved in providing instructions to the functional unit circuits for execution. The instructions may be generated by processing resources such as a controller and/or a host processor, for example. Data (e.g., the operands on which the instructions will be executed to perform the logical operations) can be stored in a memory array that is accessible by the functional unit circuits. The instructions and/or data can be retrieved from the memory array and sequenced and/or buffered before the functional unit circuits begin executing instructions on the data. In addition, because different types of operations can be performed in one or more clock cycles via the functional unit circuits, the intermediate results and/or data of the operations can also be sequenced and/or buffered. In many cases, the processing resources (e.g., processors and/or associated functional unit circuits) may be external to the memory array, and data is accessed (e.g., via a bus between the processing resources and the memory array) to execute instructions. Data can be transferred from the memory array to the external processing resources via the bus. The data being transferred between the memory array and the external processing resources can be detected in transit (e.g., by "snooping" the pins of the bus). Some methods of providing data transmission security may include encrypting/decrypting the data; however, the encryption/decryption process may adversely affect system performance and/or increase circuit complexity, among other disadvantages. BRIEF DESCRIPTION FIG. 1 is a block diagram of an apparatus in the form of a computing system including a memory device according to several embodiments of the present disclosure. FIG. 
2 illustrates a schematic diagram of a portion of a memory array according to several embodiments of the present disclosure.3 illustrates a schematic diagram of a portion of a memory array according to several embodiments of the present disclosure.FIG. 4 illustrates a schematic diagram of a portion of a memory array according to several embodiments of the present disclosure.5 illustrates a schematic diagram of an example method of memory array accessibility according to several embodiments of the present disclosure. 6 is a schematic diagram illustrating a sensing circuit according to several embodiments of the present disclosure.7 is a schematic diagram illustrating a sensing circuit with selectable logic operation selection logic according to several embodiments of the present disclosure.FIG. 8 is a logic table illustrating the results of selectable logic operations implemented by a sensing circuit according to several embodiments of the present disclosure.9 illustrates a timing diagram associated with performing logic operations and shift operations using a sensing circuit according to several embodiments of the present disclosure.FIG. 10 illustrates a timing diagram associated with performing logic operations and shift operations using a sensing circuit according to several embodiments of the present disclosure.detailed descriptionExample devices include memory cell arrays. The array may include a first portion accessible to a controller of the array and not accessible to devices external to the equipment. The array may include a second portion accessible to devices external to the device. The array may include several registers that store row addresses that indicate which part of the array is the first part. The apparatus may include a controller configured to access the several registers to allow devices external to the apparatus to access the second portion based on the stored row address.According to various embodiments of the present disclosure, data stored in the memory array may be protected by preventing access to a specific part of the memory array and allowing access to another part of the memory array. For example, a device (such as an external processor) external to a memory built-in processor-type (PIM) device can be prevented from accessing a specific part (such as an inaccessible part), and an external device can be allowed to access another part of the memory array (For example, the accessible part). Several rows of registers may contain addresses that indicate the boundary of an inaccessible (eg, protected) portion of the memory array, and data corresponding to addresses outside the boundary may be accessed by external devices. In at least one embodiment, the number of rows of registers can be written by ISA instructions only. As used herein, a PIM device refers to a memory device capable of performing bit vector operations on bit vectors stored in an array without transferring data to a processing resource (eg, host processor) external to the PIM device.The first row of registers may store an address indicating the beginning of an inaccessible part, and the second row of registers may store an address indicating the end of an inaccessible part. The address stored in the row register can be modified to dynamically modify the inaccessible portion in response to more or less data being protected in the memory array. 
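For illustration only, the boundary test implied by the two row registers can be sketched in software as follows. This is a minimal model written by way of example: the type row_registers_t, the field boundary_defined, and the functions external_access_allowed() and resize_inaccessible_portion() are hypothetical names, and in the apparatus the check is performed by the controller and address decode circuitry rather than by software.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t first_protected_row;   /* address stored in the first row register  */
    uint32_t last_protected_row;    /* address stored in the second row register */
    bool     boundary_defined;      /* whether an inaccessible portion is currently defined (assumption) */
} row_registers_t;

/* A request from a device external to the apparatus is allowed only for rows
 * outside the boundary stored in the row registers. */
static bool external_access_allowed(const row_registers_t *regs, uint32_t row)
{
    if (!regs->boundary_defined)
        return true;
    return (row < regs->first_protected_row) || (row > regs->last_protected_row);
}

/* Resizing the inaccessible portion (e.g., via an ISA instruction) amounts to
 * rewriting the address held in the second row register. */
static void resize_inaccessible_portion(row_registers_t *regs, uint32_t new_last_row)
{
    regs->last_protected_row = new_last_row;
}

In this sketch, enlarging or shrinking the protected region is a single register update, which is consistent with the dynamic resizing described above; every row outside the stored boundary remains readable and writable by external devices.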
Because the protected data in the inaccessible portion is not transferred across the bus and/or the pins of the PIM device, the protected data cannot be accessed by external devices that attempt to read the data while it is being transferred in order to determine how to access the protected data, change passwords or access rights for the protected data, change instructions in the memory array, etc.

As used herein, a "bit vector operation" is intended to mean an operation performed on a bit vector that is associated with the virtual address space and/or the physical address space used by the PIM device. Examples of bit vector operations include logical operations (e.g., Boolean operations) and/or mathematical operations (e.g., addition, subtraction, multiplication, division, etc.), among others. FIGS. 6-10, described below, describe operation of the PIM device and additionally describe how operations can be performed without transferring data outside the PIM device. Maintaining the data within the memory device protects the data in the first portion of the memory array, because data and/or instructions in the memory array can be executed and operated on while the data is not transferred along a bus and/or across pins that are detectable by external devices (e.g., hackers, probing devices, etc.).

In some embodiments, a bit vector can be a number of bits that are physically stored contiguously in a row and/or in the sensing circuit of the PIM device. For example, a row of virtual address space in the PIM device may have a bit length of 16K bits (e.g., corresponding to 16K complementary pairs of memory cells in a DRAM configuration). As described herein, the sensing circuit 150 for such a 16K-bit row can include a corresponding 16K processing elements (e.g., computing components as described herein) formed on pitch with the sense lines selectably coupled to the corresponding memory cells in the 16K-bit row. As additionally described in conjunction with FIG. 6 and elsewhere herein, a computing component and its corresponding sense amplifier in the PIM device are operable as a one-bit processing element.

In terms of performing logical operations, several embodiments of the present disclosure can provide improved parallelism and/or reduced power consumption in comparison to previous systems having an external processor (e.g., a processing resource located external to the memory array, such as on a separate integrated circuit chip). For example, several embodiments can perform integer addition, subtraction, multiplication, and division, as well as content addressable memory (CAM) operations, without transferring data out of the memory array and sensing circuit via a bus (e.g., data bus, address bus, control bus). However, embodiments are not limited to these examples. For instance, the PIM device can perform a number of non-Boolean logic operations such as sense amplifier set, sense amplifier clear, copy, compare, destroy, etc.

In previous approaches, data may be transferred from the array and sensing circuit (e.g., via a bus comprising input/output (I/O) lines) to processing resources such as a processor, microprocessor, and/or compute engine, which may comprise ALU circuits and/or other functional unit circuits configured to perform the appropriate logical operations. However, transferring data from the memory array and sensing circuit to such processing resources can involve significant power consumption. 
Even though the processing resources are on the same chip as the memory array, significant power can be consumed in the process of moving data out of the array to the computing circuit, which can involve performing sensing lines (which can be referred to herein as digital lines or Data line) address access (e.g. firing column decode signals) to transfer data from the sense line to the I / O line (e.g. local I / O line), move the data to the periphery of the array, and provide the data to the circuit to Perform calculation functions. The ability of the PIM device to perform operations within the memory (eg, in conjunction with executing instructions) allows data to be moved across the array without being transferred across the bus and / or across pins to be processed. In this way, a "monolithic" architecture for internally protecting data in the memory array can be achieved.Furthermore, in some previous methods, circuits that process resources (eg, computing engines) may not obey the spacing rules associated with memory arrays. For example, a cell of a memory array may have a cell size of 4F2 or 6F2, where "F" is a characteristic size corresponding to the cell. Thus, devices (eg, logic gates) associated with the ALU circuit of the previous PIM system may not be able to be formed at the same pitch as the sensing lines of the array, which may affect, for example, chip size and / or memory density.For example, the sensing circuit 150 described herein may be formed at the same pitch as a pair of complementary sensing lines. For example, a pair of complementary memory cells may have a cell size with a 6F2 pitch (eg, 3F × 2F). If the pitch of a pair of complementary sense lines for complementary memory cells is 3F, then the sensing circuit spacing indicates the sensing circuit (for example, the sense amplifier and corresponding computing component of each corresponding pair of complementary sense lines) It is formed to fit within the 3F pitch of complementary sensing lines.In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form part of the present disclosure, and the figures show by way of illustration how one or more embodiments of the present disclosure can be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of the present disclosure, and it should be understood that other embodiments can be utilized and that processes, electrical and electrical processes can be performed without departing from the scope of the present disclosure Structural changes. As used herein, the designator "N" specifically relative to the element symbol in the drawing indicates that there may be several specific features so labeled. As used herein, "several" specific things may refer to one or more of such things (eg, several memory arrays may refer to one or more memory arrays).The figures herein follow the numbering rule, where the first number or digits correspond to the figure number, and the remaining digits identify elements or components in the drawing. Similar elements or components between different drawings can be identified by using similar numbers. For example, 206 may represent the element "06" in FIG. 2, and similar elements may be represented as 606 in FIG. As will be appreciated, elements shown in various embodiments herein can be added, exchanged, and / or removed, thereby providing several additional embodiments of the present disclosure. 
In addition, as will be understood, the scale and relative scale of the elements provided in the figures are intended to illustrate certain embodiments of the present invention and should not be understood in a limiting sense.FIG. 1 is a block diagram of an apparatus in the form of a computing system 100 including a memory device 120 according to several embodiments of the present disclosure. As used herein, the memory device 120, the memory array 130, the controller 140, and / or the sensing circuit 150 may also be individually considered as "devices."The system 100 includes a host 110 coupled (eg, connected) to a memory device 120 that includes a memory array 130. The memory device 120 may be a PIM device. The host 110 may be a host system such as a personal portable computer, a desktop computer, a digital camera, a smart phone, or a memory card reader, and various other types of hosts. The host 110 may include a system motherboard and / or a backplane, and may include several processing resources (eg, one or more processors, microprocessors, or some other type of control circuitry). System 100 may include a separate integrated circuit, or both host 110 and memory device 120 may be on the same integrated circuit. For example, the system 100 may be a server system and / or a high-performance computing (HPC) system and / or a part thereof. Although the example shown in FIG. 1 illustrates a system with a Von Neumann architecture, embodiments of the present disclosure can be implemented in a non-Von Neumann architecture (eg, Turing machine), so The non-Von Neumann architecture may not include one or more components (eg, CPU, ALU, etc.) that are generally associated with the von Neumann architecture.For clarity, the system 100 has been simplified to focus on features that are particularly relevant to this disclosure. The memory array 130 may be, for example, a DRAM array, SRAM array, STT RAM array, PCRAM array, TRAM array, RRAM array, NAND flash memory array, and / or NOR flash memory array. The array 130 may include memory cells arranged in rows coupled by access lines (which may be referred to herein as word lines or select lines) and arranged in columns coupled by sense lines. Although a single array 130 is shown in FIG. 1, the embodiment is not limited thereto. For example, the memory device 120 may include several arrays 130 (eg, array DRAM cells). The example DRAM array is described in conjunction with FIG. 6.The memory device 120 includes an address circuit 142 to latch address signals provided by the I / O circuit 144 via the I / O bus 156 (eg, data bus). Address signals to controller 140 may also be received (eg, via address circuit 142 and / or via bus 154). The row decoder 146 and the column decoder 152 receive and decode address signals to access the memory array 130. By sensing the voltage and / or current changes on the data line using the sensing circuit 150, data can be read from the memory array 130. The sensing circuit 150 can read and latch a page (eg, row) of data from the memory array 130. The I / O circuit 144 may be used for bidirectional data communication with the host 110 via the I / O bus 156. The write circuit 148 is used to write data to the memory array 130.The controller 140 decodes the signal provided from the host 110 through the control bus 154. These signals may include chip enable signals, write enable signals, and address latch signals to control operations performed on the memory array 130 (including data read, data write, and data erase operations). 
In various embodiments, the controller 140 is responsible for executing instructions from the host 110. The controller 140 can be a state machine, a sequencer, or some other type of control circuit, and can be implemented in hardware, firmware, and/or software. The controller 140 can also control a shift circuit, which can be implemented, for example, in the sensing circuit 150 according to various embodiments.

Examples of the sensing circuit 150 are described further below. For instance, in several embodiments, the sensing circuit 150 can include a number of sense amplifiers (e.g., a sense amplifier shown as 606 in FIG. 6 and as 706 in FIG. 7) and a number of computing components (e.g., computing components shown as 631 in FIG. 6 and/or as 731 in FIG. 7), which can be used to perform bit vector operations on data stored in the array 130. The sense amplifier can comprise, for example, a static latch, which may be referred to herein as a master latch. The computing component 631 can comprise, for example, dynamic and/or static latches, which may be referred to herein as secondary latches, and which can serve as, and be referred to as, an accumulator.

In several embodiments, the sensing circuit (e.g., 150) can be used to perform operations using data stored in the array 130 as inputs and to store the results of the logical operations back to the array 130 without transferring the data via a sense line address access (e.g., without firing a column decode signal). As such, various logical functions can be performed using, and within, the sensing circuit 150 rather than being performed by (or in association with) processing resources external to the sensing circuit, such as a processor and other processing circuits associated with the host 110 and/or ALU circuits located on the device 120 (e.g., on the controller 140 or elsewhere).

In various previous approaches, data associated with the operands would, for example, be read from memory via the sensing circuit and provided to an external ALU circuit via I/O lines (e.g., via local I/O lines and global I/O lines). The external ALU circuit could include a number of registers and would perform the logical operations using the operands, and the result would be transferred back to the array (e.g., 130) via the I/O lines. In contrast, in several embodiments of the present disclosure, the sensing circuit (e.g., 150) is configured to perform bit vector operations (e.g., logical operations) on data stored in memory (e.g., array 130) and to store the result back to the memory without enabling I/O lines (e.g., local I/O lines) coupled to the sensing circuit, which can be formed on pitch with the memory cells of the array. Enabling an I/O line can include enabling (e.g., turning on) a transistor having a gate coupled to a decode signal (e.g., a column decode signal) and a source/drain coupled to the I/O line. Embodiments are not so limited. 
For example, in several embodiments, the sensing circuit (e.g., 150) can be used to perform logical operations without enabling column decode lines of the array; however, the local I/O lines may be enabled in order to transfer a result to a suitable location other than back to the array (e.g., to an external register).

As such, in several embodiments, circuits external to the array 130 and the sensing circuit 150 (e.g., external registers associated with an ALU) are not needed to perform the logical functions, because the sensing circuit 150 can perform the appropriate logical operations without the use of external processing resources. In this manner, instructions can be executed within the array 130, on data stored in the array 130, without transferring the data out of the array and without exposing the data to possible detection, interception, hacking, etc. The data and instructions in the portion of the array that is not accessible to external devices can thereby be protected, while an additional portion of the array 130 that stores unprotected data can remain accessible.

In addition, the sensing circuit 150 may be used to complement and/or to replace, at least to some extent, such external processing resources (or at least the bandwidth consumption of such external processing resources). However, in several embodiments, the sensing circuit 150 may be used to perform logical operations (e.g., to execute instructions) in addition to logical operations performed by external processing resources (e.g., host 110). For instance, the host 110 and/or the sensing circuit 150 may be limited to performing only certain logical operations and/or a certain number of logical operations.

FIG. 2 illustrates a schematic diagram of a portion of a memory array 230 according to several embodiments of the present disclosure. The memory array 230 can be, for example, a DRAM array, SRAM array, STT RAM array, PCRAM array, TRAM array, RRAM array, NAND flash memory array, and/or NOR flash memory array. The array 230 can include memory cells arranged in rows coupled by access lines (which may be referred to herein as word lines or select lines) and in columns coupled by sense lines. Although a single array 230 is shown in FIG. 2, embodiments are not so limited. For example, a memory device (e.g., memory device 120 in FIG. 1) may include a number of arrays 130 (e.g., arrays of DRAM cells). An example DRAM array is described in conjunction with FIG. 7.

The memory array 230 can include a first portion 232 and a second portion 236. The first portion 232 can include a number of rows of memory cells, illustrated as row X 233 through row Y 235, that are not accessible to (e.g., are protected from) devices external to the memory device (e.g., memory device 120). As an example, a device external to the memory device cannot access data stored in the first portion 232, cannot execute instructions stored in the first portion 232, and cannot read data from and/or write data to the first portion 232. The first portion 232 can be accessed by, for example, a controller (e.g., controller 140) within the memory device (e.g., memory device 120). The first portion 232 can store data and/or instructions that are executable by the controller. 
Data and/or instructions originating from the first portion 232 can be prevented from being read while in a processor, such that the data and/or instructions remain protected even when being processed by a processor internal to the memory device.

An instruction can refer to an instruction set architecture (ISA) instruction, which can be a binary coded instruction accepted by the PIM device to perform various bit vector operations. An ISA refers to the interface, or boundary, between software and hardware. Generally, a program executable on a PIM device includes a set of ISA instructions and data (e.g., operands). An ISA can allow multiple implementations that vary in performance, physical size, and cost. Instructions stored in the first portion 232 can be executed using data stored in the first portion 232 and in the second portion 236 (e.g., using that data as operands). The instructions stored in the first portion 232 can be prevented from being executed in additional locations, such as the second portion 236 or locations external to the memory device. Execution, in the first portion 232, of instructions originating outside the first portion 232 (e.g., DRAM activate (ACT) instructions) can be prevented. Execution of instructions internal to the PIM device (e.g., PIM DRAM activate (ACT) instructions) outside the first portion 232 can also be prevented. In at least one embodiment, the instructions are executable to store a data result in the second portion 236. Data stored in the second portion 236 by a device external to the PIM device can be stored into the first portion 232 only through execution of the instructions in the first portion 232.

The second portion 236 can include a number of rows, namely row Y+1 through row Y+N. The second portion 236 can store data that is accessible both to the memory device (e.g., memory device 120) and to devices external to the memory device. In order to transfer data stored in the first portion 232 to an external device, the data can be transferred to the second portion 236 and from the second portion 236 to the external device. In order to use a data set from an external device, the data set can be transferred into the second portion 236 and the instructions stored in the first portion 232 can be executed using that data set.

In at least some embodiments, the second portion 236 can be prevented from storing instructions. In this way, instructions can be limited to being executed from the first portion 232 and not from the second portion 236. Data stored in the second portion 236 can be recognized only as data (e.g., operands) and not as instructions. If an external device has placed incorrect or erroneous data into the second portion 236, executing an instruction from the first portion 232 may produce an error or an incorrect result, but access to, or modification of, the instructions in the first portion 232 remains disallowed. Since the data in the second portion 236 is accessible, storing only data, and not instructions, in the second portion 236 prevents an external device from storing and executing instructions in the second portion 236.

The memory array 230 can include registers 238, 239 that store addresses associated with the first portion 232 and that provide access control for the first portion 232. The registers 238, 239 can be located in the periphery of the array 230 of memory cells (e.g., where operand data is not stored in the cells). The first register 238 stores the row address of the first cell row that stores data in the first portion 232. The second register 239 stores the row address of the last cell row that stores data in the first portion 232. In this way, the row addresses stored in the first register 238 and the second register 239 define the boundary of the first portion 232 that is inaccessible to external devices. The registers 238, 239 can be accessed only by executing instructions (e.g., ISA instructions) stored in the first portion 232. The registers 238, 239 are written only during a particular stage, described in connection with FIG. 5, when the registers 238, 239 are initially loaded. Once initially loaded, the registers 238, 239 cannot be written again until a particular process (e.g., a reset) clears the registers.

In at least one embodiment, the second portion 236 can store instructions that are executed by the controller without the execution reading data from, or writing data to, the first portion 232. In this way, the data and/or instructions stored in the first portion 232 are protected from modification by instructions stored and executed in the second portion 236.

In response to a reset (e.g., a soft reset or a hard reset) of the memory device, the first portion 232 can be cleared (e.g., reset). In this way, the data in the first portion 232 is cleared together with the row registers 238, 239 that define the boundary of the first portion 232 and that are also cleared in response to the reset. Otherwise, when the row registers 238, 239 are reset, the boundary protecting the data in the first portion 232 would be removed and the data in the first portion 232 would become accessible to external devices, and therefore unprotected. By clearing the data of the first portion 232, the data previously stored in the first portion 232 remains protected immediately after the memory device is reset.

A memory device including the memory array 230 (e.g., memory device 120 in FIG. 1) can include a fuse that, by default, disables the row registers 238, 239 so that the row registers 238, 239 are not used to define the first portion 232. Once the fuse is deactivated, the row registers 238, 239 can be enabled to define the first portion 232.

FIG. 3 illustrates a schematic diagram of a portion of a memory array 330 according to several embodiments of the present disclosure. The memory array 330 can include a first portion 332 and a second portion 336. The first portion 332 can include a number of rows of memory cells, illustrated as row X (333) through row Y (335), that are not accessible to (e.g., are protected from) devices external to the memory device (e.g., memory device 120). The first portion 332 can be accessed by, for example, a controller (e.g., controller 140) within the memory device (e.g., memory device 120). The first portion 332 can store data and/or instructions executable by the controller. The second portion 336 can include a number of rows, namely row Y+1 through row N. The second portion 336 can store data accessible to the memory device (e.g., memory device 120) and to devices external to the memory device.

The memory array 330 can include registers 338, 339 that store addresses associated with the first portion 332 and that provide access control for the first portion 332. The registers 338, 339 can be located in the periphery of the array 330 of memory cells (e.g., where operand data is not stored in the cells). The first register 338 stores the row address of the first cell row that stores data in the first portion 332. The second register 339 stores the row address of the last cell row that stores data in the first portion 332. 
In this way, the row addresses stored in the first register 338 and the second register 339 define the boundary of the first portion 332 that is inaccessible to external devices.

The row addresses stored in the first register 338 and the second register 339 can be modified in order to change which cells of the array 330 are in the first portion 332 and, in turn, which cells are not accessible to external devices. For example, the address stored in the second row register 339 can be modified from being associated with row Y (235) in FIG. 2 to being associated with row 337-1 in order to add an additional row of cells to the first portion 332. A further modification can include modifying the address stored in the second row register 339 to be associated with row 337-2 to add another cell row, and a further modification can include modifying the address to be associated with row 337-3. In this way, the number of cell rows can be reduced to the span between row X 333 and row 337-1, increased to the span between row X and row 337-2, and increased further to the span between row X and row 337-3. The modification can be based on the amount of protected data in the first portion 332. The row registers 338, 339 can be writable by ISA instructions, and in particular by ISA instructions executed from the first portion 332. By modifying the size of the first portion 332, the memory array 330 can dynamically protect an amount of data based on the size of that data rather than being limited by a fixed size of the first portion 332.

FIG. 4 illustrates a schematic diagram of a portion of a memory system 404 according to several embodiments of the present disclosure. The memory system 404 can include a memory device (e.g., a PIM device) 420 and encrypted user data 449 stored on a hard drive. The memory device 420 can include a memory array 430, a decryption key 441, and a hardware decryption engine 443. The memory array 430 can include a first portion (e.g., an inaccessible portion) 432, a second portion (e.g., an accessible portion) 436, and row registers 438, 439. The decryption key 441 can be used to trigger a fuse in the memory device 420 that indicates that the row registers 438, 439 are to be used to define the rows of the first portion 432. The row registers 438, 439 can initially indicate that the first portion 432 is allowed to be accessed while data is initially loaded into the first portion 432. In this way, the data to be protected is loaded into the first portion 432. The encrypted user data 449 can be decrypted and loaded into the first portion 432 and protected by the restricted access. Once the data to be protected is loaded into the first portion 432, the row registers 438, 439 are reset such that data is no longer allowed to be written to the first portion 432, and the data therefore remains protected.

FIG. 5 illustrates a schematic diagram of an example method 505 of memory array accessibility according to several embodiments of the present disclosure. The method 505 can include, at 551, clearing the memory array (e.g., memory arrays 230, 330, and 430 in FIGS. 2-4, respectively) and the registers (e.g., registers 238, 239, 338, 339, 438, and 439 in FIGS. 2-4, respectively). In at least one embodiment, both the first portion (e.g., first portion 232, 332, 432) and the second portion (e.g., second portion 236, 336, 436) can be cleared. In at least one embodiment, the first portion can be cleared while the second portion is not cleared.

The method 505 can include, at 553, loading the RAC registers (e.g., registers 238, 239, 338, 339, 438, and 439 in FIGS. 2-4, respectively) from an external device to the memory array (e.g., memory arrays 130, 230, 330, 430 in FIGS. 1-4). The data loaded into the registers can indicate the section of the memory array that constitutes the first portion of the memory array. The data can include a row address indicating the initial row of the first portion and a row address indicating the last row of the first portion. The data can indicate the boundary of rows that defines the first portion of the memory array.

The method 505 can include, at 555, decrypting writes into the first portion. Writing is allowed at this stage by reversing the normal process that prevents writing to or reading from the first portion, because this is the point at which the data to be protected is written into the protected location. The writes can be data from an external device and/or from a controller of the memory device (e.g., memory device 120 in FIG. 1). The written data can be data designated for protection by the memory array. The data is written into the first portion while the first portion of the memory array has restricted access (e.g., while external devices cannot otherwise access the first portion).

The method 505 can include, at 557, preventing data from being written to or read from the first portion. In response to the initial data being written to the first portion, the registers indicating which rows make up the first portion indicate that data is no longer to be written to or read from the first portion. This reversal, from allowing the initial write of external data to disallowing any write or read, creates the restricted access and thereby protects the data written to the first portion.

The method 505 can include, at 559, performing normal protected operation. Protected operation can include executing instructions stored in the first portion on data (e.g., operands) stored in either of the first and second portions. Protected operation can include preventing an external device from accessing the first portion while instructions in the first portion are executed. Protected operation can include preventing instructions stored outside the first portion from being executed in the first portion, and preventing instructions from the first portion from being executed outside the first portion. In this way, the data stored in the first portion is prevented from being snooped or detected by hackers. The instructions and/or data stored in the first portion can be prevented from being read, and no instructions and/or data can be written to the first portion. In order to access data in the first portion, the data can first be written to the second portion and then read out from the second portion.

FIG. 6 is a schematic diagram illustrating a sensing circuit according to several embodiments of the present disclosure. A memory cell comprises a storage element (e.g., a capacitor) and an access device (e.g., a transistor). For example, transistor 602-1 and capacitor 603-1 comprise a memory cell, transistor 602-2 and capacitor 603-2 comprise another memory cell, and so on. In this example, the memory array 630 is a DRAM array of one-transistor one-capacitor (1T1C) memory cells. 
In several embodiments, the memory cells may be destructive read memory cells (e.g., reading the data stored in a cell destroys the data, such that the data originally stored in the cell is refreshed after being read).

The cells of the memory array 630 can be arranged in rows coupled by word lines 604-X (row X), 604-Y (row Y), etc., and in columns coupled by pairs of complementary sense lines (e.g., data lines DIGIT(n)/DIGIT(n)_). The individual sense lines corresponding to each pair of complementary sense lines can also be referred to as data lines 605-1 (D) and 605-2 (D_), respectively. Although only one pair of complementary data lines (e.g., one column) is shown in FIG. 6, embodiments of the present disclosure are not so limited, and the memory cell array can include additional columns of memory cells and/or data lines (e.g., 4096, 8192, 16384, etc.).

Memory cells can be coupled to different data lines and/or word lines. For example, a first source/drain region of transistor 602-1 can be coupled to data line 605-1 (D), a second source/drain region of transistor 602-1 can be coupled to capacitor 603-1, and a gate of transistor 602-1 can be coupled to word line 604-Y. A first source/drain region of transistor 602-2 can be coupled to data line 605-2 (D_), a second source/drain region of transistor 602-2 can be coupled to capacitor 603-2, and a gate of transistor 602-2 can be coupled to word line 604-X. The cell plate, as shown in FIG. 6, can be coupled to each of capacitors 603-1 and 603-2. The cell plate can be a common node to which a reference voltage (e.g., ground) can be applied in various memory array configurations.

According to several embodiments of the present disclosure, the memory array 630 is coupled to the sensing circuit 650. In this example, the sensing circuit 650 includes a sense amplifier 606 and a computing component 631 corresponding to a respective column of memory cells (e.g., coupled to a respective pair of complementary data lines). The sensing circuit 650 can correspond, for example, to the sensing circuit 150 shown in FIG. 1. The sense amplifier 606 can be coupled to the pair of complementary sense lines 605-1 and 605-2. The computing component 631 can be coupled to the sense amplifier 606 via pass gates 607-1 and 607-2. The gates of pass gates 607-1 and 607-2 can be coupled to logic operation selection logic 613.

The logic operation selection logic 613 can be configured to include pass gate logic for controlling pass gates that couple the pair of complementary sense lines 605-1 and 605-2 un-transposed between the sense amplifier 606 and the computing component 631 (as shown in FIG. 6), and/or swap gate logic for controlling swap gates that couple the pair of complementary sense lines transposed between the sense amplifier 606 and the computing component 631 (e.g., as discussed later with respect to FIG. 7). The logic operation selection logic 613 can also be coupled to the pair of complementary sense lines 605-1 and 605-2. 
The logic operation selection logic 613 can be configured to control the pass gates 607-1 and 607-2 based on the selected logic operation (e.g., to control whether the pass gates 607-1 and 607-2 are in a conducting or a non-conducting state), as described in detail below for various configurations of the logic operation selection logic 613.

The sense amplifier 606 can be operated to determine a data value (e.g., logic state) stored in a selected memory cell. The sense amplifier 606 can comprise a cross-coupled latch, which may be referred to herein as a master latch. In the example illustrated in FIG. 6, the circuit corresponding to the sense amplifier 606 comprises a latch 615 including four transistors coupled to the pair of complementary data lines 605-1 and 605-2. However, embodiments are not limited to this example. The latch 615 can be a cross-coupled latch, e.g., the gates of one pair of transistors, such as n-channel transistors (e.g., NMOS transistors) 627-1 and 627-2, are cross coupled with the gates of another pair of transistors, such as p-channel transistors (e.g., PMOS transistors) 629-1 and 629-2.

In operation, when a memory cell is being sensed (e.g., read), the voltage on one of the data lines 605-1 (D) or 605-2 (D_) will be slightly greater than the voltage on the other of the data lines 605-1 (D) or 605-2 (D_). The ACT signal can be driven high and the RNL* signal can be driven low to enable (e.g., energize) the sense amplifier 606. The data line 605-1 (D) or 605-2 (D_) having the lower voltage will turn on one of the PMOS transistors 629-1 or 629-2 to a greater extent than the other of the PMOS transistors 629-1 or 629-2, thereby driving the data line 605-1 (D) or 605-2 (D_) having the higher voltage high to a greater extent than the other data line 605-1 (D) or 605-2 (D_) is driven high.

Similarly, the data line 605-1 (D) or 605-2 (D_) having the higher voltage will turn on one of the NMOS transistors 627-1 or 627-2 to a greater extent than the other of the NMOS transistors 627-1 or 627-2, thereby driving the data line 605-1 (D) or 605-2 (D_) having the lower voltage low to a greater extent than the other data line 605-1 (D) or 605-2 (D_) is driven low. As a result, after a short delay, the data line 605-1 (D) or 605-2 (D_) having the slightly greater voltage is driven to the voltage of the supply voltage VDD (e.g., through a source transistor (not shown)), and the other data line 605-1 (D) or 605-2 (D_) is driven to the voltage of the reference voltage (e.g., to ground (GND) through a sink transistor (not shown)). Therefore, the cross-coupled NMOS transistors 627-1 and 627-2 and the cross-coupled PMOS transistors 629-1 and 629-2 serve as a sense amplifier pair that amplifies the differential voltage on the data lines 605-1 (D) and 605-2 (D_) and latches the data value sensed from the selected memory cell.

Embodiments are not limited to the sense amplifier 606 configuration illustrated in FIG. 6. As an example, the sense amplifier 606 can be a current-mode sense amplifier and/or a single-ended sense amplifier (e.g., a sense amplifier coupled to one data line). Moreover, embodiments of the present disclosure are not limited to a folded data line architecture such as that shown in FIG. 6.

The sense amplifier 606 can, in conjunction with the computing component 631, be operated to perform various logical operations using data from the array as input. 
In several embodiments, the results of logical operations can be stored back to the array without the need to transfer data via data line address access (eg, without the need to activate column decode signals to enable data to be transferred to the array and sense via local I / O lines Circuit outside the circuit). Thus, compared to various previous methods, several embodiments of the present disclosure may enable less power to be used to perform the logic operations associated therewith. In addition, since several embodiments can eliminate the need to transfer data across I / O lines to perform logical functions (eg, between memory and discrete processors), several embodiments can achieve increased Parallel processing power.The sense amplifier 606 may additionally include a balancing circuit 614 that may be configured to balance the data lines 605-1 (D) and 605-2 (D_). In this example, the balance circuit 614 includes a transistor 624 coupled between the data lines 605-1 (D) and 605-2 (D_). Balanced circuit 614 also includes transistors 625-1 and 625-2 each having a first source / drain region coupled to a balanced voltage (eg, VDD / 2), where VDD is the supply voltage associated with the array. The second source / drain region of the transistor 625-1 may be coupled to the data line 605-1 (D), and the second source / drain region of the transistor 625-1 may be coupled to the data line 605-2 (D_) . The gates of transistors 624, 625-1, and 625-2 may be coupled together, and to the balance (EQ) control signal line 626. Thus, activating EQ will enable transistors 624, 625-1, and 625-2, which effectively shorts data lines 605-1 (D) and 605-2 (D_) together and to a balanced voltage (eg, VDD /2).Although FIG. 6 shows that the sense amplifier 606 includes the balance circuit 614, the embodiment is not so limited, and the balance circuit 614 may be implemented separately from the sense amplifier 606, in a configuration different from that shown in FIG. 6, or Not implemented at all.As described further below, in several embodiments, the sensing circuit (eg, sense amplifier 606 and computing component 631) is operable to perform the selected logical operation and initially store the result in the sense amplifier 606 or computing component One of 631 without the need to transfer data from the sensing circuit via the I / O line (eg, without performing data line address access via activation of, for example, a column decode signal).As shown in FIG. 6, the computing component 631 may also include a latch 664, which may be referred to herein as a secondary latch. The secondary latch 664 may be configured and operated in a manner similar to that described above with respect to the primary latch 615, except that the pair of cross-coupled p-channel transistors including the secondary latch (eg, PMOS transistors) can have their respective sources coupled to the supply voltage (eg, VDD), and the pair of cross-coupled n-channel transistors (eg, NMOS transistors) of the secondary latch can have their respective sources selectively Coupling to a reference voltage (eg, ground) to enable the secondary latch continuously. The configuration of the computing component is not limited to the configuration shown at 631 in FIG. 6 and various other embodiments are further described below.7 is a schematic diagram illustrating a sensing circuit with selectable logic operation selection logic according to several embodiments of the present disclosure. 
7 shows several sense amplifiers 706 coupled to corresponding pairs of complementary sense lines 705-1 and 705-2, and the corresponding number of calculations coupled to sense amplifiers 706 via pass gates 707-1 and 707-2 Component 731. The gates of pass gates 707-1 and 707-2 may be controlled by logic operation selection logic signal PASS. For example, the output of logic operation selection logic 713-6 may be coupled to the gates of pass gates 707-1 and 707-2.According to the embodiment illustrated in FIG. 7, the calculation component 731 may include a corresponding stage (for example, a shift unit) of a loadable shift register configured to shift the data value left and right. According to some embodiments, the computing component 731 may have bidirectional shift capability. According to various embodiments of the present disclosure, the computing component 731 may include a loadable shift register (eg, where each computing component 731 acts as a corresponding shift) configured to shift in multiple directions (eg, left and right) Bit level). According to various embodiments of the present disclosure, the computing component 731 may include a corresponding stage (eg, shift unit) of a loadable shift register configured to shift in one direction. The loadable shift register may be coupled to the pair of complementary sense lines 705-1 and 705-2, where the node ST2 of each stage is coupled to the sense line (eg, DIGIT (n)) transmitting true data values and The node SF2 of each stage is coupled to a sensing line (eg, DIGIT (n) _) that transmits complementary (eg, false) data values.According to some embodiments and as illustrated in FIG. 7, each computing component 731 (eg, stage) of the shift register includes a pair of right shift transistors 781 and 786, a pair of left shift transistors 789 and 790, and a pair Inverters 787 and 788. The signals PHASE 1R, PHASE 2R, PHASE 1L, and PHASE 2L can be applied to the corresponding control lines 782, 783, 791, and 792 to enable / in conjunction with performing logical operations and / or shifting data according to the embodiments described herein / The feedback on the latch of the corresponding computing component 731 is deactivated. Examples of shifting data (eg, from a specific computing component 731 to an adjacent computing component 731) are further described below with respect to FIGS. 9 and 10.The computing component 731 (eg, stage) that can load the shift register may include a first right shift transistor 781 having a gate coupled to the first right shift control line 780 (eg, "PHASE 1R"), and having a coupling The second right shift transistor 786 to the gate of the second right shift control line 782 (eg, "PHASE 2R"). The node ST2 of each stage of the loadable shift register is coupled to the input of the first inverter 787. The output of the first inverter 787 (for example, the node SF1) is coupled to one source / drain of the second right shift transistor 786, and the other source / drain of the second right shift transistor 786 is coupled to the first The input of the two inverter 788 (eg, node SF2). The output of the second inverter 788 (for example, the node ST1) is coupled to one source / drain of the first right shift transistor 781, and the other source / drain of the first right shift transistor 781 is coupled to The input of the second inverter of the adjacent computing component 731 (for example, the node SF2). The latch transistor 785 has a gate coupled to the LATCH control signal 784. 
One source / drain of the latch transistor 785 is coupled to the node ST2, and the other source / drain of the latch transistor 785 is coupled to the node ST1.The sense amplifier 706 may be coupled to the corresponding pair of complementary sense lines 705-1 and 705-2, and the corresponding computing component 731 is coupled to the sense amplifier 706 via respective pass gates 707-1 and 707-2. The gates of the pass gates 707-1 and 707-2 may be controlled by corresponding logic operation selection logic signals "Passd" and "Passdb" that can be output from logic operation selection logic (not shown for clarity).The first left shift transistor 789 is coupled from the node SF2 of one loadable shift register to the node SF1 of the loadable shift register corresponding to the adjacent computing component 731. The channel of the second left shift transistor 790 is coupled from the node ST2 to the node ST1. The gate of the first left shift transistor 789 is coupled to the first left shift control line 791 (for example, "PHASE1L"), and the gate of the second left shift transistor 790 is coupled to the second left shift control line 792 ( For example, "PHASE 2L").The logic operation selection logic 713-6 includes a swap gate 742, and logic to control the pass gates 707-1 and 707-2 and the swap gate 742. The logic operation selection logic 713-6 includes four logic selection transistors: a logic selection transistor 762 coupled between the gate of the switching transistor 742 and the TF signal control line, and a gate coupled to the gates of the pass gates 707-1 and 707-2 A logic selection transistor 752 between the TT signal control line, a logic selection transistor 754 coupled between the gates of the pass gates 707-1 and 707-2 and the FT signal control line, and a gate and FF coupled to the transposition transistor 742 The logic select transistor 764 between the signal control lines. The gates of the logic selection transistors 762 and 752 are coupled to the true sense line through the isolation transistor 750-1 (with the gate coupled to the ISO signal control line). The gates of logic select transistors 764 and 754 are coupled to complementary sense lines through isolation transistor 750-2 (which also has a gate coupled to the ISO signal control line). 10 and 11 illustrate timing diagrams associated with performing logic operations and shift operations using the sensing circuit shown in FIG. 7.The data value on the corresponding pair of complementary sensing lines 705-1 and 705-2 can be loaded into the corresponding computing component 731 by causing the pass gates 707-1 and 707-2 to turn on, for example, causing the Passd control signal to go high, for example , You can load the shift register). A gate that is controlled to have continuity (eg, electrical continuity through a channel) will be turned on, and may be referred to herein as OPEN. A gate that is controlled to have no continuity (eg, electrical continuity across a channel) is said to be non-conducting, and may be referred to herein as CLOSED. For example, continuity refers to a low resistance condition where the gate is on. The corresponding computing component 731 may be over-powered through the sense amplifier 706 (eg, to overwrite existing data values in the computing component 731) and / or by turning off the PHASE 1R and PHASE 2R control signals 780 and 782 and the LATCH control signal 784 To load the data value into the corresponding calculation component 731. 
The first latch (eg, sense amplifier) may be configured to be the second latch when the current provided by the first latch and presented to the second latch is sufficient to flip the second latch (For example, computing components) Excessive power supply.The sense amplifier 706 may be configured to drive the voltage on the pair of complementary sense lines 705-1 and 705-2 to the maximum power supply voltage corresponding to the data value (eg, the pair of complementary sense lines 705- 1 and 705-2 are driven to the track) to over-power the computing component 731, which can change the data value stored in the computing component 731. According to various embodiments, the computing component 731 may be configured to transmit data values to the pair of complementary sensing lines 705-1 and 705-2 without transferring the pair of complementary sensing lines 705-1 and 705-2 The voltage is driven to the track (for example, to VDD or GND). Thus, the computing component 731 may be configured to not over-power the sense amplifier 706 (eg, before enabling the sense amplifier, the data values from the computing component 731 on the pair of complementary sense lines 705-1 and 705-2 The data value stored in the sense amplifier 706 will not be changed).Once the data value is loaded into the calculation component 731 of the loadable shift register, the true data value is spaced apart from the supplementary data value by the first inverter 787. The data value can be shifted to the right (for example, to the adjacent calculation component 731) by the alternate operation of the first right shift transistor 781 and the second right shift transistor 786, which can be used as the first right shift control line 780 The second right shift control line 782 has periodic signals that become out of phase with each other (for example, making alternating square waves 180 degrees out of phase with each other). The LATCH control signal 784 can be activated to cause the latch transistor 785 to turn on, thereby latching the data value into the corresponding computing component 731 of the loadable shift register (eg, when the signal PHASE 1R remains low and PHASE 2R remains high to (When the data value latched in the calculation component 731 is maintained).8 is a logic table illustrating selectable logic operation results implemented by a sensing circuit (eg, the sensing circuits shown in FIGS. 6 and 7) according to several embodiments of the present disclosure. Four logic selection control signals (for example, TF, TT, FT, and FF) combined with specific data values present on complementary sense lines can be used to select the ones involving the initial data values stored in sense amplifier 606 and computing component 631 Implemented in one of multiple logical operations. Four control signals (for example, TF, TT, FT, and FF) combine specific data values present on complementary sensing lines (for example, nodes S and S *) to control pass gates 707-1 and 707-2 and the switching transistor 742, in turn affecting the data value in the computing component 731 and / or the sense amplifier 706 before / after excitation. The ability to selectively control the swap transistor 742 facilitates the implementation of logical operations involving inverted data values (eg, inverted operands and / or inverted results), and so on.The logic table 8-1 illustrated in FIG. 
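By way of illustration, the row-wide loadable shift register formed by the computing components can be modeled in software as follows. The model captures only the logical effect of a full pair of PHASE clock cycles (each latched bit moves to the adjacent stage); the constant NUM_STAGES, the type shift_row_t, the function names, and the assumption that a 0 is shifted in at the vacated end are illustrative choices, not details taken from the disclosure.

#include <string.h>
#include <stdint.h>

#define NUM_STAGES 16384              /* one stage per complementary sense line pair (example value) */

typedef struct {
    uint8_t stage[NUM_STAGES];        /* value latched at node ST2 of each stage */
} shift_row_t;

/* One full PHASE 1R / PHASE 2R cycle: every stage passes its bit to the
 * stage on its right; the leftmost stage is shown here as shifting in 0. */
static void shift_right(shift_row_t *r)
{
    memmove(&r->stage[1], &r->stage[0], NUM_STAGES - 1);
    r->stage[0] = 0;
}

/* One full PHASE 1L / PHASE 2L cycle, shifting every bit one stage left. */
static void shift_left(shift_row_t *r)
{
    memmove(&r->stage[0], &r->stage[1], NUM_STAGES - 1);
    r->stage[NUM_STAGES - 1] = 0;
}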
8 shows the starting data value stored in the calculation component 631 shown in column A at 844 and stored in the sense amplifier shown in column B at 845 The starting data value in 606. The other three column headers in the logic table 8-1 refer to the states of the pass gates 607-1 and 607-2 and the swap transistor 742. When the ISO control signal is asserted, the pass gates 607-1 and 607-2 and The transposition transistor 742 may depend on the state of the four logic selection control signals (eg, TF, TT, FT, and FF) and combined with specific data values present on the pair of complementary sensing lines 605-1 and 605-2. It is controlled to open or close. The “not open” column corresponds to the pass gates 607-1 and 607-2 and the switching transistor 742 are both in the non-conducting condition, and the “positive open” column corresponds to the pass gates 607-1 and 607-2 are in the conducting condition, And the “inverted on” column corresponds to that the switching transistor 742 is in a conducting condition. The logic table 8-1 does not reflect the configuration corresponding to both the pass gates 607-1 and 607-2 and the switching transistor 742 being in the on condition, because this causes the sense lines to be shorted together.Via selective control of the pass gates 707-1 and 707-2 and the swap transistor 742, each of the three columns of the upper part of the logic table 8-1 can be compared with the three columns of the lower part of the logic table 8-1 Each of them is combined to provide nine (eg, 3 × 3) different result combinations, corresponding to nine different logical operations, as indicated by the various connection paths shown at 875. The nine different selectable logic operations that can be implemented by the sensing circuit 650 are summarized in the logic table 8-2.The column of the logic table 8-2 shows the header 880 that contains the status of the logic selection control signal (eg, FF, FT, TF, TT). For example, the state of the first logic selection control signal (eg, FF) is provided in row 876, the state of the second logic selection control signal (eg, FT) is provided in row 877, and the third logic is provided in row 878 The state of the control signal (eg, TF) is selected, and a fourth logic is provided in row 879 to select the state of the control signal (eg, TT). The specific logical operations corresponding to the results are summarized in line 847.9 illustrates a timing diagram associated with performing a logical AND operation and a shift operation using a sensing circuit according to several embodiments of the present disclosure. 9 contains waveforms corresponding to the signals EQ, ROW X, ROW Y, SENSE AMP, TF, TT, FT, FF, PHASE1R, PHASE 2R, PHASE 1L, PHASE 2L, ISO, Pass, Pass *, DIGIT, and DIGIT_. The EQ signal corresponds to the balanced signal associated with the sense amplifier (eg, EQ 626 shown in FIG. 6). The ROW X and ROW Y signals correspond to signals applied to corresponding access lines (eg, access lines 604-X and 604-Y shown in FIG. 6) to access the selected cell (or cell row). The SENSE AMP signal corresponds to a signal used to enable / disable a sense amplifier (eg, sense amplifier 706). The TF, TT, FT, and FF signals correspond to, for example, the logic selection control signals shown in FIG. 7 (for example, signals coupled to the logic selection transistors 762, 752, 754, and 764). 
The PHASE 1R, PHASE 2R, PHASE 1L, and PHASE 2L signals correspond to the control signals (for example, clock signals) supplied to the respective control lines 782, 783, 791, and 792 shown in FIG. 7. The ISO signal corresponds to the signal coupled to the gates of the isolation transistors 750-1 and 750-2 shown in FIG. 7. The PASS signal corresponds to the signal coupled to the gates of the pass transistors 707-1 and 707-2 shown in FIG. 7, and the PASS* signal corresponds to the signal coupled to the gate of the swap transistor 742. The DIGIT and DIGIT_ signals correspond to the signals present on the respective sense lines 705-1 (e.g., DIGIT(n)) and 705-2 (e.g., DIGIT(n)_). The timing diagram shown in FIG. 9 is associated with performing a logical AND operation on a data value stored in a first memory cell and a data value stored in a second memory cell of the array. The memory cells may correspond to a particular column of the array (e.g., a column comprising a complementary pair of sense lines) and may be coupled to respective access lines (e.g., row X and row Y). In describing the logical AND operation shown in FIG. 9, reference is made to the sensing circuit described in FIG. 7. For example, the logical operation described in FIG. 9 can include storing the data value of the row X memory cell (e.g., the "row X data value") in the latch corresponding to the compute component 731 (e.g., as the "A" data value), storing the data value of the row Y memory cell (e.g., the "row Y data value") in the latch corresponding to the sense amplifier 706 (e.g., as the "B" data value), and performing the selected logical operation (e.g., a logical AND operation in this example) on the row X data value and the row Y data value, with the result of the selected logical operation being stored in the latch of the compute component 731; in this context the compute component 731 may be referred to as an accumulator 731. As shown in FIG. 9, at time T1, equilibration of the sense amplifier 706 is disabled (e.g., EQ goes low). At time T2, ROW X goes high to access (e.g., select) the row X memory cell. At time T3, the sense amplifier 706 is enabled (e.g., SENSE AMP goes high), which drives the complementary sense lines 705-1 and 705-2 to the appropriate rail voltages (e.g., VDD and GND) responsive to the row X data value (e.g., as shown by the DIGIT and DIGIT_ signals), and the row X data value is latched in the sense amplifier 706. At time T4, the PHASE 2R and PHASE 2L signals go low, which disables feedback on the latch of the compute component 731 (e.g., by turning off transistors 786 and 790, respectively), such that the value stored in the compute component can be overwritten during the logical operation. Also at time T4, ISO goes low, which disables the isolation transistors 750-1 and 750-2. At time T5, TT and FT are enabled (e.g., go high), which causes PASS to go high (e.g., because either transistor 752 or transistor 754 is turned on, depending on which of node ST2 (corresponding to node "S" in FIG. 6) or node SF2 (corresponding to node "S*" in FIG. 6) was high when ISO was disabled at time T4; recall that when ISO is disabled, the voltages of nodes ST2 and SF2 reside dynamically on the gates of the respective enable transistors 752 and 754). PASS going high enables the pass transistors 707-1 and 707-2, such that the DIGIT and DIGIT_ signals, which correspond to the row X data value, are provided to the respective compute component nodes ST2 and SF2. 
At time T6, TT and FT are disabled, which causes PASS to go low, thereby disabling the pass transistors 707-1 and 707-2. It is noted that PASS* remains low between times T5 and T6 because the TF and FF signals remain low. At time T7, ROW X is disabled, and PHASE 2R, PHASE 2L, and ISO are enabled. Enabling PHASE 2R and PHASE 2L at time T7 enables feedback on the latch of the compute component 731 such that the row X data value is latched therein. Enabling ISO at time T7 again couples nodes ST2 and SF2 to the gates of the enable transistors 752, 754, 762, and 764. At time T8, equilibration is enabled (e.g., EQ goes high such that DIGIT and DIGIT_ are driven to an equilibration voltage, such as VDD/2) and the sense amplifier 706 is disabled (e.g., SENSE AMP goes low). With the row X data value latched in the compute component 731, equilibration is disabled (e.g., EQ goes low at time T9). At time T10, ROW Y goes high to access (e.g., select) the row Y memory cell. At time T11, the sense amplifier 706 is enabled (e.g., SENSE AMP goes high), which drives the complementary sense lines 705-1 and 705-2 to the appropriate rail voltages (e.g., VDD and GND) responsive to the row Y data value (e.g., as shown by the DIGIT and DIGIT_ signals), and the row Y data value is latched in the sense amplifier 706. At time T12, the PHASE 2R and PHASE 2L signals go low, which disables feedback on the latch of the compute component 731 (e.g., by turning off transistors 786 and 790, respectively), such that the value stored in the compute component can be overwritten during the logical operation. Also at time T12, ISO goes low, which disables the isolation transistors 750-1 and 750-2. Since the desired logical operation in this example is an AND operation, at time T13 TT is enabled while TF, FT, and FF remain disabled (as shown in logic table 8-2, FF = 0, FT = 0, TF = 0, and TT = 1 corresponds to a logical AND operation). Whether enabling TT at time T13 causes PASS to go high depends on the value stored in the compute component 731 when ISO was disabled at time T12. For example, the enable transistor 752 will conduct if node ST2 was high when ISO was disabled, and the enable transistor 752 will not conduct if node ST2 was low when ISO was disabled at time T12. In this example, if PASS goes high at time T13, the pass transistors 707-1 and 707-2 are enabled, and the DIGIT and DIGIT_ signals corresponding to the row Y data value are provided to the respective compute component nodes ST2 and SF2. As such, the value stored in the compute component 731 (e.g., the row X data value) may be flipped, depending on the values of DIGIT and DIGIT_ (e.g., the row Y data value). In this example, if PASS stays low at time T13, the pass transistors 707-1 and 707-2 are not enabled, such that the DIGIT and DIGIT_ signals corresponding to the row Y data value remain isolated from the nodes ST2 and SF2 of the compute component 731. As such, the data value in the compute component (e.g., the row X data value) remains the same. At time T14, TT is disabled, which causes PASS to go low (or stay low), thereby disabling the pass transistors 707-1 and 707-2. It is noted that PASS* remains low between times T13 and T14 because the TF and FF signals remain low. At time T15, ROW Y is disabled, and PHASE 2R, PHASE 2L, and ISO are enabled. 
Enabling PHASE 2R and PHASE 2L at time T15 enables feedback on the latch of the compute component 731 such that the result of the AND operation (e.g., "A" AND "B") is latched therein. Enabling ISO at time T15 again couples nodes ST2 and SF2 to the gates of the enable transistors 752, 754, 762, and 764. At time T16, equilibration is enabled (e.g., EQ goes high such that DIGIT and DIGIT_ are driven to the equilibration voltage) and the sense amplifier 706 is disabled (e.g., SENSE AMP goes low). The result of the AND operation, which in this example is initially stored in the compute component 731, can be transferred back to the memory array (e.g., to a memory cell coupled to row X, row Y, and/or a different row via the complementary sense lines) and/or transferred to an external location (e.g., an external processing component) via I/O lines. FIG. 9 also includes (e.g., at 801) signaling associated with shifting data (e.g., from a compute component 731 to an adjacent compute component 731). The example shown in FIG. 9 illustrates two left shifts, such that a data value stored in the compute component corresponding to column "N" is shifted left to the compute component corresponding to column "N-2". As shown at time T16, PHASE 2R and PHASE 2L are disabled, which disables feedback on the compute component latch, as described above. To perform a first left shift, PHASE 1L is enabled at time T17 and disabled at time T18. Enabling PHASE 1L turns on transistor 789, which causes the data value at node SF1 to move left to node SF2 of the left-adjacent compute component 731. PHASE 2L is then enabled at time T19 and disabled at time T20. Enabling PHASE 2L turns on transistor 790, which causes the data value at node ST1 to move left to node ST2, completing the left shift. The above sequence can be repeated (e.g., PHASE 1L is enabled/disabled and PHASE 2L is subsequently enabled/disabled) to achieve a desired number of left shifts. For instance, in this example, a second left shift is performed by enabling PHASE 1L at time T21 and disabling PHASE 1L at time T22. PHASE 2L is then enabled at time T23 to complete the second left shift. After the second left shift, PHASE 2L remains enabled and PHASE 2R is enabled (e.g., at time T24) such that feedback is enabled to latch the data values in the compute component latches. FIG. 10 illustrates a timing diagram associated with performing a logical XOR operation and a shift operation using a sensing circuit in accordance with a number of embodiments of the present disclosure. FIG. 10 includes the same waveforms described in FIG. 9 above. However, the timing diagram shown in FIG. 10 is associated with performing a logical XOR operation on the row X data value and the row Y data value (e.g., as opposed to a logical AND operation). Reference is again made to the sensing circuit described in FIG. 7. The signaling indicated in FIG. 10 at times T0 to T9 is the same as in FIG. 9 and is not repeated here. As such, at time T9, EQ is disabled and the row X data value is latched in the compute component 731. At time T10, ROW Y goes high to access (e.g., select) the row Y memory cell. 
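The selection behavior walked through above for the AND example, and continued below from time T11 for the XOR example of FIG. 10, can be summarized with a short behavioral model. The following Python sketch is illustrative only and is not the patented circuit: it assumes that an enabled pass-gate path loads the sense amplifier value into the accumulator, that an enabled swap path loads the complement of that value, and that the accumulator otherwise holds its value; the function and variable names do not appear in the disclosure.

# Behavioral sketch (illustrative, not the patented circuit): models how the
# logic selection control signals FF, FT, TF, and TT, combined with the value A
# held in the compute component (node ST2 true, node SF2 complement) and the
# value B held in the sense amplifier, determine the accumulator result per
# logic tables 8-1 and 8-2.
def select_result(a, b, ff, ft, tf, tt):
    st2, sf2 = a, 1 - a                              # true and complement accumulator nodes
    pass_high = (tt and st2) or (ft and sf2)         # pass gates conduct
    pass_star_high = (tf and st2) or (ff and sf2)    # swap transistor conducts
    if pass_high:                                    # sense lines loaded directly
        return b
    if pass_star_high:                               # sense lines loaded transposed (inverted)
        return 1 - b
    return a                                         # neither path enabled: value unchanged

# AND example of FIG. 9 (FF=0, FT=0, TF=0, TT=1) and XOR example of FIG. 10
# (FF=0, FT=1, TF=1, TT=0), checked over all operand combinations:
assert all(select_result(a, b, 0, 0, 0, 1) == (a & b) for a in (0, 1) for b in (0, 1))
assert all(select_result(a, b, 0, 1, 1, 0) == (a ^ b) for a in (0, 1) for b in (0, 1))

Sweeping the four control signals in this model reproduces the nine selectable result combinations summarized in logic table 8-2.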
At time T11, the sense amplifier 706 is enabled (e.g., SENSE AMP goes high), which drives the complementary sense lines 705-1 and 705-2 to the appropriate rail voltages (e.g., VDD and GND) responsive to the row Y data value (e.g., as shown by the DIGIT and DIGIT_ signals), and the row Y data value is latched in the sense amplifier 706. At time T12, the PHASE 2R and PHASE 2L signals go low, which disables feedback on the latch of the compute component 731 (e.g., by turning off transistors 786 and 790, respectively), such that the value stored in the compute component 731 can be overwritten during the logical operation. Also at time T12, ISO goes low, which disables the isolation transistors 750-1 and 750-2. Since the desired logical operation in this example is an XOR operation, at time T13 TF and FT are enabled while TT and FF remain disabled (as shown in logic table 8-2, FF = 0, FT = 1, TF = 1, and TT = 0 corresponds to a logical XOR (e.g., "AXB") operation). Whether enabling TF and FT at time T13 causes PASS or PASS* to go high depends on the value stored in the compute component 731 when ISO was disabled at time T12. For example, the enable transistor 762 will conduct if node ST2 was high when ISO was disabled, and the enable transistor 762 will not conduct if node ST2 was low when ISO was disabled at time T12. Similarly, the enable transistor 754 will conduct if node SF2 was high when ISO was disabled, and the enable transistor 754 will not conduct if node SF2 was low when ISO was disabled. In this example, if PASS goes high at time T13, the pass transistors 707-1 and 707-2 are enabled such that the DIGIT and DIGIT_ signals corresponding to the row Y data value are provided to the respective compute component nodes ST2 and SF2. As such, the value stored in the compute component 731 (e.g., the row X data value) may be flipped, depending on the values of DIGIT and DIGIT_ (e.g., the row Y data value). In this example, if PASS stays low at time T13, the pass transistors 707-1 and 707-2 are not enabled, such that the DIGIT and DIGIT_ signals corresponding to the row Y data value remain isolated from the nodes ST2 and SF2 of the compute component 731. As such, the data value in the compute component (e.g., the row X data value) remains the same. In this example, if PASS* goes high at time T13, the swap transistor 742 is enabled such that the DIGIT and DIGIT_ signals corresponding to the row Y data value are provided to the respective compute component nodes ST2 and SF2 in a transposed manner (e.g., the "true" data value on DIGIT(n) is provided to node SF2 and the "complement" data value on DIGIT(n)_ is provided to node ST2). As such, the value stored in the compute component 731 (e.g., the row X data value) may be flipped, depending on the values of DIGIT and DIGIT_ (e.g., the row Y data value). In this example, if PASS* stays low at time T13, the swap transistor 742 is not enabled, such that the DIGIT and DIGIT_ signals corresponding to the row Y data value remain isolated from the nodes ST2 and SF2 of the compute component 731. As such, the data value in the compute component (e.g., the row X data value) remains the same. At time T14, TF and FT are disabled, which causes PASS and PASS* to go low (or stay low), thereby disabling the pass transistors 707-1 and 707-2 and the swap transistor 742. At time T15, ROW Y is disabled, and PHASE 2R, PHASE 2L, and ISO are enabled. 
Enabling PHASE 2R and PHASE 2L at time T15 enables feedback on the latch of the compute component 731 such that the result of the XOR operation (e.g., "A" XOR "B") is latched therein. Enabling ISO at time T15 again couples nodes ST2 and SF2 to the gates of the enable transistors 752, 754, 762, and 764. At time T16, equilibration is enabled (e.g., EQ goes high such that DIGIT and DIGIT_ are driven to the equilibration voltage) and the sense amplifier 706 is disabled (e.g., SENSE AMP goes low). The result of the XOR operation, which in this example is initially stored in the compute component 731, can be transferred back to the memory array (e.g., to memory cells coupled to row X, row Y, and/or a different row via the complementary sense lines) and/or transferred to an external location (e.g., an external processing component) via I/O lines. FIG. 10 also includes (e.g., at 1001) signaling associated with shifting data (e.g., from a compute component 731 to an adjacent compute component 731). The example shown in FIG. 10 illustrates two right shifts, such that a data value stored in the compute component corresponding to column "N" is shifted right to the compute component corresponding to column "N+2". As shown at time T16, PHASE 2R and PHASE 2L are disabled, which disables feedback on the compute component latch, as described above. To perform a first right shift, PHASE 1R is enabled at time T17 and disabled at time T18. Enabling PHASE 1R turns on transistor 781, which causes the data value at node ST1 to move right to node ST2 of the right-adjacent compute component 731. PHASE 2R is then enabled at time T19 and disabled at time T20. Enabling PHASE 2R turns on transistor 786, which causes the data value at node SF1 to move right to node SF2, completing the right shift. The above sequence can be repeated (e.g., PHASE 1R is enabled/disabled and PHASE 2R is subsequently enabled/disabled) to achieve a desired number of right shifts. For instance, in this example, a second right shift is performed by enabling PHASE 1R at time T21 and disabling PHASE 1R at time T22. PHASE 2R is then enabled at time T23 to complete the second right shift. After the second right shift, PHASE 1R remains disabled, PHASE 2R remains enabled, and PHASE 2L is enabled (e.g., at time T24), such that feedback is enabled to latch the data values in the compute component latches. Although the examples described in FIGS. 9 and 10 involve storing the result of the logical operation in the compute component (e.g., 731), sensing circuits in accordance with the embodiments described herein can also be operated to perform logical operations with the result initially stored in the sense amplifier (e.g., with the logical operation otherwise performed in a manner similar to that illustrated in FIG. 9). Moreover, embodiments are not limited to the "AND" and "XOR" logical operation examples described in FIGS. 9 and 10, respectively. For example, a sensing circuit (e.g., 750 shown in FIG. 7) in accordance with an embodiment of the present disclosure can be controlled to perform various other logical operations, such as those shown in logic table 8-2. Although example embodiments including various combinations and configurations of sense circuits, sense amplifiers, compute components, dynamic latches, isolation devices, and/or shift circuits have been illustrated and described herein, embodiments of the present disclosure are not limited to the combinations expressly recited herein. 
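The two-phase shifting just described for FIGS. 9 and 10 (alternating PHASE 1L/PHASE 2L pulses for left shifts and PHASE 1R/PHASE 2R pulses for right shifts) can likewise be modeled behaviorally. The Python sketch below is illustrative only: it treats each compute component as holding one latched value, stages that value at the neighboring component on the first phase, and latches it on the second phase; the function names and the choice to fill vacated columns with 0 are assumptions, not details from the disclosure.

def pulse_phase_1r(latched):
    # PHASE 1R: each component's latched value is staged at its right neighbor.
    return [None] + latched[:-1]

def pulse_phase_2r(staged):
    # PHASE 2R: each component latches whatever was staged (vacated column -> 0 here).
    return [s if s is not None else 0 for s in staged]

def shift_right(latched, n):
    # One right shift per PHASE 1R / PHASE 2R pulse pair; e.g., n=2 moves a value
    # from column "N" to column "N+2", as in the FIG. 10 example (left shifts mirror this).
    for _ in range(n):
        latched = pulse_phase_2r(pulse_phase_1r(latched))
    return latched

print(shift_right([1, 0, 1, 1, 0, 0], 2))   # -> [0, 0, 1, 0, 1, 1]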
Other combinations and configurations of the sense circuits, sense amplifiers, compute components, dynamic latches, isolation devices, and/or shift circuits disclosed herein are expressly included within the scope of this disclosure. Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
Methods, devices, and systems associated with phase change memory structures are described herein. One or more embodiments of the present disclosure can reduce thermal crosstalk associated with phase change memory cells, which can provide various benefits including improved data reliability and retention and decreased read and/or write times, among various other benefits. One or more embodiments can reduce the number of processing steps associated with providing local interconnects to phase change memory arrays. |
What is claimed is: 1. A method of forming a phase change memory structure, the method comprising: forming a first stack structure including a phase change material between a bottom electrode and a top electrode; forming a second stack structure a distance from the first stack structure; and depositing a thermally conductive material in a gap between the first stack structure and the second stack structure. 2. The method of claim 1 , wherein forming the second stack structure includes forming a phase change material between a bottom electrode and a top electrode. 3. The method of claim 1, including filling the gap between the first stack structure and the second stack structure with the thermally conductive material. 4. The method of claim 1, including depositing a dielectric layer between the thermally conductive material and a sidewall of at least one of the first and the second stack structure. 5. The method of claim 1, wherein depositing the thermally conductive material includes depositing a thermally conductive insulative material selected from the group including: a diamond-like-carbon (DLC) material; a carbon-carbon (C-C) composite material; a carbon nanotube material; and AlN. 6. The method of claim 1, wherein depositing the thermally conductive material includes depositing a metal material. 7. A method of forming a phase change memory structure, the method comprising: forming a first, a second, and a third metal contact on a substrate, the third metal contact located between the first and second metal contact; forming a first stack structure a distance from a second stack structure, the first and the second stack structure including a phase change material between a bottom electrode and a top electrode, wherein the bottom electrode of the first stack is coupled to the first metal contact and the bottom electrode of the second stack is coupled to the second metal contact; forming a dielectric layer on a wall of the first and the second stack structure; and depositing a heat sink material in a gap between the first stack structure and the second stack structure, the heat sink material deposited over at least a portion of the third metal contact. 8. The method of claim 7, wherein forming the first and the second metal contact includes forming a drain contact coupled to a drain region associated with the substrate. 9. The method of claim 7, wherein forming the third metal contact includes forming a source contact coupled to a source region associated with the substrate. 10. The method of claim 7, including forming a gate of an access transistor between the first and the third metal contact. 11. The method of claim 7, wherein the dielectric layer formed on the wall of the first and the second stack structure is also formed over the third metal contact, and wherein the method includes performing an etch to expose the at least a portion of the third metal contact. 12. The method of any one of claims 7 to 11, wherein depositing the heat sink material in the gap between the first stack structure and the second stackstructure includes depositing a metal in the gap and on the at least a portion of the third metal contact. 13. A phase change memory cell, comprising: a stack structure including a phase change material between a bottom electrode and a top electrode; and a heat sink comprised of a thermally conductive material located between a stack structure associated with an adjacent phase change memory cell. 14. 
The memory cell of claim 13, wherein a thermal conductivity of the thermally conductive material is at least 30 W/m-K. 15. The memory cell of claim 13, wherein a thermal conductivity of the thermally conductive material is at least 100 W/m-K. 16. The memory cell of claim 13, wherein a wall of the stack structure includes a dielectric material formed thereon, the dielectric material located between the wall and the thermally conductive material. 17. The memory cell of any one of claims 13 to 16, wherein the heat sink is comprised of a metal. 18. The memory cell of claim 17, wherein the heat sink is formed over a metal contact. 19. The memory cell of claim 18, wherein the metal contact is coupled to a source region associated with an access transistor corresponding to the phase change memory cell. 20. A memory device, comprising: an array of phase change memory cells; control circuitry coupled to the array and configured to perform operations on the memory cells;address circuitry coupled to the array and configured to latch address signals provided on address input connections; and wherein a number of the phase change memory cells include a heat sink formed of a thermally conductive material located between the number of the phase change memory cells and an adjacent memory cell. 21. The device of claim 20, wherein the heat sink is formed of an electrically conducting material. 22. The device of claim 21, wherein the heat sink is connected to a conductive contact coupled to at least one of a source and a drain region of an access transistor. 23. The device of any one of claims 20 to 22, wherein the heat sink provides a local interconnect associated with the number of phase change memory cells. 24. The device of claim 23, wherein the heat sink is coupled to a reference line associated with the array. 25. The device of claim 24, wherein the heat sink is coupled to a ground line associated with the array. |
PHASE CHANGE MEMORY STRUCTURES AND METHODS Technical Field [0001] The present disclosure relates generally to semiconductor memory devices and methods, and more particularly, to phase change memory structures and methods. Background [0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory including random-access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change random access memory (PCRAM), and flash memory, among other types of memory. [0003] Resistance variable memory devices, such as PCRAM devices, can include a structural phase change material such as a chalcogenide alloy, for instance, which can be programmed into different resistivity states to store data. The phase change memory cells are nonvolatile and the particular data stored in a phase change memory cell can be read by sensing the cell's resistance, e.g., by sensing current and/or voltage variations based on the resistance of the phase change material. [0004] In cases in which the resistance variable memory device includes a chalcogenide alloy, the chalcogenide alloy can exhibit a reversible structural phase change, e.g., from amorphous to crystalline. A small volume of the chalcogenide alloy can be integrated into a circuit that can allow the cell to act as a fast switching programmable resistor. This programmable resistor can exhibit greater than 40 times the dynamic range of resistivity between the crystalline state (low resistivity) and the amorphous state (high resistivity), and is also capable of exhibiting multiple intermediate states that allow multi-bit storage in each cell. That is, resistance variable memories may achieve multi-level cell (MLC) functionality via programming of memory cells to one of a number of different resistance levels. [0005] Thermal sensitivity of phase change memory cells can lead to data retention and/or accuracy issues associated with programming and/or reading the data state of the cells. For instance, increases in temperature canalter the structural phase of a cell, which can result in an altering of the programmed data state of a cell due to a resistance change associated with the phase change material of the memory cell. As such, unintentional and/or undesirable temperature fluctuations can lead to data read errors. Brief Description of the Drawings [0006] Figure 1 is a schematic of a portion of a phase change memory array that can be used in accordance with one or more embodiments of the present disclosure. [0007] Figure 2 is a schematic of a portion of a phase change memory array that can be used in accordance with one or more embodiments of the present disclosure. [0008] Figure 3 illustrates an example of pulses that can be used to program phase change memory cells in accordance with one or more embodiments of the present disclosure. [0009] Figure 4 illustrates a cross-sectional view of a portion of a phase change memory structure. [0010] Figures 5A-5C are cross-sectional views illustrating formation of a phase change memory structure in accordance with one or more embodiments of the present disclosure. [0011] Figures 6A-6D are cross-sectional views illustrating formation of a phase change memory structure in accordance with one or more embodiments of the present disclosure. 
[0012] Figure 7 is a functional block diagram of an electronic memory system having at least one memory device in accordance with an embodiment of the present disclosure. [0013] Figure 8 is a functional block diagram of a memory module having at least one memory device in accordance with an embodiment of the present disclosure. Detailed Description [0014] Methods, devices, and systems associated with phase change memory structures are described herein. One or more embodiments of the present disclosure can reduce thermal crosstalk associated with phase change memory cells, which can provide various benefits including improved data reliability and retention and decreased read and/or write times, among various other benefits. One or more embodiments can reduce the number of processing steps associated with providing local interconnects to phase change memory arrays. [0015] In one or more embodiments, a method for forming a phase change memory structure includes forming a first stack structure including a phase change material between a bottom electrode and a top electrode, forming a second stack structure a distance from the first stack structure, and depositing a thermally conductive material in a gap between the first stack structure and the second stack structure. In various embodiments, the thermally conductive material provides a heat sink which can reduce the thermal crosstalk between adjacent phase change memory cells. [0016] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure. As used herein, the designators "N" and "M," particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included with one or more embodiments of the present disclosure. [0017] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 110 may reference element "10" in Fig. 1, and a similar element may be referenced as 210 in Fig. 2. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, the proportion and the relative scale of the elements provided in the figures are intended to illustrate various embodiments of the present invention and are not to be used in a limiting sense. [0018] As used in this disclosure, the terms "wafer" and "substrate" are used interchangeably and are to be understood as including silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technology, doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor foundation, and other semiconductor structures. 
Furthermore, when reference is made to a "wafer" or "substrate" in the following description, previous process steps may have been utilized to form regions or junctions in the base semiconductor structure or foundation. [0019] Figure 1 is a schematic of a portion of a phase change memory array 100 that can be used in accordance with one or more embodiments of the present disclosure. In the embodiment illustrated in Figure 1, the memory array 100 includes a number of phase change memory cells each having an associated access device 102 and resistance variable element 104, e.g., a phase change material 104. The access devices 102 can be operated, e.g., turned on/off, to access the memory cells in order to perform operations such as data programming, e.g., writing, and/or data reading operations on the resistance variable elements 104. [0020] In the embodiment illustrated in Figure 1, the access devices 102 are metal oxide semiconductor field effect transistors (MOSFETs). As shown in Figure 1, a gate of each MOSFET 102 associated with each memory cell is coupled to one of a number of access lines 105-0 (WL0), 105-1 (WL1), . . ., 105-N (WLN), i.e., each access line 105-0, 105-1, . . ., 105-N is coupled to a row of phase change memory cells. The access lines 105-0, 105-1, . . ., 105-N may be referred to herein as "word lines." The designator "N" is used to indicate that a memory array can include a number of access lines. The resistance variable elements 104 can be a phase change chalcogenide alloy such as a Germanium-Antimony-Tellurium (GST) material, e.g., a Ge-Sb-Te material such as Ge2Sb2Te5, Ge1Sb2Te4, Ge1Sb4Te7, etc. The hyphenated chemical composition notation, as used herein, indicates the elements included in a particular mixture or compound, and is intended to represent all stoichiometries involving the indicated elements. Other phase change materials can include Ge-Te, In-Se, Sb-Te, Ga-Sb, In-Sb, As-Te, Al-Te, Ge-Sb-Te, Te-Ge-As, In-Sb-Te, Te-Sn-Se, Ge-Se-Ga, Bi-Se-Sb, Ga-Se-Te, Sn-Sb-Te, In-Sb-Ge, Te-Ge-Sb-S, Te-Ge-Sn-O, Te-Ge-Sn-Au, Pd-Te-Ge-Sn, In-Se-Ti-Co, Ge-Sb-Te-Pd, Ge-Sb-Te-Co, Sb-Te-Bi-Se, Ag-In-Sb-Te, Ge-Sb-Se-Te, Ge-Sn-Sb-Te, Ge-Te-Sn-Ni, Ge-Te-Sn-Pd, and Ge-Te-Sn-Pt, among various other phase change materials. [0021] In the embodiment illustrated in Figure 1, each resistance variable element 104 is coupled to one of a number of data lines 107-0 (BL0), 107-1 (BL1), . . ., 107-M (BLM), i.e., each data line 107-0, 107-1, . . ., 107-M is coupled to a column of phase change memory cells. The data lines 107-0, 107-1, . . ., 107-M may be referred to herein as "bit lines" or "sense lines." The designator "M" is used to indicate that a memory array can include a number of data lines. For ease of addressing in the digital environment, the number of word lines 105-1, . . ., 105-N and the number of bit lines 107-1, . . ., 107-M can each be some power of two, e.g., 256 word lines by 4,096 bit lines. However, embodiments are not limited to particular numbers of word lines and/or bit lines. [0022] In operation, appropriate voltage and/or current signals, e.g., pulses, can be applied to the bit lines 107-0, 107-1, . . ., 107-M and word lines 105-0, 105-1, . . ., 105-N in order to program data to and/or read data from the phase change memory cells of the array 100. As an example, the data stored by a phase change memory cell of array 100 can be determined by turning on an access device, e.g., 102, and sensing a current passing through the phase change element, e.g., 104. 
The current sensed on the bit line associated with the memory cell being read, e.g., bit line 107-0, 107-1, . . ., 107-M, corresponds to a resistance level of the phase change element 104, which in turn corresponds to a particular data value, e.g., a binary value such as 1, 0, 001, 111, 1011, etc. [0023] Embodiments of the present disclosure are not limited to the example array 100 illustrated in Figure 1. For example, as one of ordinary skill in the art will appreciate, the access device 102 associated with a particular memory cell can be a device other than a MOSFET. In some embodiments, the access device 102 can be a bipolar junction transistor (BJT) or a diode, among other types of access devices. An example of an array in which the access device is a diode is described below in connection with Figure 2. Also, a memory array, e.g., 100, can have an architecture other than that illustrated in Figure 1, as will be understood by one of ordinary skill in the art. [0024] Figure 2 is a schematic of a portion of a phase change memory array 200 that can be used in accordance with one or more embodiments of the present disclosure. In the embodiment illustrated in Figure 2, the access device 202 associated with the phase change memory cells of array 200 is a diode 202. The diode 202 can be a diode such as a p-n diode, a Zener diode, or a Schottky diode, among various other types of diodes. [0025] In operation, appropriate voltage and/or current signals, e.g., pulses, can be applied to the bit lines 207-0, 207-1, . . ., 207-M and word lines 205-0, 205-1, . . ., 205-N in order to program data to and/or read data from the phase change memory cells of the array 200. As an example, the data stored by a phase change memory cell of array 200 can be determined by turning on a diode access device, e.g., 202, and sensing a current passing through the phase change element, e.g., 204. The current sensed on the bit line associated with the memory cell being read, e.g., bit line 207-0, 207-1, . . ., 207-M, corresponds to a resistance level of the phase change element 204, which in turn corresponds to a particular data value, e.g., a binary value such as 1, 0, 001, 111, 1011, etc. [0026] As one of ordinary skill in the art will appreciate, the phase change memory array 100 illustrated in Figure 1 and the phase change memory array 200 illustrated in Figure 2 can be coupled to programming, e.g., write, circuitry and/or sensing, e.g., read, circuitry (not shown in Figures 1 and 2). For instance, the arrays 100 and/or 200 can be coupled to write and/or read circuitry as described below in connection with Figure 7. [0027] Figure 3 illustrates an example of pulses that can be used to program phase change memory cells in accordance with one or more embodiments of the present disclosure. In Figure 3, the pulse 311 represents an amorphizing (reset) pulse, e.g., a pulse used to place one or more phase change memory cells in an amorphous (high resistivity) state. The pulse 313 represents a crystallizing (set) pulse, e.g., a pulse used to place one or more phase change memory cells in a crystalline (low resistivity) state. 
The reset pulse 311 and the set pulse 313 can be applied to a particular memory cell in order to alter the resistance of the phase change element, e.g., phase change element 104 shown in Figure 1 or phase change element 204 shown in Figure 2, by raising/lowering the temperature of the phase change material corresponding to the cell in a mannersuch that the resistance of the cell is changed, e.g., programmed, to a value that corresponds to a particular desired data state. [0028] As one of ordinary skill in the art will appreciate, a reset pulse such as reset pulse 311 can be used to place the phase change material, e.g., phase change element 104 shown in Figure 1 or 204 shown in Figure 2, or a portion thereof, in a relatively amorphous state corresponding to a relatively high resistance value, e.g., about 100 kiloohm to 1 megaohm. For instance, in the example illustrated in Figure 3, the reset pulse 311 can be used to raise the temperature of the phase change material to a temperature Ta sufficient to melt the phase change material; the phase change material cools over a short time period, i.e., tl, to amorphize the phase change material such that the phase change material does not re-form some portion of its internal crystalline structure. The time tl can be referred to as a "quenching time." [0029] A set pulse, such as set pulse 313 illustrated in Figure 3, can be used to raise the temperature of a phase change material above a temperature Tx and maintain the temperature of the phase change material for a time, e.g., t2, sufficient to allow crystallization of the phase change material to occur. As such, the set pulse 313 can place the phase change material in a relatively crystalline state corresponding to a relatively low resistance value, e.g., about 1 kiloohm to 10 kiloohm, for instance. [0030] Embodiments of the present disclosure are not limited to the reset and/or set pulses illustrated in the example shown in Figure 3. As an example, one or more embodiments of the present disclosure can provide a phase change memory structure which can shorten the quench time, e.g., tl shown in Figure 3, associated with a reset pulse, e.g., 311. For instance, various embodiments can increase the quench rate associated with a reset pulse by providing a heat sink between adjacent cells which can quickly and efficiently dissipate heat generated by a reset pulse such as pulse 311 shown in Figure 3. Examples of such heat sink regions are described in connection with Figures 5 A-5C and 6A-6D. One or more phase change memory structures having a heat sink region in accordance with embodiments of the present disclosure can also decrease the melting time associated with a reset pulse, e.g., 311, and/or a set pulse, e.g., 313, which can also decrease cell programming time. As one example, in some embodiments, the time for a reset operation can be about 10ns.[0031] Figure 4 illustrates a cross-sectional view of a portion 420 of a phase change memory structure. The example shown in Figure 4 is used to illustrate thermal interference, e.g., crosstalk, which can occur between adjacent phase change memory cells in an array of cells, e.g., array 100 shown in Figure 1 or array 200 shown in Figure 2. Thermal crosstalk between cells can increase the temperature of the phase change material of a particular adjacent cell, which can unintentionally alter the programmed resistance of the adjacent phase change cell. As a result, such thermal crosstalk can result in reduced data reliability, e.g., data read errors. 
The problem associated with thermal crosstalk between adjacent cells may intensify as the phase change material in and/or around adjacent cells becomes closer as semiconductor device size is scaled. [0032] The example illustrated in Figure 4 includes a first phase change material 427-1 associated with a first phase change memory cell and a second phase change material 427-2 associated with an adjacent phase change memory cell. As shown in Figure 4, in this example, the phase change material 427-1 and 427-2 is a GST material. However, embodiments of the present disclosure are not limited to a particular type of phase change material. [0033] In the example illustrated in Figure 4, the access device associated with a first phase change memory cell includes collector 422, base 424-1, and emitter 426-1, while the access device associated with the adjacent cell includes collector 422, base 424-2, and emitter 426-2. That is, the access devices, which can be coupled to word lines in a phase change memory array, are bipolar junction transistors (BJTs), in this example. [0034] The first phase change memory cell includes a bottom electrode 430-1 and the adjacent phase change memory cell includes a bottom electrode 430-2. As shown in the example illustrated in Figure 4, the bottom electrodes 430-1 and 430-2 of the adjacent cells and the phase change material 427-1 and 427-2 of the adjacent cells are separated by an insulator material 431. In various previous approaches, and in the example shown in Figure 4, the insulator 431 is a thermally non-conductive insulating material such as silicon dioxide or other thermally non-conductive insulating material. Such thermally non-conductive insulating materials are less effective for transferring, e.g., dissipating, heat than thermally conductive insulating materials. For instance, thermally non- conductive insulating materials, e.g., 431, dissipate heat generated by a particularcell at a slower rate than thermally conductive insulating materials, e.g., thermally conductive insulating materials such as silicon nitride (SiN), aluminum nitride (AlN), diamond-like-carbon (DLC), and various carbon-carbon (C-C) composites, among others. In various embodiments, the thermally conductive material can be a carbon nanotube material. [0035] The bottom electrodes 430-1 and 430-2 can be referred to as "heaters," as shown in Figure 4. In operation, current can pass between the bottom electrodes 430-1 and 430-2 and a top electrode, e.g., metal layer 429 in this example, through the respective GST material 427-1 and 427-2. The heat generated by the current between the top and bottom electrodes can alter the structural phase of a portion 428-1 and/or portion 428-2 of the respective phase change materials 427-1 and 427-2, which can alter the resistance of the cell. In operation, the phase change material of a particular cell, e.g., 427-1, including the corresponding resistance-alterable portion, e.g., 428-1, can act as a series resistance between the top and bottom electrodes, e.g., metal layer 429 and electrode 430-1, respectively. In this manner, and as noted above, the resistance of the phase change material of a particular cell can be programmed to a particular level which can correspond to a particular stored data state of the cell. 
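As a concrete illustration of the programming and read relationship described above (a reset pulse leaving the cell in a high-resistance amorphous state, a set pulse leaving it in a low-resistance crystalline state, and the sensed resistance mapping back to a stored data value), the following Python sketch models a single cell. The representative resistances follow the approximate ranges given earlier in this description; the read threshold, read voltage, and names are illustrative assumptions rather than values from the disclosure, and a multi-level-cell variant would simply compare against several thresholds.

# Illustrative sketch only (not from the disclosure): a phase change cell whose
# stored data value is inferred from its programmed resistance.
AMORPHOUS_OHMS = 500_000       # representative reset (amorphous) resistance, ~100 kohm-1 Mohm range
CRYSTALLINE_OHMS = 5_000       # representative set (crystalline) resistance, ~1 kohm-10 kohm range
READ_THRESHOLD_OHMS = 50_000   # assumed single-level-cell decision point

class PhaseChangeCell:
    def __init__(self):
        self.resistance_ohms = CRYSTALLINE_OHMS

    def apply_reset_pulse(self):
        # Amorphizing pulse: melt, then quench over the short time t1 -> high resistance.
        self.resistance_ohms = AMORPHOUS_OHMS

    def apply_set_pulse(self):
        # Crystallizing pulse: hold above Tx for time t2 -> low resistance.
        self.resistance_ohms = CRYSTALLINE_OHMS

    def read(self, read_voltage=0.2):
        # Sense the bit-line current and map the implied resistance back to a data value.
        sensed_current = read_voltage / self.resistance_ohms
        return 0 if (read_voltage / sensed_current) > READ_THRESHOLD_OHMS else 1

cell = PhaseChangeCell()
cell.apply_reset_pulse()
print(cell.read())   # 0: high-resistance (amorphous) state
cell.apply_set_pulse()
print(cell.read())   # 1: low-resistance (crystalline) state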
[0036] However, as phase change memory devices become smaller, e.g., to reduce size and/or increase density, the distance between the phase change materials, e.g., 427-1 and 427-2, of adjacent cells and/or the distance between the bottom electrodes, e.g., 430-1 and 430-2, of adjacent cells can decrease. As such, the thermal crosstalk between adjacent cells can become significant; it can lead to unintentional and/or undesirable effects such as data read errors due to resistance changes in adjacent cells. For instance, the heat generated by heating the phase change material 427-1 can result in altering the phase, e.g., crystallizing, a portion of the phase change material 427-2 of the adjacent cell, when it is in the amorphous state, which can alter the resistance associated with the adjacent cell. [0037] Various embodiments of the present disclosure can reduce the effects of thermal crosstalk between adjacent phase change memory cells by providing an efficient heat dissipation region, e.g. heat sink, between and/or around adjacent phase change memory cells. In one or more embodiments, the heat sink region can be comprised of a thermally conductive material. In one ormore embodiments, and as described further below in connection with Figures 5A-5C, the heat sink region can be comprised of a thermally conductive dielectric material or a combination of thermally conductive dielectric materials such as SiN, DLC, AlN, carbon nanotubes, and/or various C-C composites, to name a few. [0038] In one or more embodiments, and as described further below in connection with Figures 6A-6D, the heat sink region can be used as a local interconnect between memory cells. In such embodiments, the heat sink region can be comprised of an electrically conductive material such as a metal, which can serve as a local interconnect between various regions of an integrated circuit. For example, the metal heat sink can be used to locally interconnect gate, source, and/or drain regions in circuits and/or can be used to locally interconnect one or more metallization layers to particular structures in an integrated circuit. In various embodiments in which the heat sink region includes an electrically conductive material, the electrically conductive material can also be a thermally conductive material. In such embodiments, the electrically and thermally conductive heat sink region can serve as both a local interconnect and a heat dissipater. [0039] Figures 5A-5C are cross-sectional views illustrating formation of a phase change memory structure 520 in accordance with one or more embodiments of the present disclosure. Although not shown in Figures 5A-5C, one of ordinary skill in the art will appreciate that the phase change memory structure 520 can be formed on a base semiconductor structure such as a silicon substrate, among various other semiconductor foundations such as SOI, SOS, etc. In the embodiment illustrated in Figures 5A-5C, the phase change structure 520 includes a first stack structure 521-1 associated with a first phase change memory cell and a second stack structure 521-2 associated with a second phase change memory cell, e.g., an adjacent phase change memory cell in this example. [0040] Figure 5A shows phase change structure 520 at a particular stage in a phase change memory device fabrication sequence. 
In the embodiment illustrated in Figure 5A, the first stack 521-1 includes a top electrode (TE) 533-1 formed on a layer of phase change material 527-1 (GST in this example), which is formed on a dielectric layer 536. The second stack 521-2 includes a topelectrode 533-2 formed on a layer of phase change material 527-2, which is formed on a dielectric layer 536. The separate stacks 521-1 and 521-2 are formed by a masking and etching process through the appropriate layers. The etching can be a dry etch or other suitable process. [0041] As shown in Figure 5A, the stack 521-1 includes a bottom electrode (BE) 530-1 and the stack 521-2 includes a bottom electrode 530-2. The bottom electrodes 530-1 and 530-2 are formed of electrically conductive material and are connected to conductive contacts 538-1 and 538-2, respectively. The contacts 538-1 and 538-2 can be metal contacts and can be connected to an access device of a phase change memory cell, e.g., a FET, diode, or BJT, among other devices associated with a phase change memory cell. [0042] In one or more embodiments, the phase change structure 520 can be formed by forming a dielectric layer 534 on a substrate (not shown in Figure 5A). The dielectric layer 534 can be a dielectric oxide layer such as silicon dioxide (SiO2), among others. The contacts 538-1 and 538-2 can be formed in the dielectric layer 534, e.g., via a masking and etching process. A planarization process such as chemical mechanical planarization (CMP) can be used to planarize the surface, e.g., to remove excess layer 534 and expose the contacts 538-1 and 538-2. A dielectric layer 536 can then be formed over the contacts 538-1 and 538-2 and the dielectric layer 534. The dielectric layer 536 can be silicon dioxide or other suitable dielectric material. [0043] In the embodiment illustrated in Figure 5 A, the dielectric layer 536 is masked and etched and then filled with portions 535 of material prior to formation of the bottom electrodes 530-1 and 530-2 therein. However, embodiments are not limited to this example. For instance, in various embodiments, the bottom electrodes 530-1 and 530-2 may be formed directly in dielectric layer 536. That is, in such embodiments, the filler material 535 may not be used in forming the bottom electrodes. Whether the filler material is used can depend on various factors such as the desired width of the electrodes 530-1 and 530-2, among other factors. [0044] After formation of the bottom electrodes 530-1 and 530-2, a layer of phase change material, e.g., a chalcogenide alloy such as GST or other suitable phase change material, is formed on the dielectric layer 536 and a layer of conductive material which will form top electrodes 533-1 and 533-2 is formedover the phase change material layer. As noted above, a masking and etching process can then be used to remove the appropriate portions of structure 520 in order to form the individual stacks 521-1 and 521-2 corresponding to adjacent phase change memory cells. Alternatively, in some embodiments, the stacks 521-1 and 521-2 may be formed by etching less than the entire layer 536 between the stacks 521-1 and 521-2. That is, the stacks 521-1 and 521-2 can be formed without etching all the way to layer 534, in various embodiments. [0045] As illustrated in Figure 5 A, the first stack 521-1 includes phase change material 527-1 located between and connected to the bottom electrode 530-1 and the top electrode 533-1. 
The second stack 521-2 includes phase change material 527-2 located between and connected to the bottom electrode 530-2 and the top electrode 533-2. In the embodiment shown in Figure 5A, the phase change materials 527-1 and 527-2 include respective portions 528-1 and 528-2, which can undergo structural phase transitions during operation of the phase change memory cells. [0046] Figure 5B shows phase change structure 520 at another particular stage in a phase change memory device fabrication sequence. In the embodiment illustrated in Figure 5B, an encapsulation layer 537 is formed on the structure 520. The encapsulation layer 537 can be a dielectric material, which can be used to electrically insulate the stack 521-1 associated with a first phase change memory cell from the stack 521-2 associated with an adjacent phase change memory cell. As shown in Figure 5B, the layer 537 can form sidewalls on the stacks 521-1 and 521-2. The layer 537 can be an insulating material such as silicon nitride (SiN), polyimide, or fluorinated silicon dioxide, among other insulators, and can be deposited via a process such as chemical vapor deposition (CVD). However, embodiments are not limited to particular materials or to particular formation, e.g., deposition and/or growth, techniques. [0047] Figure 5C shows the phase change structure 520 at another particular stage in a phase change memory device fabrication sequence. In Figure 5C, the structure 520 includes a thermally conductive material 539 deposited between adjacent phase change memory cells, e.g., between adjacent stack structures 521-1 and 521-2. In one or more embodiments of the present disclosure, the thermally conductive material 539 provides a heat dissipation region, e.g., a heat sink, between and/or around adjacent phase change memorycells. The heat sink region 539 can quickly and efficiently dissipate heat generated by a particular cell, which can provide benefits such as reducing thermal crosstalk between adjacent cells. Reduction in thermal crosstalk between phase change memory cells can include benefits such as improving data reliability by preventing undesired bit flip and/or data read errors. Also, the heat sink region 539 can effectively reduce local thermal effects among cells due to heat accumulation that can result from continuous operation of a phase change memory device. In various embodiments, the heat sink region 539 can increase the quench rate associated with a reset pulse, e.g., reset pulse 311 shown in Figure 3. [0048] On a broader scale, the heat sink region 539 can also serve as a heat dissipater for a semiconductor chip that includes an array of phase change memory cells. As such, the heat sink region 539 can help to keep the chip cool. As an example, the heat sink region 539 associated with an array of phase change memory cells in accordance with embodiments described herein can dissipate heat from a chip more quickly and efficiently than previous approaches in which thermally non-conductive insulating materials are deposited between phase change memory cells, e.g., between stacks 521-1 and 521-2. Such thermally non-conductive insulating materials dissipate heat less quickly than thermally conductive materials. As such, heat produced by phase change memory cells according to previous approaches would be maintained in and around the cells for a longer time, which could increase thermal crosstalk between cells as well as increase the total temperature of the semiconductor chip. 
[0049] In one or more embodiments, the thermally conductive material 539 has a thermal conductivity of at least 30 Watts/meter-Kelvin (W/m-K). In various embodiments, the material 539 has a thermal conductivity of at least 100 W/m-K. In one or more embodiments, the thermally conductive material 539 can be a thermally conductive insulative material or a combination of thermally conductive insulative materials such as SiN, DLC, AlN, carbon nanotubes, and/or various C-C composites, to name a few. The relatively high thermal conductivities of such thermally conductive dielectric materials can provide an efficient heat sink region between adjacent phase change memory cells. [0050] In one or more embodiments, the thermally conductive material 539 can be a metal. In such embodiments, the dielectric encapsulation layer 537 can electrically isolate the metal heat sink region 539 from the first and second stacks 521-1 and 521-2. However, in embodiments in which the thermally conductive material 539 is not an electrically conductive material, such as a metal, the formation of the encapsulation layer 537 may be eliminated. That is, it may not be as useful to provide a dielectric layer 537 for electrically insulating the stacks 521-1 and 521-2 from each other when the thermally conductive material 539 is not itself electrically conductive. [0051] As one of ordinary skill in the art will appreciate, subsequent processing steps in a fabrication sequence can be performed on the phase change memory structure 520 shown in Figure 5C. For instance, although not shown in Figure 5C, conductive contacts can be formed on the structure 520 to connect the top electrodes 533-1 and 533-2 to a bit line, which can be formed thereon, for example. [0052] Figures 6A-6D are cross-sectional views illustrating formation of a phase change memory structure 620 in accordance with one or more embodiments of the present disclosure. The structure 620 described in Figures 6A-6D includes a portion of a number of phase change memory cells at various stages in a fabrication sequence. [0053] Figure 6A shows phase change structure 620 at a particular stage in a phase change memory device fabrication sequence. The embodiment illustrated in Figure 6A includes a number of access devices 642 formed on a substrate 640. In the embodiment illustrated in Figures 6A-6D, the access devices 642 are MOSFET (metal oxide semiconductor field effect transistor) devices having associated source 643, drain 644, and gate 645 regions. However, embodiments are not limited to a particular type of access device. For instance, as described above, the access devices 642 can be diodes or BJTs, among other types of access devices for operating phase change memory cells. As the reader will appreciate, the substrate 640 can be a silicon substrate foundation among various other semiconductor foundations such as SOI, SOS, etc. As an example, the substrate 640 can be a p-type semiconductor substrate with n-type source 643 and drain 644 regions. [0054] The phase change memory structure 620 includes a source contact 647 and drain contacts 646. The source and drain contacts are connected to the respective source 643 and drain 644 regions of the structure 620 and can be metal contacts. A layer 649 is formed around the gate stacks of the transistors 642 to electrically insulate the transistors 642 from the contacts 646 and 647. As such, the layer 649 can be a dielectric material such as SiN, among various other dielectric materials. 
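As a rough, non-authoritative illustration of why the thermal-conductivity figures in paragraph [0049] matter, the Python sketch below compares a few candidate fill materials against the 30 W/m-K and 100 W/m-K figures mentioned above and estimates one-dimensional steady-state heat flow through the inter-cell fill using Fourier's law. The conductivity values, geometry, and temperature difference are ballpark assumptions for illustration only and are not taken from the disclosure.

# Illustrative sketch only: screens candidate heat-sink fill materials against the
# thermal-conductivity figures discussed above and estimates 1-D conduction through
# the inter-cell gap using Fourier's law, Q = k * A * dT / d.
CANDIDATES_W_PER_M_K = {          # approximate literature ballparks (assumptions)
    "SiO2 (conventional fill)": 1.4,
    "SiN": 30.0,
    "AlN": 150.0,
    "diamond-like carbon (DLC)": 400.0,
}

AREA_M2 = 50e-9 * 50e-9           # assumed fill cross-section facing the heated cell
GAP_M = 50e-9                     # assumed cell-to-cell gap filled by the heat sink material
DELTA_T_K = 300.0                 # assumed temperature rise of the heated cell

def heat_flow_watts(k, area_m2, delta_t_k, thickness_m):
    return k * area_m2 * delta_t_k / thickness_m

for name, k in CANDIDATES_W_PER_M_K.items():
    tier = ("meets 100 W/m-K" if k >= 100.0
            else "meets 30 W/m-K" if k >= 30.0
            else "below 30 W/m-K")
    q = heat_flow_watts(k, AREA_M2, DELTA_T_K, GAP_M)
    print(f"{name:28s} k={k:6.1f} W/m-K  {tier:16s}  Q ~ {q:.2e} W")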
[0055] The structure 620 includes a dielectric layer 648 formed over the transistors 642 and located between the source 647 and drain 646 contacts. The layer 648 can be a dielectric material such as silicon dioxide or other suitable dielectric material. The upper surface of the structure 620 shown in Figure 6A can be planarized to expose the contacts 646 and 647 via CMP or other suitable planarization techniques. Alternatively, the layer 648 can be removed via CMP down to layer 649. [0056] Figure 6B shows phase change structure 620 at another particular stage in a phase change memory device fabrication sequence. The embodiment illustrated in Figure 6B includes a first stack structure 621-1 associated with a first phase change memory cell and a second stack structure 621-2 located a distance from the first stack structure and associated with a second phase change memory cell, e.g., an adjacent cell in a phase change array such as array 100 shown in Figure 1. [0057] As shown in the embodiment illustrated in Figure 6B, the first stack 621-1 includes a top electrode (TE) 633-1 formed on a layer of phase change material 627-1 (GST in this example), which is formed on a dielectric layer 636. The second stack 621-2 includes a top electrode 633-2 formed on a layer of phase change material 627-2, which is formed on a dielectric layer 636. As described above in connection with Figures 5A-5C, the separate stacks 621-1 and 621-2 are formed by a masking and etching process through the appropriate layers, and the etching can be a dry etch or other suitable process. [0058] As shown in Figure 6B, the stack 621-1 includes a bottom electrode (BE) 630-1 and the stack 621-2 includes a bottom electrode 630-2. The bottom electrodes 630-1 and 630-2 are formed of electrically conductive material and are connected to respective conductive contacts 646. The contacts 646 can be metal contacts. In the embodiment illustrated in Figure 6B, the bottom electrodes 630-1 and 630-2 are connected to a drain region 644 of a transistor access device 642 via a corresponding drain contact 646.[0059] The stack structures 621-1 and 621-2 can be formed in a similar manner as the stack structures 521-1 and 521-2 described in Figures 5A-5C. For instance, a dielectric layer 636 can be formed on the planarized surface of the structure 620, i.e., over the contacts 646 and 647 and the dielectric layer 648. The dielectric layer 636 can be silicon dioxide or other suitable dielectric material. [0060] The dielectric layer 636 can be masked and etched and then filled with portions 635 of material prior to formation of the bottom electrodes 630-1 and 630-2 therein. However, embodiments are not limited to this example. For instance, in various embodiments, the bottom electrodes 630-1 and 630-2 may be formed directly in dielectric layer 636. That is, in such embodiments, the filler material 635 may not be used in forming the bottom electrodes. Whether the filler material is used can depend on various factors such as the desired width of the electrodes 630-1 and 630-2, among other factors. [0061] After formation of the bottom electrodes 630-1 and 630-2, a layer of phase change material, e.g., a chalcogenide alloy such as GST or other suitable phase change material, is formed on the dielectric layer 636 and a layer of conductive material which will form top electrodes 633-1 and 633-2 is formed over the phase change material layer. 
As noted above in connection with the embodiment shown in Figures 5A-5C, a masking and etching process can then be used to remove the appropriate portions of structure 620 in order to form the individual stacks 621-1 and 621-2 corresponding to adjacent phase change memory cells. [0062] As illustrated in Figure 6B, the first stack 621-1 includes phase change material 627-1 located between and connected to the bottom electrode 630-1 and the top electrode 633-1. The second stack 621-2 includes phase change material 627-2 located between and connected to the bottom electrode 630-2 and the top electrode 633-2. In the embodiment shown in Figure 6B, the phase change materials 627-1 and 627-2 include respective portions 628-1 and 628-2, which can undergo structural phase transitions during operation of the phase change memory cells. [0063] Figure 6C shows phase change structure 620 at another particular stage in a phase change memory device fabrication sequence. In the embodiment illustrated in Figure 6C, an encapsulation layer 637 is formed on the structure 620. The encapsulation layer 637 can be a dielectric material, which can be used to electrically insulate the stack 621-1 associated with a first phase change memory cell from the stack 621-2 associated with an adjacent phase change memory cell. As shown in Figure 6C, the layer 637 can form sidewalls on the stacks 621-1 and 621-2. The layer 637 can be an insulating material such as silicon nitride (SiN), polyimide, or fluorinated silicon dioxide, among other insulators, and can be deposited via a process such as chemical vapor deposition (CVD). However, embodiments are not limited to particular materials or to particular formation, e.g., deposition and/or growth, techniques. Subsequent to the formation of the layer 637, a spacer etch can be performed to remove excess material from between the stacks 621-1 and 621-2 in order to expose the surface of source contact 647, as shown in Figure 6C. [0064] Figure 6D shows the phase change structure 620 at another particular stage in a phase change memory device fabrication sequence. In Figure 6D, the structure 620 includes a thermally conductive material 652 deposited between adjacent phase change memory cells, e.g., in the gap between adjacent stack structures 621-1 and 621-2, and over at least a portion of the source contact 647, such that at least a portion of the material 652 is in direct contact with the source contact 647. [0065] In one or more embodiments of the present disclosure, the thermally conductive material 652 provides a heat dissipation region, e.g., a heat sink, between and/or around adjacent phase change memory cells. The heat sink region 652 (shown as "LOCAL INTERCONNECT/HEAT SINK") can quickly and efficiently dissipate heat generated by a particular cell, which can reduce thermal crosstalk between adjacent cells. Reduction in thermal crosstalk between phase change memory cells can include benefits such as improving data reliability by preventing undesired bit flip and/or data read errors. [0066] In the embodiment illustrated in Figure 6D, the heat sink material 652 is an electrically conductive material, e.g., a metal, such that the region 652 can serve as a local interconnect between portions of a phase change memory device due to its electrically conductive properties. That is, the heat sink material can be both thermally and electrically conductive. 
As such, the metal heat sink region 652 can provide a local interconnect as well as provide improved heat dissipation properties over previous phase change memory structures. [0067] As an example, the region 652 can serve as a reference contact, e.g., a ground interconnect, to the source region 643 via source contact 647. The heat sink/local interconnect region 652 can help to keep a semiconductor chip cool by rapidly dissipating heat generated by operation of an array of phase change memory cells more efficiently than previous approaches in which thermally non-conductive insulating materials are deposited in the gap between phase change memory cells, e.g., between stacks 621-1 and 621-2. [0068] In various embodiments, the heat sink/local interconnect region 652 can increase the quench rate associated with a reset pulse, e.g., reset pulse 311 shown in Figure 3. That is, the heat dissipation provided by the region 652 can decrease the time it takes for a phase change cell to be reset. [0069] One or more embodiments of the present disclosure can reduce the complexity of previous fabrication processes by reducing the number of process steps associated with forming phase change memory structures. As an example, using the heat sink region 652 as a local interconnect, such as shown in the embodiment of Figure 6D, can eliminate a number of processing steps associated with forming one or more local interconnect regions. For instance, in previous approaches in which a thermally non-conductive dielectric material is formed in the gap between cells, e.g., as illustrated in Figure 4, the electrically conductive local interconnects would be formed in a number of additional processing steps. That is, in previous approaches, additional processing steps would be utilized, e.g., deposition, masking, and etching of metallization layers would be performed to create one or more local interconnects. [0070] The material 652 can be various metals such as tungsten or copper, among various other metals, metal silicides, and/or suitable electrical conductors. In one or more embodiments, the material 652 can have a thermal conductivity of at least 30 Watts/meter-Kelvin (W/m-K). In various embodiments, the material 652 has a thermal conductivity of at least 100 W/m-K. [0071] As one of ordinary skill in the art will appreciate, subsequent processing steps in a fabrication sequence can be performed on the phase change memory structure 620 shown in Figure 6D. For instance, although not shown in Figure 6D, conductive contacts can be formed on the structure 620 to connect the top electrodes 633-1 and 633-2 to a bit line of the phase change memory cell array, e.g., array 100 shown in Figure 1, for instance. [0072] Figure 7 is a functional block diagram of an electronic memory system 780 having at least one memory device in accordance with an embodiment of the present disclosure. Memory system 780 includes a processor 782 coupled to a non-volatile memory device 784 that includes a memory array 791 of phase change memory cells, e.g., phase change array 100 described in connection with Figure 1 and phase change array 200 described in connection with Figure 2. The memory system 780 can include separate integrated circuits or both the processor 782 and the memory device 784 can be on the same integrated circuit. The processor 782 can be a microprocessor or some other type of controlling circuitry such as an application-specific integrated circuit (ASIC). 
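The paragraphs that follow describe how the array 791 is organized (access devices of each row on a word line, phase change elements on bit lines) and how the row decoder 789 and column decoder 792 resolve a latched address. As a minimal sketch of that address split only, the snippet below assumes a hypothetical 1024x1024 array and a simple row-major address packing; neither dimension nor packing is specified in the disclosure.

```python
# Minimal model of the row/column split performed by decoders such as
# 789 and 792: the row selects a word line (access-device gates), the
# column selects a bit line (phase change elements). The array size and
# the address packing are assumptions for illustration only.
ROWS, COLS = 1024, 1024

def decode_address(address: int) -> tuple[int, int]:
    """Return (row, column) indices for a flat cell address."""
    if not 0 <= address < ROWS * COLS:
        raise ValueError("address out of range")
    row = address // COLS      # word line index
    col = address % COLS       # bit line index
    return row, col

# Example: this address would activate word line 3 and bit line 17.
print(decode_address(3 * COLS + 17))   # -> (3, 17)
```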
[0073] The array 791 of phase change memory cells can be organized according to various architectures known in the art. As an example, the access devices of each row of memory cells are coupled with a word line, while phase change memory elements of the memory cells are coupled to bit lines. [0074] The embodiment of Figure 7 includes address circuitry 788 to latch address signals provided over I/O connections 786 through I/O circuitry 795. Address signals are received and decoded by a row decoder 789 and a column decoder 792 to access the memory array 791. [0075] The memory array 791 can include phase change memory cell structures according to embodiments described herein. The memory device 784 reads data in the memory array 791 by sensing voltage and/or current changes in the memory array columns using sense/buffer circuitry that in this embodiment can be read/latch circuitry 793. The read/latch circuitry 793 can be coupled to read and latch data from the memory array 791. I/O circuitry 795 is included for bi-directional data communication over the I/O connections 786 with the processor 782. Write circuitry 794 is included to write data to the memory array 791. [0076] Control circuitry 787 decodes signals provided by control connections 785 from the processor 782. These signals can include chip signals, write enable signals, and address latch signals that are used to control the operations on the memory array 791, including data read, data write, and data erase operations. In various embodiments, the control circuitry 787 is responsible for executing instructions from the processor 782 to perform the operating and programming embodiments of the present disclosure. The control circuitry 787 can be a state machine, a sequencer, or some other type of controller. It will be appreciated by those skilled in the art that circuitry and/or signals in addition to those shown in Figure 7 can be provided. [0077] Figure 8 is a functional block diagram of a memory module 890 having at least one memory device in accordance with an embodiment of the present disclosure. Memory module 890 is illustrated as a memory card, although the concepts discussed with reference to memory module 890 are applicable to other types of removable or portable memory (e.g., USB PCRAM drives) and are intended to be within the scope of "memory module" as used herein. In addition, although one example form factor is depicted in Figure 8, these concepts are applicable to other form factors as well. [0078] In some embodiments, memory module 890 will include a housing 896 (as depicted) to enclose one or more memory devices 898, though such a housing is not essential to all devices or device applications. At least one memory device 898 includes an array of phase change memory cells according to embodiments described herein. Where present, the housing 896 includes one or more contacts 897 for communication with a host device. Examples of host devices include digital cameras, digital recording and playback devices, PDAs, personal computers, memory card readers, interface hubs and the like. [0079] For some embodiments, the contacts 897 are in the form of a standardized interface. For example, with a USB PCRAM drive, the contacts 897 might be in the form of a USB Type-A male connector. For some embodiments, the contacts 897 may be in the form of a semi-proprietary interface. 
In general, however, contacts 897 provide an interface for passing control, address and/or data signals between the memory module 890 and a host having compatible receptors for the contacts 897. [0080] The memory module 890 may optionally include additional circuitry 899, which may be one or more integrated circuits and/or discrete components. For some embodiments, the additional circuitry 899 may include a memory controller for controlling access across multiple memory devices 898 and/or for providing a translation layer between an external host and a memory device 898. For example, there may not be a one-to-one correspondence between the number of contacts 897 and a number of connections to the one or more memory devices 898. Thus, a memory controller could selectively couple an I/O connection (not shown in Figure 8) of a memory device 898 to receive the appropriate signal at the appropriate I/O connection at the appropriate time or to provide the appropriate signal at the appropriate contact 897 at the appropriate time. [0081] Similarly, the communication protocol between a host and the memory module 890 may be different than what is required for access of a memory device 898. A memory controller could then translate the command sequences received from a host into the appropriate command sequences to achieve the desired access to the memory device 898. Such translation may further include changes in signal voltage levels in addition to command sequences. [0082] The additional circuitry 899 may further include functionality unrelated to control of a memory device 898 such as logic functions as might be performed by an ASIC. Also, the additional circuitry 899 may include circuitry to restrict read or write access to the memory module 890, such as password protection, biometrics or the like. [0083] The additional circuitry 899 may include circuitry to indicate a status of the memory module 890. For example, the additional circuitry 899 may include functionality to determine whether power is being supplied to the memory module 890 and whether the memory module 890 is currently being accessed, and to display an indication of its status, such as a solid light while powered and a flashing light while being accessed. The additional circuitry 899 may further include passive devices, such as decoupling capacitors to help regulate power requirements within the memory module 890. [0084] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of various embodiments of the present disclosure.[0085] It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the various embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. 
[0086] In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. [0087] Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
An arrangement of pads with selective via in pad for mounting a semiconductor package on a substrate. In order to strengthen the soldered bonds, standard pads, which have a stronger bond, are used in locations of greatest stress and deflection. Vias in pad (VIP) are used at all other locations to provide routing advantages due to their smaller surface area. |
CLAIMS1. A substrate device, comprising: a substrate having an upper surface and a lower surface; a plurality of vias connecting said upper surface to said lower surface; a plurality of contact pads on said upper surface, with one pad corresponding to each via, including a first group of contact pads having a first arrangement with a corresponding via and a second group of contact pads having a second arrangement with a corresponding via; said first group of contact pads being arranged at points of greatest stress on said substrate and said second group of contact pads being arranged at other points. 2. The substrate device according to claim 1, wherein said first group of contact pads are laterally separated from corresponding vias and said second group of contact pads contain a corresponding via. 3. The substrate device according to claim 1, wherein said points of greatest stress are points of greatest deflection. 4. The substrate device according to claim 1, wherein said first group of contact pads provide a stronger bond with a received chip package than said second group of contact pads. 5. The substrate device according to claim 1, wherein said first group of contact pads are arranged near corners of said substrate. 6. A method of mounting a semiconductor chip package, comprising: providing a substrate having an upper surface and lower surface and vias joining said surfaces; arranging contact pads on said upper surface, with each contact pad having a corresponding via; a first group of said contact pads having a first arrangement with an associated via and a second group of said contact pads having a second arrangement with an associated via; said first group of said contact pads being arranged at points of greatest stress on said substrate and said second group of said contact pads being arranged at other points. 7. The method according to claim 6, wherein said first group of contact pads are laterally separated from associated vias and said second group of contact pads contain associated vias. 8. The method according to claim 6, wherein said points of greatest stress are points of greatest deflection. 9. The method according to claim 6, wherein said first group of contact pads provide a stronger bond with a received chip package than said second group of contact pads. 10. The method according to claim 6, wherein said first group of contact pads are arranged near corners of said substrate. 11. A substrate device containing at least one semiconductor chip package, comprising: a substrate having an upper surface and lower surface and a plurality of vias joining said surfaces; a plurality of contact pads mounted on said upper surface, each associated with a via; said at least one semiconductor chip package having contacts arranged at corresponding locations to said contact pads; said plurality of contact pads including a first group of contact pads having a first arrangement with an associated via and a second group of contact pads having a second arrangement with an associated via, said first arrangement providing a stronger bond with said contacts than said second arrangement; said first group of contact pads being arranged on said upper surface at points of greatest stress to provide a stronger bond with said semiconductor chip package. 12. The substrate device according to claim 11, wherein said first group of contact pads are laterally separated from corresponding vias and said second group of contact pads contain a corresponding via. 13. 
The substrate device according to claim 11, wherein said points of greatest stress are points of greatest deflection. 14. The substrate device according to claim 11, wherein said first group of contact pads are arranged near corners of said substrate. 15. The substrate device according to claim 14, wherein said first group of contact pads are arranged in a square near each corner. 16. The substrate device according to claim 14, wherein said first group of contact pads are arranged in a triangle near each corner. 17. The substrate device according to claim 14, wherein said first group of contact pads are arranged at said corners of said substrate and along edges of said substrate. |
TITLE : ARRANGEMENT OF VIAS IN A SUBSTRATE TO SUPPORT A BALL GRID ARRAY FIELD [0001] The present invention relates generally to an arrangement of vias in a substrate to provide stronger bonds with a semiconductor package. More specifically, the present invention relates to an arrangement of standard vias and vias in pad for better support of a package having a ball grid array. BACKGROUND [0002] When semiconductor chip packages are applied to a substrate, it is convenient for the packages to be arranged on one side of the substrate and the wiring between the packages to be arranged on the other side. In order to accomplish this, it is necessary to utilize an arrangement known as a via. This consists of a hole through the substrate extending from one surface to the other. The walls of the hole are coated with an electrically conductive material, such as by plating. An electrically conductive pad is formed on each surface which is in electrical contact with the via. In standard vias, on the surface which receives the packages, the via pads are connected by a surface conductor to another pad which is nearby. This pad is used to receive a contact connected to the package. One common package contact arrangement is a ball grid array (BGA). Balls of solder are applied to the bottom of the package in a pattern which matches the contact pads on the substrate. Solder paste is applied to the contact pads and the assembly is heated to reflow the solder and to connect the package to the substrate. [0003] While this arrangement is widely used, this arrangement with two pads and a surface conductor for each package contact uses a lot of surface area and makes routing of connections difficult. Another arrangement has been suggested whereby a single pad is used both as a via pad and a contact pad. This arrangement, known as via in pad (VIP), allows for easier routing on the substrate surface. However, there have been manufacturing difficulties in that the solder balls often do not form a strong connection with the pad. BRIEF DESCRIPTION OF THE DRAWINGS [0004] The foregoing and a better understanding of the present invention will become apparent from the following detailed description of example embodiments and the claims when read in connection with the accompanying drawings, all forming a part of the disclosure of this invention. While the foregoing and following written and illustrated disclosure focuses on disclosing example embodiments of the invention, it should be clearly understood that the same is by way of illustration and example only and that the invention is not limited thereto.The spirit and scope of the present invention are limited only by the terms of the appended claims. [0005] The following represents brief descriptions of the drawings, wherein: [0006] Figure 1 is an example background arrangement useful in gaining a more thorough understanding/appreciation of the present invention; [0007] Figure 2 is an example disadvantageous arrangement useful in gaining a more thorough understanding/appreciation of the present invention; [0008] Figure 3 is an example advantageous arrangement describing the present invention; [0009] Figure 4 is an example background arrangement useful in gaining a more thorough understanding/appreciation of the present invention; [0010] Figure 5 is an example advantageous arrangement describing the present invention. DETAILED DESCRIPTION [0011] Before beginning a detailed description of the subject invention, mention of the following is in order. 
When appropriate, like reference numerals and characters may be used to designate identical, corresponding or similar components in differing figure drawings. Further, in the detailed description to follow, example sizes/models/values/ranges may be given, although the present invention is not limited to the same. [0012] In the VIP arrangement, a hole for the via is necessarily formed in the middle of the pad. This hole reduces the contact area between the solder ball of the BGA and the pad itself. This reduced contact area also reduces the strength of the bond between the ball and pad. Other problems also are involved with this connection. Since different materials are used in different parts of the unit, different thermal expansion coefficients cause the different parts to move unequally when the temperature changes. This can cause the bond to break or be weakened. [0013] The via hole is often filled with a material referred to as a plug. During the reflow process, the plug material is also heated and may be a source of outgassing. The gas given off by the plug may also interfere with the bond of the ball to the pad. Attempts have been made to reduce the outgassing. This includes replacing the plug with a much smaller cap which only covers the end of the via rather than filling it. However, the removal of the plug sometimes allows the solder balls to be squashed, due to the weight of the package and heat sink. As a result, the space between the package and the substrate is smaller than desired. [0014] The use of standard vias avoids many of these problems since there is no hole in the via pad. The solder balls do not squash to the extent that they do with VIP pads. Also, there is no outgassing in the contact pad since the via is separated from it. However, as mentioned above, standard vias require more surface area and make routing of electrodes more difficult. [0015] Attention is now directed to the drawings and particularly to Figure 1 in which a VIP arrangement 10 is shown. A semiconductor chip package 12 includes a contact pad 14, to which a solder ball 16 is attached. The substrate 18, to which the package is to be connected, includes a contact pad 20 to which the solder ball 16 is in contact. A pad 22 is provided on the opposite surface of the substrate. A via 24 extends between pads 20 and 22. The plug 26 is present in the lower part of the via. [0016] When this arrangement is heated, outgassing sometimes occurs from plug 26 causing a gas bubble 28 to form in the center of the solder ball. During the reflow process, when the solder is heated, the bubble may prevent the solder from forming a tight connection. When the process works correctly, the solder flows into the via as far as the top of the plug with the remaining solder making a contact on top of pad 20. Gas bubbles may cause the melted solder not to fill the via completely and leave an uneven joint which may not be mechanically sound. [0017] Figure 2 shows another arrangement of a VIP, without a plug. One of the results is that the solder ball is squashed by the weight of the package, reducing the spacing between the package and the substrate. When heated, the solder may actually drip causing additional problems. [0018] Since the solder joints are subject to a number of mechanical stresses, it is important that they be mechanically strong. In addition to thermomechanical stresses, vibration caused by moving the unit can also be damaging. This not only includes movement during the manufacturing process, but also during packaging and transporting. 
Thus, the bond may be broken when the box in which the unit is carried is dropped when loading it onto a truck. If the bonds are not strong or the separation between the package and the substrate is not sufficient, open joints or short circuited joints may occur. The stresses which are applied are not uniform across the substrate. Highest stresses occur where the substrate deflects the largest amount. In order to withstand this deflection, the present invention utilizes stronger standard vias at the points in which deflection is most likely to occur. [0019] Figure 3 shows an example embodiment of the present invention. In this Figure, two vias are shown. The via on the left shows a VIP arrangement, but with a cap 30 applied instead of a plug. The via on the right is shown in the standard arrangement where the joint is offset from the via. Since the joint is on a solid pad having no holes and no outgassing, it is more resistant to defects or stresses that occur in a VIP joint. Because this joint has a solid pad, it resists the loads presented in reflow due to the weight of the package and heat sink, and also resists deflection caused by thermomechanical and other stresses when not molten. [0020] The concept of the present invention is to utilize VIP pads where possible and to utilize standard pads for increased strength in the areas where stresses are greatest. For example, it is generally believed that the greatest problem areas are along the periphery of the package and especially at the corners. Thus, the present invention utilizes the standard arrangement in the problem areas and VIP pads in all others. [0021] Figure 4 shows a standard BGA field for a VIP situation. The substrate 18 has an array of 32x32 pads for receiving a package having a similar arrangement of solder balls. Each of these pads is a VIP pad and accordingly may have weaknesses as described above. [0022] Figure 5 shows a field which is similar to that shown in Figure 4, except that some of the pads have been changed to standard arrangements rather than VIP arrangements. By having the standard pads at the highest stressed points, they withstand greater deflection before permanent damage renders them nonfunctional. Damage is less likely to occur at these points which are most susceptible to damage. [0023] The standard joints shown in Figure 5 occur at each of the 4 corners. The exact arrangement of the standard joints may vary, depending on which is most effective. The lower right hand corner of the Figure shows an arrangement of four locations in a square. The upper right hand corner shows seven locations including the corner and three additional locations on each side of the corner. The upper left-hand corner shows six locations including the corner and two additional locations on each side, plus the location nearest the corner on a diagonal. Thus a triangle is formed by the six locations. The lower left-hand corner includes five locations. This includes the corner, one location on each side separated from the corner by three locations and two locations on the diagonal. Thus, a triangle with openings on the sides is formed. Clearly, other arrangements of particular locations can be utilized. While these are generally at the corners and along the sides where the deflections and stresses are greatest, they can be placed anywhere in the array where the highest stresses occur. These locations may differ for different packages and different arrangements of heat sinks, since the stresses will be located differently.
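To visualize the mixed field just described for Figure 5, here is a minimal Python sketch that builds a 32x32 pad map with VIP pads everywhere except small clusters of standard pads at the four corners. A 2x2 square cluster (like the four-location square noted above) is used at every corner for simplicity; since the text states the exact arrangement may vary, the cluster size and shape are left as a parameter rather than taken as fixed by the disclosure.

```python
# Build a 32x32 pad map: 'V' marks a via-in-pad site, 'S' marks a standard
# (offset-via) pad placed at a corner cluster, where stress and deflection
# are expected to be greatest. The 2x2 square cluster is one illustrative
# choice; triangular or edge clusters could be substituted.
N = 32

def build_pad_map(cluster: int = 2) -> list[list[str]]:
    pads = [["V"] * N for _ in range(N)]
    corners = [(0, 0), (0, N - cluster), (N - cluster, 0), (N - cluster, N - cluster)]
    for r0, c0 in corners:
        for r in range(r0, r0 + cluster):
            for c in range(c0, c0 + cluster):
                pads[r][c] = "S"   # stronger standard joint at a high-stress site
    return pads

pad_map = build_pad_map()
print(sum(row.count("S") for row in pad_map), "standard pads,",
      sum(row.count("V") for row in pad_map), "VIP pads")   # 16 vs. 1008
```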
[0024] In concluding, reference in the specification to "one embodiment," "an embodiment," "example embodiment," etc., means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of such phrases in various parts of the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure or characteristic is described in connection with any embodiment, it is submitted that it is within the purview of one skilled in the art to effect such feature, structure or characteristic in connection with other ones of the embodiments. [0025] This concludes the description of the example embodiments. Although the present invention has been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this invention. More particularly, reasonable variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the foregoing disclosure, the drawings and the appended claims without departing from the spirit of the invention. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will be apparent to those skilled in the art. |
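As a closing illustration of the tradeoff this description addresses, the quick geometric check below quantifies the point made in paragraph [0012]: the drill hole in a VIP pad removes part of the area available to the solder ball, which is one reason the solid standard pads are reserved for the high-stress corner sites. The pad and drill diameters are hypothetical example values, not dimensions from the disclosure.

```python
import math

# Fraction of a VIP pad's top surface still available for the solder bond,
# relative to a solid standard pad of the same diameter. Diameters are
# hypothetical example values chosen only for illustration.
def remaining_contact_fraction(pad_diameter_mm: float, drill_diameter_mm: float) -> float:
    """Return the fraction of the pad area not lost to the via hole."""
    pad_area = math.pi * (pad_diameter_mm / 2.0) ** 2
    hole_area = math.pi * (drill_diameter_mm / 2.0) ** 2
    return 1.0 - hole_area / pad_area

# Example: a 0.60 mm pad with a 0.25 mm drill keeps roughly 83% of the
# solid-pad contact area.
print(f"{remaining_contact_fraction(0.60, 0.25):.2%}")
```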
Apparatus and methods for protecting electronic circuits are disclosed. In one embodiment, an apparatus for providing protection from transient signals (14, 16) comprises an integrated circuit (1), a pad (42) on a surface of the integrated circuit, and a configurable protection circuit (22) within the integrated circuit. The configurable protection circuit is electrically connected to the pad. The configurable protection circuit comprises a plurality of subcircuits (72, 74, 76) arranged in a cascade, and selection of one or more of the plurality of the subcircuits for operation determines at least one of a holding voltage or a trigger voltage of the configurable protection circuit. |
An apparatus for providing protection from transient electrical events (14, 16), the apparatus comprising:an integrated circuit (1);a pad (42) on a surface of the integrated circuit;a configurable protection circuit (22) within the integrated circuit, wherein the configurable protection circuit is electrically connected to the pad, and wherein the configurable protection circuit comprises a plurality of subcircuits (72, 74, 76) arranged in series, and wherein the plurality of subcircuits are configured such that selection of one or more of the plurality of the subcircuits for operation determines at least one of a holding voltage or a trigger voltage of the configurable protection circuit; anda pad controller (23) configured to select one or more of the plurality of subcircuits for activation to achieve a desired holding voltage and/or a desired trigger voltage for the protection circuit.The apparatus of Claim 1, wherein the plurality of subcircuits comprises one or more subcircuits of a first type and one or more subcircuits of a second type.The apparatus of Claim 2, wherein each subcircuit of the first type has a first holding voltage and a first trigger voltage, and wherein each subcircuit of the second type has a second holding voltage and a second trigger voltage, and wherein when a subcircuit of the first type is selected the holding and trigger voltages of the configurable protection circuit are increased by about the first holding and first trigger voltages, respectively, and wherein when a subcircuit of the second type is selected the holding and trigger voltages of the configurable protection circuit are increased by about the second holding and second trigger voltages, respectively.The apparatus of Claim 3, wherein the first and second trigger voltages are substantially equal.The apparatus of any of Claims 2 to 4, wherein the subcircuit of the first type includes a first end and a second end for arrangement in the cascade, and wherein the subcircuit of the first type comprises a NPN bipolar transistor (100) having an emitter and a collector, wherein the collector is electrically connected to the first end of the subcircuit and the emitter is electrically connected to the second end of the subcircuit.The apparatus of Claim 5, wherein the subcircuit of the second type includes a first end and a second end for arrangement in the cascade, and wherein the subcircuit of the second type comprises a PNP bipolar transistor (102) having an emitter, a base and a collector and a NPN bipolar transistor (103) having an emitter, a base, and a collector, wherein the emitter of the PNP bipolar transistor is electrically connected to the first end of the subcircuit, and wherein the base of the PNP bipolar transistor is electrically connected to the collector of the NPN bipolar transistor, and wherein the collector of the PNP bipolar transistor is electrically connected to the base of the NPN bipolar transistor, and wherein the emitter of the NPN bipolar transistor is electrically connected to the second end of the subcircuit.The apparatus of any of Claims 2 to 6, wherein the plurality of subcircuits further comprises one or more subcircuits of a third type.The apparatus of Claim 7, wherein the subcircuit of the third type includes a first end and a second end for arrangement in the cascade, and wherein the subcircuit of the third type comprises a PNP bipolar transistor (106) having an emitter and a collector, wherein the emitter is electrically connected to the first end of the subcircuit and 
wherein the collector is electrically connected to the second end of the subcircuit.The apparatus of any preceding Claim, further comprising an internal circuit within the integrated circuit, wherein the configurable protection circuit is electrically connected to the internal circuit for protection of the internal circuit from transient electrical events.The apparatus of Claim 9, wherein the internal circuit is configured to operate at an operating voltage, and wherein a first selection of the one or more of the plurality of the subcircuits for operation determines a first holding voltage of the configurable protection circuit, wherein the first holding voltage is less than the operating voltage, and wherein a second selection of the one or more of the plurality of the subcircuits for operation determines a second holding voltage of the configurable protection circuit, wherein the second holding voltage is greater than the operating voltage.The apparatus of any preceding Claim, wherein at least one of the plurality of the subcircuits is disposed such that the at least one of the plurality of the subcircuits fits under the pad.The apparatus of any preceding Claim, wherein the subcircuits selected for activation are at least partially determined by connections in metallization layers (313, 315, 317) of the integrated circuit.The apparatus of any preceding Claim, wherein at least one of the following applies:(a) the pad controller includes metal or poly fuses;(b) the integrated circuit is a power management circuit (20);(c) selection of the one or more of the plurality of the subcircuits for operation determines both the holding voltage and the trigger voltage of the configurable protection circuit.A method for providing protection from transient signals (14, 16), the method comprising:providing an integrated circuit (1) having a pad (42) on a surface of the integrated circuit and having a configurable protection circuit (22) comprising a plurality of subcircuits (72, 74, 76) arranged in series, wherein the plurality of subcircuits are configured such that selection of the one or more of the plurality of the subcircuits for operation determines at least one of a holding voltage or a trigger voltage of the configurable protection circuit; andselecting, by a pad controller (23), one or more of the plurality of the subcircuits for operation in series to achieve a desired holding voltage and/or desired trigger voltage for the protection circuit.The method of Claim 14, wherein the plurality of subcircuits comprises one or more subcircuits of a first type and one or more subcircuits of a second type.The method of Claim 15, wherein each subcircuit of the first type has a first holding voltage and a first trigger voltage, and wherein each subcircuit of the second type has a second holding voltage and a second trigger voltage, and wherein the first and second trigger voltages are substantially equal.The method of any of Claims 14 to 16, wherein at least one of the following applies:(a) selecting the one or more of the plurality of the subcircuits comprises connecting the one or more of the plurality of the subcircuits using metallization layers (313, 315, 317) of the integrated circuit;(b) selecting the one or more of the plurality of the subcircuits comprises blowing at least one fuse of the pad controller;(c) selecting the one or more of the plurality of the subcircuits for operation determines both the holding voltage and the trigger voltage of the configurable protection circuit. |
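Claims 3 and 4 above characterize the configurable protection circuit's holding and trigger voltages as increasing by roughly the per-subcircuit values of whichever first-type and second-type subcircuits are selected into the cascade, with the two trigger voltages substantially equal. A minimal sketch of that additive model follows; the per-type voltage numbers are placeholders rather than values from the disclosure, and a third subcircuit type is included only because the claims allow for one.

```python
# Additive model of a configurable protection circuit built as a cascade of
# selected subcircuits: each selected subcircuit contributes roughly its own
# holding and trigger voltage to the totals. Per-type values are placeholders.
SUBCIRCUIT_TYPES = {
    # type name: (holding_voltage_V, trigger_voltage_V)
    "type1_npn":     (1.5, 8.0),
    "type2_pnp_npn": (3.0, 8.0),   # trigger set equal to type1 (cf. claim 4)
    "type3_pnp":     (1.2, 6.0),
}

def cascade_voltages(selection: list[str]) -> tuple[float, float]:
    """Return (holding_voltage, trigger_voltage) for the selected cascade."""
    holding = sum(SUBCIRCUIT_TYPES[t][0] for t in selection)
    trigger = sum(SUBCIRCUIT_TYPES[t][1] for t in selection)
    return holding, trigger

# Example: pick a cascade whose holding voltage ends up above a 5 V supply,
# one of the two design options the dependent claims describe.
print(cascade_voltages(["type1_npn", "type2_pnp_npn", "type1_npn"]))
```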
BACKGROUNDFieldEmbodiments of the invention relate to electronic systems, and more particularly, to protection circuits for electronic systems.Description of the Related TechnologyCertain electronic systems can be exposed to a transient signal event, or an electrical signal of a relatively short duration having rapidly changing voltage and high power. Transient signal events can include, for example, electrostatic discharge (ESD) events arising from the abrupt release of charge from an object or person to an electronic system.Transient signal events can damage integrated circuits (ICs) inside an electronic system due to overvoltage conditions and/or high levels of power dissipation over relatively small areas of the ICs. High power dissipation can increase IC temperature, and can lead to numerous problems, such as gate oxide punch-through, junction damage, metal damage, and surface charge accumulation. Moreover, transient signal events can induce latch-up (in other words, inadvertent creation of a low-impedance path), thereby disrupting the functioning of the IC and potentially causing permanent damage to the IC. Thus, there is a need to provide an IC with protection from such transient signal events.SUMMARYIn one embodiment, an apparatus for providing protection from transient electrical events comprises an integrated circuit, a pad on a surface of the integrated circuit, and a configurable protection circuit within the integrated circuit. The configurable protection circuit is electrically connected to the pad. Additionally, the configurable protection circuit comprises a plurality of subcircuits arranged in a cascade, and selection of one or more of the plurality of the subcircuits for operation determines at least one of a holding voltage or a trigger voltage of the configurable protection circuit.In another embodiment, a method for providing protection from transient signals comprises providing an integrated circuit having a pad on a surface of the integrated circuit and having a configurable protection circuit comprising a plurality of subcircuits. The method further comprises selecting one or more of the plurality of the subcircuits for operation in a cascade, wherein selecting the one or more of the plurality of the subcircuits for operation determines at least one of a holding voltage or a trigger voltage of the configurable protection circuit.ASPECTS OF THE DISCLOSURENon-limiting aspects of the disclosure are set out in the following numbered clauses.Clause 1. An apparatus for providing protection from transient electrical events (14, 16), the apparatus comprising:an integrated circuit (1);a pad (42) on a surface of the integrated circuit;a configurable protection circuit (22) within the integrated circuit, wherein the configurable protection circuit is electrically connected to the pad, and wherein the configurable protection circuit comprises a plurality of subcircuits (72, 74, 76) arranged in a cascade, and wherein selection of one or more of the plurality of the subcircuits for operation determines at least one of a holding voltage or a trigger voltage of the configurable protection circuit.Clause 2. The apparatus of Clause 1, wherein the plurality of subcircuits comprises one or more subcircuits of a first type and one or more subcircuits of a second type.Clause 3. 
The apparatus of Clause 2, wherein each subcircuit of the first type has a first holding voltage and a first trigger voltage, and wherein each subcircuit of the second type has a second holding voltage and a second trigger voltage, and wherein when a subcircuit of the first type is selected the holding and trigger voltages of the configurable protection circuit are increased by about the first holding and first trigger voltages, respectively, and wherein when a subcircuit of the second type is selected the holding and trigger voltages of the configurable protection circuit are increased by about the second holding and second trigger voltages, respectively.Clause 4. The apparatus of Clause 3, wherein the first and second trigger voltages are substantially equal.Clause 5. The apparatus of Clause 2, wherein the subcircuit of the first type includes a first end and a second end for arrangement in the cascade, and wherein the subcircuit of the first type comprises a NPN bipolar transistor (100) having an emitter and a collector, wherein the collector is electrically connected to the first end of the subcircuit and the emitter is electrically connected to the second end of the subcircuit.Clause 6. The apparatus of Clause 5, wherein the subcircuit of the second type includes a first end and a second end for arrangement in the cascade, and wherein the subcircuit of the second type comprises a PNP bipolar transistor (102) having an emitter, a base and a collector and a NPN bipolar transistor (103) having an emitter, a base, and a collector, wherein the emitter of the PNP bipolar transistor is electrically connected to the first end of the subcircuit, and wherein the base of the PNP bipolar transistor is electrically connected to the collector of the NPN bipolar transistor, and wherein the collector of the PNP bipolar transistor is electrically connected to the base of the NPN bipolar transistor, and wherein the emitter of the NPN bipolar transistor is electrically connected to the second end of the subcircuit.Clause 7. The apparatus of Clause 2, wherein the plurality of subcircuits further comprises one or more subcircuits of a third type.Clause 8. The apparatus of Clause 7, wherein the subcircuit of the third type includes a first end and a second end for arrangement in the cascade, and wherein the subcircuit of the third type comprises a PNP bipolar transistor (106) having an emitter and a collector, wherein the emitter is electrically connected to the first end of the subcircuit and wherein the collector is electrically connected to the second end of the subcircuit.Clause 9. The apparatus of Clause 1, further comprising an internal circuit within the integrated circuit, wherein the configurable protection circuit is electrically connected to the internal circuit for protection of the internal circuit from transient electrical events.Clause 10. The apparatus of Clause 9, wherein the internal circuit is configured to operate at an operating voltage, and wherein a first selection of the one or more of the plurality of the subcircuits for operation determines a first holding voltage of the configurable protection circuit, wherein the first holding voltage is less than the operating voltage, and wherein a second selection of the one or more of the plurality of the subcircuits for operation determines a second holding voltage of the configurable protection circuit, wherein the second holding voltage is greater than the operating voltage.Clause 11. 
The apparatus of Clause 1, wherein at least one of the plurality of the subcircuits is disposed such that the at least one of the plurality of the subcircuits fits under the pad.Clause 12. The apparatus of Clause 11, wherein at least two of the plurality of the subcircuits are disposed such that the at least two of the plurality of the subcircuits fits under the pad.Clause 13. The apparatus of Clause 1, wherein the subcircuits selected for activation are at least partially determined by connections in metallization layers (313, 315, 317) of the integrated circuit.Clause 14. The apparatus of Clause 12, wherein the subcircuits selected for activation are determined by connections with three of the metallization layers of the integrated circuit.Clause 15. The apparatus of Clause 1, wherein the subcircuits selected for activation are at least partially determined by a pad controller (23).Clause 16. The apparatus of Clause 1, wherein the pad controller includes metal or poly fuses.Clause 17. The apparatus of Clause 1, wherein the integrated circuit is a power management circuit (20).Clause 18. The apparatus of Clause 1, wherein selection of the one or more of the plurality of the subcircuits for operation determines both the holding voltage and the trigger voltage of the configurable protection circuit.Clause 19. A method for providing protection from transient signals (14, 16), the method comprising:providing an integrated circuit (1) having a pad (42) on a surface of the integrated circuit and having a configurable protection circuit (22) comprising a plurality of subcircuits (72, 74, 76); andselecting one or more of the plurality of the subcircuits for operation in a cascade, wherein selecting the one or more of the plurality of the subcircuits for operation determines at least one of a holding voltage or a trigger voltage of the configurable protection circuit.Clause 20. The method of Clause 14, wherein the plurality of subcircuits comprises one or more subcircuits of a first type and one or more subcircuits of a second type.Clause 21. The method of Clause 15, wherein each subcircuit of the first type has a first holding voltage and a first trigger voltage, and wherein each subcircuit of the second type has a second holding voltage and a second trigger voltage, and wherein the first and second trigger voltages are substantially equal.Clause 22. The method of Clause 14, wherein selecting the one or more of the plurality of the subcircuits comprises connecting the one or more of the plurality of the subcircuits using metallization layers (313, 315, 317) of the integrated circuit.Clause 23. The method of Clause 14, wherein selecting the one or more of the plurality of the subcircuits comprises connecting the one or more of the plurality of the subcircuits using a pad controller (23).Clause 24. The method of Clause 14, wherein selecting the one or more of the plurality of the subcircuits comprises blowing at least one fuse of the pad controller.Clause 25. 
The method of Clause 14, wherein selecting the one or more of the plurality of the subcircuits for operation determines both the holding voltage and the trigger voltage of the configurable protection circuit.BRIEF DESCRIPTION OF THE DRAWINGSFigure 1 is a schematic block diagram of one example of an electronic system including integrated circuits (ICs).Figure 2 is a schematic block diagram of an integrated circuit including pad circuits according to some embodiments.Figure 3A is a graph of one example of pad circuit current versus transient signal voltage.Figure 3B is a graph of another example of pad circuit current versus transient signal voltage.Figure 4A is a schematic block diagram of a pad circuit in accordance with one embodiment.Figure 4B is a schematic block diagram of a pad circuit in accordance with another embodiment.Figure 5A is a circuit diagram illustrating a pad circuit building block in accordance with one embodiment.Figure 5B is a circuit diagram illustrating a pad circuit building block in accordance with another embodiment.Figure 5C is a circuit diagram illustrating a pad circuit building block in accordance with yet another embodiment.Figure 6A is a cross section of a conventional NMOS transistor having a lightly doped drain (LDD) structure.Figure 6B is a cross section of an NPN bipolar transistor in accordance with one embodiment.Figure 6C is a cross section of a PNP bipolar transistor in accordance with another embodiment.Figure 7A is a circuit diagram illustrating a pad circuit building block in accordance with yet another embodiment.Figure 7B is a cross section of one implementation of the pad circuit building block of Figure 7A .Figure 8A is a circuit diagram illustrating a pad circuit building block in accordance with yet another embodiment.Figure 8B is a cross section of one implementation of the pad circuit building block of Figure 8A .Figure 9A is a schematic block diagram of a pad circuit according to a first embodiment.Figure 9B is a circuit diagram of the pad circuit of Figure 9A .Figure 10A is a schematic block diagram of a pad circuit according to a second embodiment.Figure 10B is a circuit diagram of the pad circuit of Figure 10A .Figure 11A is a schematic block diagram of a pad circuit according to a third embodiment.Figure 11B is a circuit diagram of the pad circuit of Figure 11A .Figure 12A is a schematic block diagram of a pad circuit according to a fourth embodiment.Figure 12B is a circuit diagram of the pad circuit of Figure 12A .Figure 13A is a schematic block diagram of a pad circuit according to a fifth embodiment.Figure 13B is a circuit diagram of the pad circuit of Figure 13A .Figure 14A is a schematic block diagram of a pad circuit according to a sixth embodiment.Figure 14B is a circuit diagram of the pad circuit of Figure 14A .Figure 15 is a circuit diagram illustrating a pad circuit building block in accordance with yet another embodiment.Figure 16A is a schematic block diagram of a pad circuit according to a seventh embodiment.Figure 16B is a circuit diagram of the pad circuit of Figure 16A .Figure 17A is a perspective view of one implementation of the pad circuit of Figure 12B .Figure 17B is a cross section of the pad circuit of Figure 17A taken along the line 17B-17B.Figure 17C is a cross section of the pad circuit of Figure 17A taken along the line 17C-17C.Figure 17D is a cross section of the pad circuit of Figure 17A taken along the line 17D-17D.Figure 17E is a top plan view of the active and polysilicon layers of the pad circuit of 
Figure 17A .Figure 17F is a top plan view of the contact and first metal layers of the pad circuit of Figure 17A .Figure 17G is a top plan view of the first metal layer and first via layer of the pad circuit of Figure 17A .Figure 17H is a top plan view of the second metal layer and second via layer of the pad circuit of Figure 17A .Figure 17I is a top plan view of the third metal layer of the pad circuit of Figure 17A .Figure 18A is a perspective view of one implementation of the pad circuit of Figure 11B .Figure 18B is a cross section of the pad circuit of Figure 18A taken along the line 18B-18B.DETAILED DESCRIPTION OF EMBODIMENTSThe following detailed description of certain embodiments presents various descriptions of specific embodiments of the invention. However, the invention can be embodied in a multitude of different ways as defined and covered by the claims. In this description, reference is made to the drawings where like reference numerals indicate identical or functionally similar elements.Electronic systems are typically configured to protect circuits or components therein from transient signal events. Furthermore, to help assure that an electronic system is reliable, manufacturers can test the electronic system under defined stress conditions, which can be described by standards set by various organizations, such as the Joint Electronic Device Engineering Council (JEDEC), the International Electrotechnical Commission (IEC), and the Automotive Engineering Council (AEC). The standards can cover a wide range of transient signal events, including ESD events.Electronic circuit reliability can be improved by coupling pad protection circuits to the pads of an IC for transient signal protection. The pad circuits can be configured to maintain the voltage level at the pad within a predefined safe range. However, it can be difficult to provide pad circuits that meet reliability and performance requirements with low manufacturing cost and a relatively small circuit area.An integrated circuit (IC) can have many pads, and different pads can be exposed to different voltage domains. Each voltage domain can have different performance and reliability requirements. For example, each voltage domain can have a different minimum operating voltage, maximum operating voltage, and constraint on leakage current. There is a need for providing IC protection pads operating over a multitude of voltage domains to enhance electronic circuit reliability for ICs in a simple and cost-effective manner.Overview of Electronic SystemsFigure 1 is a schematic block diagram of an electronic system 10, which can include one or more pad circuits according to an embodiment of the invention. The illustrated electronic system 10 includes a first IC 1, a second IC 2, and pins 4, 5, 6. As illustrated in Figure 1 , the pin 4 is electrically connected to the first IC 1 by a connection 7. The pin 5 is electrically connected to the second IC 2 by a connection 8. The electronic system 10 can also include pins electrically connected to both the first and second ICs 1, 2. For example, the illustrated pin 6 is electrically connected to the first and second ICs 1, 2 by a connection 9. Additionally, the first and second ICs 1, 2 can be electrically connected to one another by one or more connections internal to the electronic system 10, such as by connections 11 and 12. The first and second ICs 1, 2 can be exposed to user contact via, for example, the pins 4, 5, 6. 
The user contact can be through a relatively low-impedance connection.The first and second ICs 1, 2 can be exposed to transient signal events, such as ESD events, which can cause IC damage and induce latch-up. For example, the connection 11 can receive a device-level transient signal event 14, and/or the pin 6 can receive a system-level transient signal event 16. The transient signal events 14, 16 can travel along the connections 11, 9, respectively, and can be received at the pads of the first and second ICs 1, 2.In some embodiments, the first and second ICs 1, 2 can include pads, and can be provided with pad circuits configured to ensure reliability of the ICs by maintaining the voltage level at the pads within a selected range, which can vary from pad to pad. For example, either or both of the first and second ICs 1, 2 can include one or more pads configured to operate over a multitude of voltage domains or current bias conditions, each having varying performance and reliability requirements.Overview of Power Management ICsIn some embodiments, one or more pad circuits can be employed in an IC, such as the first IC 1 of Figure 1 , and can be configured to provide transient signal protection to one or more internal circuits of the IC. The pad circuit can be configured to divert a current associated with a transient signal event received on a pad of the IC to other nodes or pads of the IC, thereby providing transient signal protection, as will be described in further detail below. The current can be shunted from, for example, a low-impedance output pad, a high-impedance input pad, or a low-impedance power or ground pad, to a low impedance pad or node of the IC. When no transient signal event is present, the pad circuit can remain in a high-impedance/low-leakage state, thereby reducing or minimizing static power dissipation resulting from leakage current and improving the operation of leakage sensitive circuitry, as will be described in detail below.In other embodiments, one or more pad circuits can be provided in a single IC (for example, the first IC 1 of Figure 1 ), and can be configured to provide transient signal protection for another component (for example, the second IC 2 of Figure 1 ). The first IC 1 can be physically separated from the second IC 2, or it can be encapsulated in a common package with the second IC 2. In such embodiments, one or more pad circuits can be placed in a stand-alone IC, in a common package for system-on-a-package applications, or integrated with an IC in a common semiconductor substrate for system-on-a-chip applications.Figure 2 is a schematic block diagram of one example of an integrated circuit (IC) including pad circuits according to some embodiments. The IC 20 can be a power management IC, which can include, for example, pad circuits 22a-22p, a pad controller 23, comparators 27a-27h, a multiplexer 30, first and second OR gates 31a, 31b, an output logic 32, a clear logic 33, a voltage reference circuit 35, a timer 39, and pads 42a-42p. The power management IC 20 can be included in an electronic system, such as the electronic system 10 of Figure 1 , and can be, for example, the first IC 1 or the second IC 2. Depending on a design specification, not all of the illustrated components are necessary. 
For example, skilled artisans will appreciate that the pad controller 23 need not be included, that the power management IC 20 can be modified to monitor more or fewer voltage domains, and that the power management IC 20 can have more extensive or less extensive functionality.Furthermore, although the pad circuits are illustrated in the context of the power management IC 20, the pad circuits can be employed in a wide array of ICs and other electronics having pads configured to operate over a multitude of voltage domains or current bias conditions.The power management IC 20 can be configured to simultaneously monitor multiple voltage domains for overvoltage and undervoltage conditions, as will be described below. For example, the power management IC 20 can generate an overvoltage signal coupled to the pad 42i (OVERVOLTAGE), which can indicate whether or not an overvoltage condition is detected on any of the pads 42a-42d (VH1, VH2, VH3, and VH4, respectively). Additionally, the power management IC 20 can generate an undervoltage signal coupled to the pad 42j (UNDERVOLTAGE), which can indicate whether or not an undervoltage condition is detected on any of the pads 42e-42h (VL1, VL2, VL3, and VL4, respectively). Although the illustrated power management IC 20 is configured to monitor up to four voltage domains, skilled artisans will appreciate that this choice is merely illustrative, and that alternate embodiments of the power management IC 20 can be configured to be able to monitor more or fewer voltage domains, as well as to feature more extensive or less extensive functionality.The power management IC 20 can aid in the integration and bias of ICs and other components of the electronic system 10. The power management IC 20 can also detect overvoltage conditions and/or undervoltage conditions which can endanger the proper operation of the electronic system 10. Additionally, the power management IC 20 can aid in reducing power consumption by detecting overvoltage conditions which can undesirably increase power consumption.The power management IC 20 can be subject to stringent performance and design requirements. For example, the power management IC 20 can be subject to relatively tight constraints on leakage current in order to reduce static power dissipation and to improve performance for leakage-sensitive circuitry, as will be described below. Additionally, the power management IC 20 can be used to interact with multiple voltage domains, and thus should be able to handle relatively high input and output voltages without latching-up or sustaining physical damage. Moreover, there can be stringent requirements regarding the expense of the design and manufacture of the power management IC 20. Furthermore, in certain embodiments, configurability of the performance and design parameters of the power management IC 20 can be desirable, thereby permitting the power management IC 20 to be employed in a vast array of electronic systems and applications.Each of the comparators 27a-27h can monitor an overvoltage or undervoltage condition of a voltage domain. This can be accomplished by providing a voltage from a voltage domain to a comparator. For example, a resistor divider (not shown in Figure 2 ) having a series of resistors can be placed between a voltage supply of a voltage domain and a voltage reference, such as ground. A voltage can be tapped between the series of resistors and can be provided to a pad of the power management IC 20, such as, for example, the pad 42a (VH1). 
The voltage received at the pad 42a can be provided to the comparator 27a, which in turn can compare the voltage received from the pad 42a to a threshold voltage Vx. In one embodiment, the threshold voltage Vx is selected to be about 500mV. By selecting the voltage provided to the pad 42a (for example, by selecting the number and magnitude of the resistors in the divider), the output of the comparator 27a can be configured to change when the voltage supply of a voltage domain exceeds a selected value. Likewise, by selecting the voltage provided to the pad 42e in a similar manner, the output of the comparator 27e can be configured to change when the supply of a voltage domain falls below a selected value.As described above, the voltage provided to the pads 42a-42h can be provided from a resistor divider. The impedance of the resistors in the resistor divider can be relatively large (for example, tens of Mega-Ohms) so as to minimize system-level static power consumption. Thus, the accuracy of the resistor divider can be sensitive to the leakage of the pads 42a-42h, and there can be stringent performance requirements on the leakage current of the pads 42a-42h.The first OR gate 31a can determine if one or more of the comparators coupled to its inputs indicate that an overvoltage condition has been detected. Likewise, the second OR gate 31b can determine if one or more of the comparators coupled to its inputs indicate that an undervoltage condition has been detected. In the illustrated embodiment, the outputs of comparators 27a, 27b are provided to the first OR gate 31a, while the outputs of the comparators 27e, 27f are provided to the second OR gate 31b.Additionally, the first and second OR gates 31a, 31b can each receive signals from the multiplexer 30. The multiplexer 30 can allow overvoltage and undervoltage detection to be performed on voltage domains having a negative polarity with respect to the voltage received on the ground pad 42o (GND), such that overvoltage and undervoltage relate to magnitudes or absolute values of voltage. In particular, the multiplexer 30 can select which comparator signals are provided to the first and second OR gates 31a, 31b in response to a select control signal received from the pad 42p (SEL). For example, the multiplexer 30 can be configured to selectively provide the first OR gate 31a with the output of the comparator 27c or the comparator 27g, and the output of the comparator 27d or the comparator 27h, based on a state of the select control signal received from the pad 42p (SEL). Likewise, the multiplexer 30 can be configured to selectively provide the second OR gate 31b with the output of the comparator 27c or the comparator 27g, and the output of the comparator 27d or the comparator 27h, based on a state of the select control signal received from the pad 42p (SEL). By selecting which comparator outputs are provided to the first and second OR gates 31a, 31b, overvoltage and undervoltage detection can be performed on the voltages on the pads 42c, 42d and 42g, 42h, even for voltage domains having a negative polarity with respect to ground. The multiplexer 30 can be implemented with logic gates, with 3-state gates, or the like.The output logic 32 can control the state of the pad 42i (OVERVOLTAGE) and the pad 42j (UNDERVOLTAGE). For example, the output logic 32 can indicate that an overvoltage or undervoltage condition has been detected based at least in part on the outputs of the first and second OR gates 31a, 31b. 
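To make the divider arithmetic concrete, the short sketch below computes the tapped voltage seen by a monitoring pad, the supply level at which a comparator referenced to a threshold of about 500 mV would trip, and the static divider current. The resistor values, supply voltage, and names used here are illustrative assumptions rather than values taken from Figure 2.

# Illustrative sketch of the external resistor divider feeding an overvoltage
# comparator. The resistor values and supply voltage are assumed; only the
# ~500 mV threshold comes from the description above.

VX = 0.5  # comparator threshold voltage in volts (about 500 mV)

def divider_tap(v_supply, r_top, r_bottom):
    """Voltage tapped between the divider resistors and provided to the pad."""
    return v_supply * r_bottom / (r_top + r_bottom)

def supply_trip_point(r_top, r_bottom, vx=VX):
    """Monitored supply voltage at which the tapped voltage reaches the threshold."""
    return vx * (r_top + r_bottom) / r_bottom

r_top, r_bottom = 39e6, 1e6   # tens of megaohms to keep static current small (assumed)
v_supply = 12.0               # monitored supply voltage in volts (assumed)

print(f"tap voltage at pad: {divider_tap(v_supply, r_top, r_bottom):.3f} V")
print(f"supply trip point:  {supply_trip_point(r_top, r_bottom):.1f} V")
print(f"divider current:    {v_supply / (r_top + r_bottom) * 1e9:.0f} nA")

With roughly 40 MΩ in the divider, the static divider current is only a few hundred nanoamperes, so even a pad leakage of a few nanoamperes shifts the tapped voltage by a noticeable fraction, which is why the leakage budget on these pads is so tight.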
The output logic 32 can signal the detection of an overvoltage or undervoltage condition for a duration exceeding the time that the first or second OR gate 31a, 31b indicates that an overvoltage or undervoltage condition has been detected. For example, the output logic 32 can receive a signal from the timer 39, which can indicate the duration that the overvoltage or undervoltage condition should be asserted. The timer 39 can be electrically connected to the pad 42m (TIMER) and can be configured to have a drive strength and corresponding drive resistance. The pad 42m can be electrically connected to an external capacitor, which can have a variable capacitance to establish an RC time constant for determining the reset delay of the timer 39. The output logic 32 can also be configured to communicate with the clear logic 33. The clear logic 33 can receive a clear control signal from the pad 42k (CLEAR). In response to the clear control signal, the output logic 32 can reset the state of the pads 42i (OVERVOLTAGE) and 42j (UNDERVOLTAGE) to indicate that no overvoltage or undervoltage condition has been detected. The power management IC 20 can also provide an output reference voltage on the pad 42l (VREF). This voltage can be selected to be, for example, about 1 V. The output reference voltage can be used by other components of the electronic system in which the power management IC 20 is implemented (for example, the electronic system 10 of Figure 1). For example, the reference voltage can be provided to one end of a resistor divider configured to provide a voltage to the pads 42a-42h for overvoltage or undervoltage detection. As described above, the power management IC 20 can be configured to monitor multiple voltage domains, for example, four voltage domains for overvoltage and undervoltage conditions. Each of the voltage domains can have the same or different operating conditions and parameters. Additionally, the power management IC 20 can include a multitude of output pads, such as the pad 42i for indicating the detection of an overvoltage condition, the pad 42j for indicating the detection of an undervoltage condition, and the pad 42l for providing the output reference voltage. The power management IC 20 can also include control pads, such as the pad 42p (SEL), the pad 42k (CLEAR), and the pad 42m (TIMER). Furthermore, the power management IC 20 can include the power pad 42n (Vcc) and the ground pad 42o (GND). In some embodiments, the electronic system (for example, the electronic system 10 of Figure 1) in which the pads 42a-42p are used can have different requirements for minimum operating voltage, maximum operating voltage, and leakage current for each of the pads 42a-42p. Thus, each of the pads 42a-42p described above can have different performance and design requirements. In order to meet reliability requirements across a wide variety of applications, it can be desirable that one or more of the pads 42a-42p have a pad circuit configured to protect the power management IC 20 from overvoltage conditions and latch-up. Furthermore, it can be desirable that each pad circuit 22a-22p is configurable to operate with different reliability and performance parameters, for example, by changing only metal layers during back-end processing, or by using the pad controller 23 after fabrication.
This can advantageously permit the pad circuits 22a-22p to be configurable for a particular application without requiring a redesign of the power management IC 20. Figure 3A illustrates a graph 60 of one example of pad circuit current versus transient signal voltage. As described above, it can be desirable for each pad circuit 22a-22p to be configured to maintain the voltage level at the pad within a predefined safe range. Thus, the pad circuit can shunt a large portion of the current associated with the transient signal event before the voltage of the transient signal VTRANSIENT reaches a voltage VFAILURE that can cause damage to the power management IC 20. Additionally, the pad circuit can conduct a relatively low current at the normal operating voltage VOPERATION, thereby minimizing static power dissipation resulting from the leakage current ILEAKAGE and improving the performance of leakage-sensitive circuitry, such as a resistor divider. Furthermore, as shown in the graph 60, the pad circuit can transition from a high-impedance state ZH to a low-impedance state ZL when the voltage of the transient signal VTRANSIENT reaches the voltage VTRIGGER. Thereafter, the pad circuit can shunt a large current over a wide range of transient signal voltage levels. The pad circuit can remain in the low-impedance state ZL as long as the transient signal voltage level is above a holding voltage VHOLDING and the rate of voltage change is in the range of normal frequency operating conditions, rather than in the range of high frequency conditions and relatively fast rise and fall times which can be associated with a transient signal event. In certain embodiments, it can be desirable for the holding voltage VHOLDING to be above the operating voltage VOPERATION so that the pad circuit does not remain in the low-impedance state ZL after passage of the transient signal event and a return to normal operating voltage levels. Figure 3B is a graph 62 of another example of pad circuit current versus transient signal voltage. As shown in Figure 3B, a pad circuit can transition from a high-impedance state ZH to a low-impedance state ZL when the voltage of the transient signal VTRANSIENT reaches the voltage VTRIGGER. Thereafter, the pad circuit can shunt a large current over a wide range of transient signal voltage levels. The pad circuit can remain in the low-impedance state ZL as long as the transient signal voltage level is above a holding voltage VHOLDING. It can be desirable for the holding voltage VHOLDING to be below the operating voltage VOPERATION in order to provide enhanced protection against transient signal events and to reduce the circuit area needed to provide a desired pad shunting current. This technique can be employed, for example, in embodiments in which the holding current IHOLDING exceeds the maximum current the pad can supply when biased at normal operating voltage levels. Thus, in certain embodiments, the pad circuit need not remain in the low-impedance state ZL after passage of the transient signal event and a return to normal operating voltage levels, even when VOPERATION exceeds VHOLDING, because the pad may not be able to supply a sufficient holding current IHOLDING to retain the pad circuit in the low-impedance state ZL. As described above, the operating and reliability parameters of a pad circuit can vary widely, depending on a particular application.
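Before turning to the example requirements in Table 1, the trigger-and-hold behavior of Figures 3A and 3B can be summarized with a minimal behavioral sketch. This is an assumption-laden state model rather than a device simulation: the class name and numeric values below are hypothetical, and the model only tracks whether a clamp with a given VTRIGGER, VHOLDING, and IHOLDING is in its high-impedance or low-impedance state.

# Minimal behavioral model of the snapback characteristic of Figures 3A/3B.
# The class name and the numeric values are illustrative assumptions.

class PadClampModel:
    def __init__(self, v_trigger, v_holding, i_holding):
        self.v_trigger = v_trigger   # voltage at which the clamp snaps to low impedance
        self.v_holding = v_holding   # minimum voltage that sustains the low-impedance state
        self.i_holding = i_holding   # minimum current that sustains the low-impedance state
        self.low_impedance = False

    def update(self, v_pad, i_available):
        """Update the clamp state for one observed pad voltage and available pad current."""
        if not self.low_impedance:
            if v_pad >= self.v_trigger:
                self.low_impedance = True   # transient event trips the clamp
        elif v_pad < self.v_holding or i_available < self.i_holding:
            self.low_impedance = False      # clamp releases once it cannot be held
        return self.low_impedance

# Figure 3B style: VHOLDING below VOPERATION, but the pad cannot source IHOLDING
# at normal bias, so the clamp still releases after the transient passes.
clamp = PadClampModel(v_trigger=26.0, v_holding=15.0, i_holding=0.1)
print(clamp.update(v_pad=30.0, i_available=5.0))     # True: transient trips the clamp
print(clamp.update(v_pad=18.0, i_available=0.001))   # False: normal bias cannot hold it

The second call illustrates the point made above: even with VOPERATION above VHOLDING, the clamp returns to its high-impedance state if the pad cannot supply the holding current.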
For purposes of illustration only, one particular electronic system can have the characteristics shown in Table 1 below for selected pads of Figure 2.

Table 1
Pad          | Operating Voltage (Min/Max) | Holding Voltage (Min/Max) | Trigger Voltage (Min/Max) | Leakage Current (Min/Max)
VH1          | 0 V / 8 V                   | 9 V / 13 V                | 16 V / 20 V               | 0 nA / 15 nA
VH2          | 0 V / 8 V                   | 6 V / 10 V                | 16 V / 20 V               | 0 nA / 15 nA
VH3          | 0 V / 8 V                   | 3 V / 7 V                 | 16 V / 20 V               | 0 nA / 15 nA
VH4          | 0 V / 16 V                  | 6 V / 10 V                | 24 V / 30 V               | 0 nA / 15 nA
Vcc          | 18 V / 20 V                 | 22 V / 24 V               | 24 V / 30 V               | 0 nA / 10 nA
OVERVOLTAGE  | 0 V / 16 V                  | 14 V / 18 V               | 24 V / 30 V               | 0 nA / 15 nA
UNDERVOLTAGE | 0 V / 16 V                  | 8 V / 12 V                | 24 V / 30 V               | 0 nA / 15 nA

There is a need for pad circuits which can be configured to meet the performance and design parameters of an electronic circuit or IC (such as the power management IC 20 of Figure 2) required for a particular application. Furthermore, in certain embodiments, there is a need for pad circuits which can operate with different reliability and performance parameters, for example, by changing only metal layers, or by configuring the power management IC 20 post-fabrication by selecting the setting of the pad controller 23. This can advantageously permit the pad circuits 22a-22p to be configured for a particular application without requiring a redesign of the power management IC 20. The pad controller 23 can employ metal or poly fuses to control the operation of an ESD tolerant switch, as will be described in further detail below.

IC Pad Circuits For Protection From Transient Signal Events

Figure 4A is a schematic block diagram of a pad circuit 22 according to an embodiment of the invention. The illustrated pad circuit 22 includes a first building block 72, a second building block 74, and a third building block 76. The first, second, and third building blocks 72, 74, 76 can be connected end-to-end in a cascade configuration between a pad 42 and a node 82, and can be subcircuits of the pad circuit 22. More or fewer building blocks can be included in the cascade to achieve the desired reliability and performance parameters, as will be described in further detail below. The pad circuit 22 can be, for example, any of the pad circuits 22a-22p shown in Figure 2, and the pad 42 can be any of the pads 42a-42p, including, for example, low-impedance output pads, high-impedance input pads, and low-impedance power pads. The node 82 can be, for example, a low impedance node or pad of the power management IC 20 configured to handle a relatively large shunted current. The building blocks 72, 74, 76 can form a pad circuit that has characteristics shown in Figure 3A or 3B. In one embodiment, the first, second and third building blocks 72, 74, 76 can be selected from a variety of types, such as a variety of electrically isolated clamp structures, so as to achieve the desired performance and reliability parameters for the pad circuit 22. For example, a first type of building block (Type A) can have a holding voltage VH_A and a trigger voltage VT_A. A second type of building block (Type B) can have, for example, a trigger voltage VT_B and a holding voltage VH_B. By arranging more or fewer of each type of building block, the overall holding voltage and trigger voltage of embodiments of the pad circuit 22 can be selectively varied. As will be described below, the building block types can be selected such that, when combining i number of Type A building blocks and j number of Type B building blocks in a cascade configuration, the pad circuit 22 can have a trigger voltage VTRIGGER roughly equal to about i*VT_A + j*VT_B, and a holding voltage VHOLDING roughly equal to about i*VH_A + j*VH_B.
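The additive relationship just described can be illustrated with a short sketch. The per-block trigger and holding voltages below are hypothetical placeholders chosen only to show the arithmetic; they are not characterized values for the Type A and Type B blocks.

# Sketch of the additive cascade estimate described above:
# VTRIGGER ~ i*VT_A + j*VT_B and VHOLDING ~ i*VH_A + j*VH_B.
# The per-block voltages are assumed placeholders, not measured values.

def cascade_voltages(i, j, vt_a, vh_a, vt_b, vh_b):
    """Approximate trigger and holding voltages of i Type A blocks plus j Type B blocks."""
    return i * vt_a + j * vt_b, i * vh_a + j * vh_b

# Assumed example: both block types trigger near 9 V, but Type B (regenerative
# feedback) holds at a lower voltage than Type A.
VT_A, VH_A = 9.0, 7.5
VT_B, VH_B = 9.0, 4.0

for i, j in [(3, 0), (2, 1), (1, 2), (0, 3)]:
    vt, vh = cascade_voltages(i, j, VT_A, VH_A, VT_B, VH_B)
    print(f"{i} x Type A + {j} x Type B -> VTRIGGER ~ {vt:.1f} V, VHOLDING ~ {vh:.1f} V")

With equal per-block trigger voltages, every three-block cascade in this sketch triggers near the same level while the mix of Type A and Type B blocks tunes the holding voltage, which is the property exploited in the embodiments described below.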
Thus, by selecting the type and/or number of building blocks employed after manufacturing, and/or selecting the value of VH_A, VH_B, VT_Aand VT_Bduring design of the building blocks, a scalable family of pad circuit embodiments can be created which can be adapted for a multitude of electronic systems and applications.The design cost associated with designing the pad circuits can be reduced as compared to, for example, an approach in which different diode, bipolar, silicon controlled rectifier, and/or MOS devices are employed to achieve the reliability and performance requirements needed for each pad circuit. Moreover, in one embodiment, a first building block is placed below the pad and additional building blocks are placed in the vicinity of the pad. During back-end fabrication (for example, fabrication of metal layers), building blocks can be included in a cascade configuration with the first building block. Thus, each pad circuit 22 can be configured for a particular electronic system or application by changing the metal layers to control the building block configuration, as will be described below.Figure 4B is a schematic block diagram of a pad circuit in accordance with one embodiment. The illustrated pad circuit 22 includes a first building block 72, a second building block 74, and a third building block 76. The first, second, and third building blocks 72, 74, 76 can be connected end-to-end in a cascade configuration between a pad 42 and a node 82. Additional or fewer building blocks and blocks of a variety of types can be included in the cascade, as described earlier in connection with Figure 4A .Additionally, as illustrated in Figure 4B , the pad controller 23 can be configured to control the connections between the cascaded building blocks. For example, the pad controller 23 can be configured to bypass the second building block 74, thus selectively omitting the second building block 74 from the cascade. In one embodiment, a first building block is formed below the pad and additional building blocks are formed in the vicinity of the pad. After completing both front-end and back-end fabrication, particular building blocks can be included in a cascade with the first building block using the pad controller 23. For example, the pad controller 23 can be configured to include or exclude particular building blocks, thereby configuring the pad circuit 22 to have the trigger voltage VTRIGGERand holding voltage VHOLDINGdesired for a particular application. In one embodiment, each pad circuit 22 can be individually controlled by the pad controller 23 to achieve the desired cascade. In alternative embodiments, groupings of pads can be collectively configured by the pad controller 23. This can be desirable, for example, when a particular group of pads, such as VH1 and VL1 of Figure 2 , may have similar performance and reliability requirements.In one embodiment, the pad controller 23 is configured to use metal or poly fuses to control the operation of an ESD tolerant switch. The switch can be configured to bypass the operation of particular building blocks in the pad circuit 22. 
In an alternate embodiment, the pad controller 23 can include a multitude of fuse-controlled filaments that can be independently biased to configure each pad circuit 22 per combinations of building block types, such as the building block types which will be described later with reference to Figures 5A-5C .Although Figures 4A and 4B were described in the context of Type A and Type B building blocks, additional building block types can be used. For example, a Type C building block can have a holding voltage VH_Cand a trigger voltage VT_Cthat are different from the holding voltages and the trigger voltages, respectively, of the first and second types of building blocks. The pad circuit 22 can combineinumber of Type A building blocks,jnumber of Type B building blocks, andknumber of Type C building blocks such that the pad circuit 22 has a trigger voltage VTRIGGERroughly equal to abouti*VT_A+j*VT_B+k*VT_C, and a holding voltage VHOLDINGroughly equal to abouti*VH_A+j*VH_B+k*VH_C. The inclusion of additional building block types can increase the multitude of configurations of the cascade at the expense of an increase in design complexity. Furthermore, the number of building blocks in the cascade can also be increased to provide additional configurations, provided that each building block remains properly biased at the increased trigger and holding voltages. For example, in an electrically isolated clamp embodiment in which a deep n-well layer provides electrical isolation between building blocks, the number of building blocks can be limited by the voltage level provided to the deep n-well to maintain electrical isolation.Figures 5A-5C illustrate the circuits of a family of building block types, one or more of which can be employed as a building block type in the pad circuits of Figures 4A and 4B .Figure 5A is a circuit diagram illustrating a pad circuit building block (for example, the Type A building block described above in connection with Figures 4A and 4B ) in accordance with one embodiment. The Type A building block 91 includes a resistor 101 and a NPN bipolar transistor 100 having an emitter, a base, and a collector. The resistor 101 includes a first end electrically connected to the base of the transistor 100, and a second end electrically connected to the emitter of the transistor 100. The resistor 101 can have, for example, a resistance between about 5 Ω and about 55 Ω. The collector of the transistor 100 can be electrically connected to another building block or to a pad 42. The emitter of the transistor 100 can be electrically connected to another building block or to a node 82.Figure 5B is a circuit diagram illustrating a pad circuit building block (for example, the Type B building block described above in connection with Figures 4A and 4B ) in accordance with another embodiment. The Type B building block 92 includes a PNP bipolar transistor 102, an NPN bipolar transistor 103, a first resistor 104 and a second resistor 105. The PNP transistor 102 and the NPN transistor 103 each include an emitter, a base, and a collector. The first resistor 104 includes a first end electrically connected to the emitter of the PNP transistor 102, and a second end electrically connected to the base of the PNP transistor 102 and to the collector of the NPN transistor 103. The first resistor 104 can have, for example, a resistance between about 5 Ω and about 35 Ω. 
The second resistor 105 includes a first end electrically connected to the collector of the PNP transistor 102 and to the base of the NPN transistor 103, and a second end electrically connected to the emitter of the NPN transistor 103. The second resistor 105 can have, for example, a resistance between about 50 Ω and about 250 Ω. The emitter of the PNP transistor 102 can be electrically connected to another building block or to a pad 42. The emitter of the NPN transistor 103 can be connected to another building block or to a node 82.As skilled artisans will appreciate, the PNP transistor 102 and NPN transistor 103 are configured to be in feedback. At a certain level of the collector current of the PNP transistor 102, the feedback between the PNP transistor 102 and the NPN transistor 103 can be regenerative and can cause the Type B building block 92 to enter a low-impedance state.Figure 5C is a circuit diagram illustrating a pad circuit building block (for example, the Type C building block described above in connection with Figures 4A-4B ) in accordance with yet another embodiment. The Type C building block 93 includes a resistor 107 and a PNP bipolar transistor 106 having an emitter, a base, and a collector. A first end of the resistor 107 is electrically connected to the emitter of the transistor 106, and a second end is electrically connected to the base of the transistor 106. The resistor 107 can have, for example, a resistance between about 11 Ω and about 85 Ω. The emitter of the transistor 106 can be electrically connected to another building block or to a pad 42. The collector of the transistor 106 can be connected to another building block or to a node 82.With reference to Figures 5A-5C , the trigger and holding voltages of the Type A, Type B, and Type C building blocks can be selected so as to aid in configuring the pad circuit 22 to have a trigger voltage VTRIGGERand a holding voltage VHOLDINGdesired for a particular electronic system or application. For example, the trigger voltage of the Type A building block VT_Aand the trigger voltage of the Type B building block VT_Bcan be based on the collector-emitter breakdown voltage of the NPN transistor 100 and the NPN transistor 103, respectively. Additionally, the positive feedback between the NPN transistor 103 and the PNP transistor 102 in Type B Building block 92 can make the holding voltage VH_Bof the Type B building block 92 less than the holding voltage VH_Aof the Type A building block 91. Furthermore, the Type C building block can have a holding voltage VH_Cgreater than either the holding voltage VH_Aor VH_B, and can have a trigger voltage VT_Cbased on the collector-emitter breakdown voltage of the PNP transistor 106.In one embodiment, the Type A building block 91 and the Type B building block 92 are configured to have about the same trigger voltage, VT_A= VT_B= VT. Additionally, the positive feedback between the NPN transistor 103 and the PNP transistor 102 is employed to selectively decrease the holding voltage VH_Bof the Type B building block 92 relative to the holding voltage VH_Aof the Type A building block. Thus, in some embodiments,inumber of Type A building blocks andjnumber of Type B building blocks can be combined in a cascade configuration to produce a pad circuit 22 having a trigger voltage VTRIGGERroughly equal to about (i+j)*VT, and a holding voltage VHOLDINGroughly equal to abouti*VH_A+j*VH_B, where VH_Bis selected to be less than VH_A. 
This permits configurations having the same number of building blocks in the cascade to have about the same trigger voltage VTRIGGER. Additionally, the type of building blocks in the cascade can be selected to achieve the desired holding voltage VHOLDINGof the pad circuit 22.Skilled artisans will appreciate that the desired trigger voltage and holding voltage of each building block type can be achieved by proper selection of a variety of parameters, including, for example, the geometries of the transistors, the common-emitter gain or "β" of the transistors, and by selecting the resistance of the resistors.Bipolar Transistor Structures For Pad CircuitsFigures 6A-6C illustrate cross sections of various transistor structures. As will be described below, Figures 6B and 6C illustrate cross sections of transistor structures according to embodiments of the invention. These transistors can be used in pad circuit building blocks, even in processes lacking dedicated bipolar transistor masks.Figure 6A illustrates a cross section of a conventional NMOS transistor having a lightly doped drain (LDD) structure. The LDD NMOS transistor 120 is formed on a substrate 121 and includes an n+ drain region 122, an n+ source region 123, a gate 125, gate oxide 127, a lightly doped (n-) drain extension region 128, a lightly doped source extension region 129, and sidewall spacers 130.The n+ drain region 122 can be more heavily doped than the n- drain extension region 128. The difference in doping can reduce the electric fields near the drain region, thereby improving the speed and reliability of the transistor 120 while lowering gate-drain capacitance and minimizing the injection of hot electrons into the gate 125. Likewise, the n+ source region 123 can be more heavily doped than the n- source extension region 129 and provide similar improvements to the transistor 120.In a conventional LDD process, the gate electrode 125 is used as a mask for n-LDD implantation used to form the drain and source extension regions 128, 129. Thereafter, sidewall spacers 130 can be provided and employed as a mask for n+ implantation used to form the drain region 122 and the source region 123.Figure 6B illustrates a cross section of a parasitic NPN bipolar transistor in accordance with one embodiment. The illustrated parasitic NPN bipolar transistor 140 includes an emitter 141, a base 142 formed of a p-well, a collector 143, a plate 145, an oxide layer 147, an isolation layer 151, and sidewall spacers 150. The emitter 141, the collector 143, the plate 145, and the oxide layer 147 have structures similar to those of the drain region 122, the source region 123, the gate 125, and the oxide layer 127, respectively, of the conventional NMOS transistor 120 of Figure 6A . In contrast to the LDD NMOS transistor 120 shown in Figure 6A , the illustrated bipolar transistor 140 does not have structures similar to those of the source and drain extension regions of the NMOS transistor 120.Removal of the source and drain extension regions can result in transistor conduction being dominated by a bipolar component, rather than by a FET component. In particular, when a voltage is applied to the plate 145, the inversion layer may not extend from the emitter 141 to the collector 143, and thus the FET component of the current can be weak. 
Thus, during an overvoltage condition, the parasitic NPN bipolar transistor 140 can serve as the primary conduction path, and the parasitic NPN bipolar transistor 140 can function similarly to a traditional bipolar transistor. The resulting structure can have lower leakage than a conventional NMOS structure and withstand relatively large voltages without breakdown. Further, the resulting structure can be sized so as to employ the parasitic bipolar structure for transient signal protection without drawbacks, such as reduced reliability, typically encountered in high performance analog applications when degrading the standard MOS device characteristics. Since the parasitic NPN bipolar transistor 140 can be formed using a process used to create a conventional LDD MOS transistor, such as the NMOS transistor 120 of Figure 6A, both the parasitic NPN bipolar transistor 140 and the LDD NMOS transistor 120 can be fabricated simultaneously on a common substrate. The parasitic bipolar transistor 140 can have desirable properties for ESD protection and can be used in the building blocks described above in connection with Figures 5A-5B. The use of the parasitic NPN bipolar transistor 140 can be desirable, for example, in a process which includes conventional LDD MOS transistors, but which lacks a dedicated bipolar process. In one embodiment, a single additional mask can be added during fabrication of transistors to determine which transistor structures receive the LDD implant and which do not. The sidewall spacers 150 can be formed using, for example, an oxide, such as SiO2, or a nitride. However, other sidewall spacer materials can be utilized in certain manufacturing processes. A distance x1 between the emitter 141 and the plate 145 can be selected to be, for example, in a range of about 0.1 µm to 2.0 µm. A distance x2 between the collector 143 and the plate 145 can be selected to be, for example, in a range of about 0.1 µm to 2.0 µm. The plate 145 can be formed from a variety of materials, including, for example, doped or undoped polysilicon. Although the plate 145 is illustrated as a single layer, the plate 145 can include multiple layers, such as, for example, layers of polysilicon and silicide. In one embodiment, the plate 145 can have a plate length x3 selected to be in a range of about 0.25 µm to about 0.6 µm, for example, about 0.5 µm. However, skilled artisans will appreciate that the length of the plate 145 can vary depending on the particular process and application. The plate 145 can be formed over the oxide layer 147, which can correspond to, for example, any oxide layer dielectric known in the art or any oxide layer dielectric later discovered, including high-k oxide layers. The emitter 141 and the collector 143 of the bipolar transistor 140 can be formed using a variety of materials, including, for example, any n-type doping material. The spacing between the emitter 141 and the collector 143 can correspond to the sum of the distance x1, the distance x2, and the plate length x3. In one embodiment, the spacing between the emitter 141 and the collector 143 is selected to be in the range of about 0.45 µm to about 4.6 µm. The doping between the emitter and the collector, both beneath the sidewall spacers 150 and beneath the plate, can consist essentially of p-type material, which can result in transistor conduction being dominated by a bipolar component, rather than by a FET component.
Thus, when a voltage is applied to the plate 145, the inversion layer may not extend from the emitter 141 to the collector 143, and thus the FET component of the current can be weak. Accordingly, during an overvoltage condition, the parasitic NPN bipolar transistor 140 can serve as the primary conduction path, and the parasitic NPN bipolar transistor 140 can function similarly to a traditional bipolar transistor.The base 142 can be electrically isolated from the substrate 144 using a wide variety of techniques. In the illustrated embodiment, the isolation layer 151 is a deep n-well layer provided to electrically isolate the base 142 from the substrate 144. Persons of ordinary skill in the art will appreciate that a variety of techniques to provide electrical isolation are well known in the art and can be used in accordance with the teachings herein. For example, the isolation layer 151 can be an n-type buried layer or an isolation layer of a silicon-on-insulator (SOI) technology. The parasitic bipolar transistor 140 can undergo back end processing to form, for example, contacts and metallization. Skilled artisans will appreciate that various processes can be used for such back end processing.Figure 6C is a cross section of a PNP bipolar transistor 160 in accordance with one embodiment. The illustrated PNP bipolar transistor 160 includes an emitter 161, a base 162 formed of an n-well, a collector 163, a plate 165, an oxide layer 167, and sidewall spacers 170. The PNP bipolar transistor 160 can be formed in a manner similar to that of the NPN bipolar transistor 140 by selecting impurities with opposite polarity to that described above.The parasitic NPN bipolar transistor 140 and the parasitic PNP bipolar transistor 160 can be formed by omitting the implantation of the LDD layer in a conventional MOS process. As will be described in detail below, the NPN bipolar transistor 140 and the PNP bipolar transistor 160 can be used in the building blocks of Figures 5A-5C , thereby permitting the fabrication of a family of pad circuit building blocks even with a process lacking dedicated bipolar masks. The building blocks can be cascaded to achieve the desired holding and trigger voltages for a pad circuit, such as the pad circuit 22 of Figures 4A and 4B .Alternative Embodiments of IC Pad CircuitsFigures 7A-8B represent building block types, one or more of which can be employed as a building block type in the pad circuits of Figures 4A and 4B .Figure 7A is a circuit diagram illustrating a pad circuit building block in accordance with yet another embodiment. The illustrated Type A' building block 201 can be connected in a cascade between a pad 42 and a node 82, and includes a first resistor 203, a second resistor 205, a diode 204, and a NPN bipolar transistor 202 having an emitter, a base, a collector, and a plate. The NPN bipolar transistor 202 can have the structure of the NPN bipolar transistor 140 of Figure 6B .The diode 204 includes an anode electrically connected to the node 82, and a cathode electrically connected to the collector of the NPN bipolar transistor 202 at a node N1. The node N1can be electrically connected to another building block in a cascade, such as the cascade of Figure 4A , or to the pad 42. The first resistor 203 includes a first end electrically connected to the base of the NPN bipolar transistor 202, and a second end electrically connected to the emitter of the NPN bipolar transistor 202 and to a first end of the second resistor 205 at a node N2. 
The first resistor 203 can have, for example, a resistance between about 5 Ω and about 55 Ω. In one embodiment, described below with reference to Figure 7B, the first resistor 203 is implemented using a multi-finger array to achieve the target resistance, such as an array of six fingers each having a resistance selected from the range of about 30 Ω to about 320 Ω. The node N2 can be electrically connected to another building block in a cascade or to the node 82. The second resistor 205 includes a second end electrically connected to the plate of the NPN bipolar transistor 202. The second resistor 205 can have, for example, a resistance between about 50 Ω and about 50 kΩ. As was described before with reference to Figures 4A and 4B, the pad circuit 22 can be employed as, for example, any of the pad circuits 22a-22p shown in Figure 2, and the pad 42 can be any of the pads 42a-42p, including, for example, low-impedance output pads, high-impedance input pads, and low-impedance power pads. The node 82 can be, for example, a low impedance node or pad of the power management IC 20 configured to handle a relatively large shunted current. A transient signal event can be received at the pad 42. If the transient signal event has a voltage which is negative with respect to the node 82, the diode 204 can provide current which can aid in protecting the power management IC 20. If the transient signal event has a voltage that is positive with respect to the node 82, the NPN bipolar transistor 202 can aid in providing transient signal protection. The trigger voltage of the Type A' building block VT_A' can be based on the collector-emitter breakdown voltage of the NPN bipolar transistor 202. Additionally, the plate and the collector of the NPN bipolar transistor 202 can function to form a capacitor, which can enhance the response of the NPN bipolar transistor 202 to a transient signal event having a positive voltage by injecting a displacement current, as will be described below. If the transient signal event received on the pad 42 causes the node N1 to have a rate of change dVN1/dt and the capacitance between the plate and the collector of the NPN bipolar transistor 202 has a value of C202, a displacement current can be injected by the capacitor equal to about C202*dVN1/dt. A portion of this current can be injected in the base of the NPN bipolar transistor 202, which can increase the speed at which the Type A' building block 201 provides transient signal protection. As described above, a transient signal event can be associated with fast rise and fall times (for example, from about 0.1 ns to about 1.0 ms) relative to the range of normal signal operating conditions. Thus, the NPN bipolar transistor 202 can be configured to have a trigger voltage which decreases in response to rates of voltage change associated with the very high frequency conditions of a transient signal event. During normal operation, the absence of the lightly doped drain (LDD) can make the leakage of the NPN bipolar transistor 202 relatively low, even over a relatively wide range of temperatures, for example, between about -40 °C and about 140 °C. Figure 7B illustrates an annotated cross section of one implementation of the pad circuit building block of Figure 7A. The illustrated Type A' building block 201 includes a substrate 221, emitters 211a-211f, base 212, collectors 213a-213e, plates 215a-215j, base contacts 217a, 217b, n-wells 218a, 218b, deep n-well 219, and substrate contacts 220a, 220b.
The cross section has been annotated to illustrate examples of circuit devices formed, such as parasitic NPN bipolar transistors 202a-202j, resistors 203a, 203b, and diodes 204a, 204b. The diagram is also annotated to show the second resistor 205, which can be formed using, for example, n-diffusion or poly (not shown in this Figure). The Type A' building block 201 can undergo back end processing to form contacts and metallization. These details have been omitted from Figure 7B for clarity.The diodes 204a, 204b can be formed from the substrate 221 and n-wells 218a, 218b. For example, the diode 204a has an anode formed from the substrate 221 and a cathode formed from the n-well 218a. Similarly, the diode 204b has an anode formed from the substrate 221 and a cathode formed from the n-well 218b.The NPN bipolar transistors 202a-202j can be formed from emitters 211a-211f, collectors 213a-213e, plates 215a-215j, and base 212. For example, the NPN bipolar transistor 202a can be formed from the emitter 211a, the plate 215a, the collector 213a, and the base 212. The NPN bipolar transistors 202b-202j can be formed in a similar manner from emitters 211b-211f, collectors 213a-213e, plates 215b-215j, and base 212. Additional details of the NPN bipolar transistors 202a-202j can be as described above with reference to Figure 6B .The base 212 can be electrically isolated from the substrate 221 using n-wells 218a, 218b and deep n-well 219. The n-wells 218a, 218b and deep n-well 219 can also provide electrically isolation of the building block from other building blocks. The n-well contacts 222a, 222b can form a guard ring around the Type A' building block 201. The n-well contacts 222a, 222b can be contacted to a metal layer above by using multiple rows of contacts, thereby permitting the guard ring to be connected to the collectors 213a-213e through metal. The guard ring can eliminate the formation of unintended parasitic paths between the pad circuit and surrounding semiconductor components when integrated on-chip. Additionally, the substrate contacts 220a, 220b can form a substrate ring which can aid in protecting the Type A' building block 201 from latch-up.The resistors 203a, 203b can be formed from the resistance between the bases of NPN bipolar transistors 202a-202j and the base contacts 217a, 217b. The resistance along the paths between the bases of the NPN bipolar transistors 202a-202j and the base contacts 217a, 217b can be modeled by the resistors 203a, 203b.Persons of ordinary skill in the art will appreciate that the cross-section shown in Figure 7B can result in the formation of the circuit shown in Figure 7A . For example, each of the emitters of the NPN bipolar transistors 202a-202j can be electrically connected together to form a common emitter. Likewise, each of the collectors, plates, and bases of the NPN bipolar transistors 202a-202j can be electrically connected together to form a common collector, a common plate, and a common base, respectively. Thus, each of the NPN bipolar transistors 202a-202j can be legs of the NPN bipolar transistor 202. Additionally, the diodes 204a, 204b can be represented by the diode 204, and the resistors 203a, 203b can be represented by the first resistor 203. The second resistor 205 can be formed using, for example, n-diffusion or poly (not shown in this Figure). Thus, Figure 7B illustrates a cross section of an implementation of the pad circuit building block of Figure 7A . 
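To attach rough numbers to the displacement-current mechanism described above for Figure 7A, the sketch below evaluates C202*dVN1/dt for a transient-like edge and for a normal-operation edge. The capacitance value and the edge rates are assumed for illustration only.

# Displacement current injected by the plate-to-collector capacitance,
# i ~ C202 * dVN1/dt. The capacitance and edge rates below are assumptions.

def displacement_current(c_plate_collector, delta_v, edge_time):
    """Approximate injected current for a voltage step delta_v over edge_time."""
    return c_plate_collector * (delta_v / edge_time)

C202 = 50e-15  # assumed 50 fF plate-to-collector capacitance

# ESD-like edge: tens of volts in about a nanosecond.
i_transient = displacement_current(C202, delta_v=30.0, edge_time=1e-9)

# Normal operation: a few volts over a microsecond-scale edge.
i_normal = displacement_current(C202, delta_v=3.0, edge_time=1e-6)

print(f"transient edge: ~{i_transient * 1e3:.1f} mA injected toward the base")
print(f"normal edge:    ~{i_normal * 1e9:.0f} nA injected toward the base")

The several orders of magnitude between the two cases suggest why the effective trigger voltage can drop for the fast edges of a transient signal event while normal-frequency operation is left undisturbed.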
Skilled artisans will appreciate that numerous layout implementations of the Type A' building block 201 are possible. As described earlier with reference to Figure 7A, the capacitance between the plate and the collector of the NPN bipolar transistor 202 can result in a current which can be injected in the base of the NPN bipolar transistor 202. This can increase the speed at which the Type A' building block 201 provides transient signal protection. The second resistor 205 can have a resistance selected to provide injection into the base of the NPN bipolar transistor 202 at a frequency associated with a transient signal event. In one embodiment, the second resistor 205 can have a resistance in the range of about 200 Ω to 50 kΩ. Each of the NPN bipolar transistors 202a-202j can be legs of the NPN bipolar transistor 202 as described above. In one embodiment, each of the NPN bipolar transistors has a plate width (for example, the width of the plate 145 in a direction orthogonal to the plate length x3 of Figure 6B) between about 30 µm and 100 µm, so that the total plate width (the sum of the plate widths of all legs) is in the range of about 300 µm to 1,000 µm. In one embodiment, the plate length of each NPN bipolar transistor (for example, x3 in Figure 6B) is selected to be between about 0.25 µm and about 0.6 µm, for example, about 0.5 µm. Although the cross section shown in Figure 7B illustrates the NPN bipolar transistor 202 as having ten legs, skilled artisans will appreciate that more or fewer legs can be selected depending on, for example, the desired dimensions of the pad circuit and the desired total plate width. In one embodiment described with reference to Figures 17A-17H, the number and width of the legs are selected so that the implementation of the Type A' building block 201 can fit under a bonding pad. Figure 8A is a circuit diagram illustrating a pad circuit building block in accordance with yet another embodiment. The illustrated Type B' building block 231 can be connected in a cascade between the pad 42 and the node 82, and includes a PNP transistor 232, an NPN bipolar transistor 233, a first resistor 234, a second resistor 235, a third resistor 236, and a diode 237. The PNP transistor 232 includes an emitter, a base, and a collector. The NPN bipolar transistor 233 includes an emitter, a base, a collector, and a plate, and can have a structure similar to that of the NPN bipolar transistor 140 of Figure 6B. The diode 237 includes an anode electrically connected to the node 82, and a cathode electrically connected to a first end of the first resistor 234 and to the emitter of the PNP transistor 232 at a node N3. The node N3 can be electrically connected to another building block in a cascade, such as the cascade of Figure 4A, or to the pad 42. The first resistor 234 also includes a second end electrically connected to the base of the PNP transistor 232 and to the collector of the NPN bipolar transistor 233. The first resistor 234 can have, for example, a resistance between about 5 Ω and about 35 Ω. In one embodiment, described below with reference to Figure 8B, the first resistor 234 is implemented using a multi-finger array to achieve the target resistance, such as an array of two fingers each having a resistance selected from the range of about 10 Ω to about 70 Ω.
The second resistor 235 includes a first end electrically connected to the collector of the PNP transistor 232 and to the base of the NPN bipolar transistor 233, and a second end electrically connected to the emitter of the NPN bipolar transistor 233 and to a first end of the third resistor 236 at a node N4. The second resistor 235 can have, for example, a resistance between about 50 Ω and about 250 Ω. In one embodiment, described below with reference to Figure 8B, the second resistor 235 is implemented using a multi-finger array to achieve the target resistance, such as an array of two fingers each having a resistance selected from the range of about 100 Ω to about 500 Ω. The node N4 can be electrically connected to another building block in a cascade or to the node 82. The third resistor 236 includes a second end electrically connected to the plate of the NPN bipolar transistor 233. The third resistor 236 can have, for example, a resistance between about 200 Ω and about 50 kΩ. As was described before with reference to Figures 4A and 4B, the pad circuit 22 can be, for example, any of the pad circuits 22a-22p shown in Figure 2, and the pad 42 can be any of the pads 42a-42p. The node 82 can be, for example, a low impedance node or pad of the power management IC 20 configured to handle a relatively large shunted current. A transient signal event can be received at the pad 42. If the transient signal event has a voltage that is negative with respect to the node 82, the diode 237 can provide current which can aid in protecting the power management IC 20. If the transient signal event has a voltage which is positive with respect to the node 82, the PNP transistor 232 and the NPN bipolar transistor 233 can aid in providing transient signal protection. The trigger voltage of the Type B' building block VT_B' can be based on the collector-emitter breakdown voltage of the NPN bipolar transistor 233. Additionally, the positive feedback between the NPN bipolar transistor 233 and the PNP transistor 232 can make the holding voltage VH_B' of the Type B' building block 231 less than the holding voltage VH_A' of the Type A' building block 201 of Figure 7A. The plate and the collector of the NPN bipolar transistor 233 can function to form a capacitor which can enhance the performance of the NPN bipolar transistor 233 when a transient signal event having a positive voltage is received, as was described earlier. For example, a portion of the displacement current through this capacitor can be injected into the base of the NPN bipolar transistor 233, which can increase the speed at which the Type B' building block 231 provides transient signal protection. Thus, the NPN bipolar transistor 233 can be configured to have a trigger voltage which is lower at rates of voltage change associated with the very high frequency conditions of a transient signal event. During normal operation, the absence of the lightly doped drain (LDD) can make the leakage of the NPN bipolar transistor 233 low, even at relatively high temperatures. Figure 8B is an annotated cross section of one implementation of the pad circuit building block of Figure 8A. The illustrated Type B' building block 231 includes NPN emitters 241a, 241b, NPN bases 242a, 242b, NPN collector contacts 243a, 243b, plates 245a, 245b, NPN base contacts 247a, 247b, PNP base 258, PNP base contacts 257a, 257b, n-wells 248a, 248b, deep n-well 249, and substrate contacts 250a, 250b.
As illustrated, the NPN collector contacts 243a, 243b are each formed partially in a p-well and partially in an n-well. For example, the NPN collector contact 243a is partially formed in the NPN base 242a, and partially formed in the PNP base 258, and the NPN collector contact 243b is partially formed in the NPN base 242b and partially formed in the PNP base 258. The cross section has been annotated to show certain circuit components formed from the layout, including NPN bipolar transistors 233a, 233b, PNP transistors 232a, 232b, p-well resistors 235a, 235b, n-well resistors 234a, 234b, and diodes 237a, 237b. The diagram is also annotated to show the third resistor 236, which can be formed using, for example, n-diffusion (not shown in this Figure). The Type B' building block 231 can undergo back end processing to form contacts and metallization. These details have been omitted from Figure 8B for clarity.The diodes 237a, 237b can be formed from substrate 251 and n-wells 248a, 248b. For example, the diode 237a has an anode formed from the substrate 251 and a cathode formed from the n-well 248a. The diode 237b has an anode formed from the substrate 251 and a cathode formed from the n-well 248b.The NPN bipolar transistors 233a, 233b can be formed from NPN emitters 241a, 241b, PNP base 258, NPN collector contacts 243a, 243b, plates 245a, 245b, and NPN bases 242a, 242b. For example, the NPN bipolar transistor 233a can be formed from the NPN emitter 241a, the plate 245a, the PNP base 258, the NPN collector contact 243a, and the NPN base 242a. Likewise, the NPN bipolar transistor 233b can be formed from the NPN emitter 241b, the plate 245b, the PNP base 258, the NPN collector contact 243b, and the NPN base 242b. Although the NPN bipolar transistors 233a, 233b are connected to NPN collector contacts 243a, 243b, in the illustrated embodiment, the contacts 243a, 243b are not connected to metal layers, and thus the PNP base 258 can also serve as the collectors for NPN bipolar transistors 233a, 233b. Additional details of the NPN bipolar transistors 233a, 233b can be found above with reference to Figure 6B .The NPN bases 242a, 242b can be electrically isolated using n-wells 248a, 248b, n-well of the PNP base 258, and deep n-well 249. The n-well contacts 252a, 252b can form part of a guard ring around the Type B' building block 231. The substrate contacts 250a, 250b can form a portion of a substrate ring which can aid in protecting the Type B' building block 231 from latch-up.The p-well resistors 235a, 235b can be formed from the resistance between the bases of NPN bipolar transistors 233a, 233b and the base contacts 247a, 247b. Skilled artisans will appreciate that the p-wells of the bases 242a, 242b can have a resistivity along the electrical path between the bases of NPN bipolar transistors 233a, 233b and the base contacts 247a, 247b, which can be modeled by p-well resistors 235a, 235b.The PNP transistors 232a, 232b can be formed from PNP emitters 254a, 254b, PNP base 258, and the NPN bases 242a, 242b. For example, the PNP transistor 232a can have an emitter formed from the PNP emitter 254a, a base formed from the PNP base 258, and a collector formed from the NPN base 242a. Likewise, the PNP transistor 232b can have an emitter formed from the PNP emitter 254b, a base formed from the PNP base 258, and a collector formed from the NPN base 242b.The n-well resistors 234a, 234b can be formed from the resistance between the bases of PNP transistors 232a, 232b and the PNP base contacts 257a, 257b. 
Skilled artisans will appreciate that the n-well of the PNP base 258 can have a resistivity along the electrical path between the bases of PNP transistors 232a, 232b and the PNP base contacts 257a, 257b, which can be modeled by n-well resistors 234a, 234b.

Persons of ordinary skill in the art will appreciate that the cross section shown in Figure 8B can result in the formation of the circuit shown in Figure 8A. For example, each of the NPN bipolar transistors 233a, 233b can be legs of the NPN bipolar transistor 233. Likewise, each of the PNP transistors 232a, 232b can be legs of the PNP transistor 232. Additionally, the diodes 237a, 237b can form the diode 237, the n-well resistors 234a, 234b can form the first resistor 234, and the p-well resistors 235a, 235b can form the second resistor 235. The third resistor 236 can be formed using, for example, n-diffusion or poly (not shown in this Figure). Thus, Figure 8B is a cross section of one implementation of the pad circuit building block of Figure 8A. Skilled artisans will appreciate that numerous variations of the Type B' building block 231 are possible.

As was described above with reference to Figure 8A, when a transient signal is present, the capacitance between the plate and the collector of the NPN bipolar transistor 233 can result in a current being injected into the base of the NPN bipolar transistor 233. This can aid the speed at which the Type B' building block 231 provides transient signal protection. The third resistor 236 can have a resistance selected to provide injection into the base of the NPN bipolar transistor 233 at a frequency associated with a particular transient signal event. In one embodiment, the third resistor 236 has a resistance selected in the range of about 200 Ω to 50 kΩ.

Each of the NPN bipolar transistors 233a, 233b can be legs of the NPN bipolar transistor 233. In one embodiment, each NPN bipolar transistor 233a, 233b has a plate width typically selected between about 30 µm and 50 µm, so that the total plate width of the NPN bipolar transistor 233 is in the range of about 60 µm to 100 µm. Each NPN bipolar transistor 233a, 233b can have a length selected between, for example, about 0.25 µm and 0.6 µm, for example, about 0.5 µm. Although the cross section in Figure 8B shows the NPN bipolar transistor 233 as having two legs, skilled artisans will appreciate that additional or fewer legs can be selected depending on a variety of factors, including the desired pad circuit dimensions and the desired total plate width. In one embodiment, described with reference to Figures 18A-18B, the number and width of the legs are selected so that two instantiations of the Type B' building block 231 can fit under a bonding pad.

The PNP transistors 232a, 232b can be legs of the PNP transistor 232. Although the cross section illustrated in Figure 8B shows the PNP transistor 232 as having two legs, skilled artisans will appreciate that additional or fewer legs can be selected depending on a variety of factors such as the manufacturing process and application.

With reference to Figures 4A, 4B, 7A, and 8A, the trigger voltages VT_A', VT_B' and the holding voltages VH_A', VH_B' of the Type A' and Type B' building blocks can be selected so that the pad circuit 22 has a trigger voltage VTRIGGER and a holding voltage VHOLDING desired for a particular electronic system or application.
For example, i number of Type A' building blocks and j number of Type B' building blocks can be cascaded so that the pad circuit 22 has a trigger voltage VTRIGGER roughly equal to about i*VT_A' + j*VT_B', and a holding voltage VHOLDING roughly equal to about i*VH_A' + j*VH_B'. By selecting the Type and number of building blocks employed, and/or by selecting the value of VH_A', VH_B', VT_A' and VT_B' during design of the building blocks, a scalable family of pad circuits can be created which can be adapted for a multitude of electronic systems and applications. The design cost associated with designing the pad circuits can be reduced as compared to, for example, an approach in which different diode, bipolar, silicon controlled rectifier and MOS devices are employed to achieve the reliability and performance requirements needed for each pad circuit. The desired trigger voltage and holding voltage of each building block type can be achieved by proper selection of a variety of parameters, including, for example, the geometries of the transistors, the common-emitter gain or "β" of the transistors, and the resistance of the resistors.

In one embodiment, the Type A' building block 201 and the Type B' building block 231 are configured to have about the same trigger voltage, VT_A' = VT_B' = VT'. Additionally, the positive feedback between the NPN bipolar transistor 233 and the PNP transistor 232 is employed to selectively decrease the holding voltage VH_B' of the Type B' building block 231 relative to the holding voltage VH_A' of the Type A' building block 201. Thus, i number of Type A' building blocks and j number of Type B' building blocks can be combined in a cascade configuration to produce a pad circuit 22 having a trigger voltage VTRIGGER roughly equal to about (i+j)*VT', and a holding voltage VHOLDING roughly equal to about i*VH_A' + j*VH_B', where VH_B' is selected to be less than VH_A'. This permits configurations having the same number of building blocks in the cascade to have about the same trigger voltage VTRIGGER. Additionally, the type of building blocks in the cascade can be selected to achieve the desired holding voltage VHOLDING of the pad circuit 22.

Figures 9A-14B illustrate various other embodiments in a family of cascaded building blocks using Type A' building block 201 and Type B' building block 231. Although Figures 9A-14B are described in the context of Type A' and Type B' building blocks 201, 231 of Figures 7A and 8A, skilled artisans will appreciate that similar configurations can be created using the Type A and Type B building blocks 91, 92 of Figures 5A and 5B.

As was described earlier with reference to Table 1 and Figures 3A and 3B, there is a need for pad circuits which can be configured to meet the performance and design parameters required for a particular application. For example, various pads of the power management IC 20 can have different reliability and performance parameters, as shown in Table 1. Figures 9A-14B illustrate various cascade configurations of Type A' and Type B' building blocks 201, 231, which can be employed to meet different reliability and performance parameters, as will be described below. In one embodiment, the type and number of building blocks are selected during design for a particular application. In another embodiment, a multitude of building blocks are placed in the vicinity of the pad during front end fabrication, and the desired configuration is selected by changing metal layers and via connections during back end processing.
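Purely as an illustrative sketch of the cascade arithmetic described above — the function name, dictionary, and structure below are my own shorthand for the per-block values VT_A', VH_A', VT_B', VH_B', and VT_C', VH_C', not notation taken from the figures — the following Python snippet sums per-block trigger and holding voltages for a given cascade:

```python
# Illustrative sketch only: first-order cascade arithmetic for the pad
# circuit building blocks, using VTRIGGER ~ sum of per-block trigger
# voltages and VHOLDING ~ sum of per-block holding voltages.
# The numeric values are the example values quoted in the text.

BLOCK_PARAMS = {
    "A'": (9.0, 5.0),    # (VT_A', VH_A') in volts
    "B'": (9.0, 2.5),    # (VT_B', VH_B') in volts
    "C'": (10.0, 10.0),  # (VT_C', VH_C') in volts, per the Figure 16A example
}

def cascade_voltages(blocks):
    """Return (VTRIGGER, VHOLDING) estimates for a cascade of block types."""
    v_trigger = sum(BLOCK_PARAMS[b][0] for b in blocks)
    v_holding = sum(BLOCK_PARAMS[b][1] for b in blocks)
    return v_trigger, v_holding

# Cascades corresponding to the embodiments of Figures 9A-14B and 16A:
print(cascade_voltages(["A'", "A'"]))        # ~ (18.0, 10.0)
print(cascade_voltages(["A'", "B'"]))        # ~ (18.0, 7.5)
print(cascade_voltages(["B'", "B'"]))        # ~ (18.0, 5.0)
print(cascade_voltages(["A'", "A'", "A'"]))  # ~ (27.0, 15.0)
print(cascade_voltages(["A'", "B'", "B'"]))  # ~ (27.0, 10.0)
print(cascade_voltages(["B'", "B'", "B'"]))  # ~ (27.0, 7.5)
print(cascade_voltages(["C'", "B'", "C'"]))  # ~ (29.0, 22.5)
```

These sums are only the "roughly equal to about" first-order estimates given in the text; the actual trigger and holding voltages of a fabricated cascade depend on the transistor geometries, gains, and resistor values discussed above.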
In yet another embodiment, a multitude of building blocks are placed in the vicinity of the bonding pad, and the type and number of the building blocks are selected using the pad controller 23 after fabrication, as was described earlier.

Figure 9A is a schematic block diagram of a pad circuit according to a first embodiment. The illustrated pad circuit 281 includes two Type A' building blocks 201 connected in a cascade between the pad 42 and the node 82. The Type A' building block 201 can be configured to have a trigger voltage VT_A' equal to about the trigger voltage VT_B' of the Type B' building block 231 of Figure 8A. However, the holding voltage VH_A' of the Type A' building block 201 can be configured to be greater than the holding voltage VH_B' of the Type B' building block 231. Thus, the pad circuit 281 can be employed, for example, in an input pad having a moderate operating voltage and requiring a relatively high holding voltage. For example, if VT_A' is equal to about 9 V and VH_A' is equal to about 5 V, the pad circuit 281 can have a trigger voltage of about 18 V and a holding voltage of about 10 V. Thus, the pad circuit 281 can have a holding voltage and trigger voltage appropriate for the pad VH1 in Table 1.

Figure 9B is a circuit diagram of the pad circuit of Figure 9A. The illustrated pad circuit 281 includes two Type A' building blocks connected in a cascade configuration between the pad 42 and the node 82. Each Type A' building block 201 includes a first resistor 203, a second resistor 205, a diode 204, and a NPN bipolar transistor 202 having an emitter, a base, a collector, and a plate. Additional details of the Type A' building block 201 can be as described earlier with reference to Figure 7A.

Figure 10A is a schematic block diagram of a pad circuit according to a second embodiment. The illustrated pad circuit 282 includes a Type A' building block 201 connected in a cascade with a Type B' building block 231 between the pad 42 and the node 82. As described above, the Type A' building block 201 can be configured to have a trigger voltage VT_A' equal to about the trigger voltage VT_B' of the Type B' building block 231. However, the holding voltage VH_A' of the Type A' building block 201 can be configured to be greater than the holding voltage VH_B' of the Type B' building block 231. Thus, the pad circuit 282 can be employed, for example, in an input pad having a relatively moderate operating voltage and requiring a relatively moderate holding voltage. For example, if VT_A' and VT_B' are equal to about 9 V, VH_A' is equal to about 5 V, and VH_B' is equal to about 2.5 V, the pad circuit 282 can have a trigger voltage of about 18 V and a holding voltage of about 7.5 V. Thus, the pad circuit 282 can have a holding voltage and trigger voltage appropriate for the pad VH2 in Table 1.

Figure 10B is a circuit diagram of the pad circuit of Figure 10A. The illustrated pad circuit 282 includes a Type A' building block 201 and a Type B' building block 231 connected in a cascade configuration between the pad 42 and the node 82. The Type A' building block 201 includes a first resistor 203, a second resistor 205, a diode 204, and a NPN bipolar transistor 202 having an emitter, a base, a collector, and a plate. Additional details of the Type A' building block 201 can be as described earlier with reference to Figure 7A. The Type B' building block 231 includes a PNP transistor 232, a NPN bipolar transistor 233, a first resistor 234, a second resistor 235, a third resistor 236, and a diode 237.
The PNP transistor 232 includes an emitter, a base, and a collector, and the NPN bipolar transistor 233 includes an emitter, a base, a collector, and a plate. Additional details of the Type B' building block 231 can be as described earlier with reference to Figure 8A.

Figure 11A is a schematic block diagram of a pad circuit according to a third embodiment. The illustrated pad circuit 283 includes two Type B' building blocks 231 connected in a cascade between the pad 42 and the node 82. As described above, the Type B' building block 231 can be configured to have a trigger voltage VT_B' equal to about the trigger voltage VT_A' of the Type A' building block 201 of Figure 7A. However, the holding voltage VH_B' of the Type B' building block 231 can be configured to be less than the holding voltage VH_A' of the Type A' building block 201. Thus, the pad circuit 283 can be employed, for example, in an input pad having a relatively moderate operating voltage and requiring a relatively low holding voltage. For example, if VT_B' is equal to about 9 V and VH_B' is equal to about 2.5 V, the pad circuit 283 can have a trigger voltage of about 18 V and a holding voltage of about 5 V. Thus, the pad circuit 283 can have a holding voltage and trigger voltage appropriate for the pad VH3 in Table 1.

Figure 11B is a circuit diagram of the pad circuit of Figure 11A. The illustrated pad circuit 283 includes two Type B' building blocks 231 connected in a cascade configuration between the pad 42 and the node 82. Each Type B' building block 231 includes a PNP transistor 232, a NPN bipolar transistor 233, a first resistor 234, a second resistor 235, a third resistor 236, and a diode 237. The PNP transistor 232 includes an emitter, a base, and a collector, and the NPN bipolar transistor 233 includes an emitter, a base, a collector, and a plate. Additional details of the Type B' building block 231 can be as described earlier with reference to Figure 8A.

Figure 12A is a schematic block diagram of a pad circuit according to a fourth embodiment. The illustrated pad circuit 284 includes three Type A' building blocks 201 connected in a cascade between the pad 42 and the node 82. The Type A' building block 201 can be configured to have a trigger voltage VT_A' equal to about the trigger voltage VT_B' of the Type B' building block 231 of Figure 8A. However, the holding voltage VH_A' of the Type A' building block 201 can be configured to be greater than the holding voltage VH_B' of the Type B' building block 231. Thus, the pad circuit 284 can be employed, for example, in an output pad having a relatively high operating voltage and requiring a relatively high holding voltage. For example, if VT_A' is equal to about 9 V and VH_A' is equal to about 5 V, the pad circuit 284 can have a trigger voltage of about 27 V and a holding voltage of about 15 V. Thus, the pad circuit 284 can have a holding voltage and trigger voltage appropriate for the pad OVERVOLTAGE in Table 1.

Figure 12B is a circuit diagram of the pad circuit of Figure 12A. The illustrated pad circuit 284 includes three Type A' building blocks connected in a cascade configuration between the pad 42 and the node 82. Each Type A' building block 201 includes a first resistor 203, a second resistor 205, a diode 204, and a NPN bipolar transistor 202 having an emitter, a base, a collector, and a plate.
Additional details of the Type A' building block 201 can be as described earlier with reference to Figure 7A.

Figure 13A is a schematic block diagram of a pad circuit according to a fifth embodiment. The illustrated pad circuit 285 includes two Type B' building blocks 231 connected in a cascade with a Type A' building block 201 between the pad 42 and the node 82. As described above, the Type A' building block 201 can be configured to have a trigger voltage VT_A' equal to about the trigger voltage VT_B' of the Type B' building block 231. However, the holding voltage VH_A' of the Type A' building block 201 can be configured to be greater than the holding voltage VH_B' of the Type B' building block 231. Thus, the pad circuit 285 can be employed, for example, in an output pad having a relatively high operating voltage and requiring a relatively moderate holding voltage. For example, if VT_A' and VT_B' are equal to about 9 V, VH_A' is equal to about 5 V, and VH_B' is equal to about 2.5 V, the pad circuit 285 can have a trigger voltage of about 27 V and a holding voltage of about 10 V. Thus, the pad circuit 285 can have a holding voltage and trigger voltage appropriate for the pad UNDERVOLTAGE in Table 1.

Figure 13B is a circuit diagram of the pad circuit of Figure 13A. The illustrated pad circuit 285 includes two Type B' building blocks 231 connected in a cascade with a Type A' building block 201 between the pad 42 and the node 82. The Type A' building block 201 includes a first resistor 203, a second resistor 205, a diode 204, and a NPN bipolar transistor 202 having an emitter, a base, a collector, and a plate. Additional details of the Type A' building block 201 can be as described earlier with reference to Figure 7A. Each Type B' building block 231 includes a PNP transistor 232, a NPN bipolar transistor 233, a first resistor 234, a second resistor 235, a third resistor 236, and a diode 237. The PNP transistor 232 includes an emitter, a base, and a collector, and the NPN bipolar transistor 233 includes an emitter, a base, a collector, and a plate. Additional details of the Type B' building block 231 can be as described earlier with reference to Figure 8A.

Figure 14A is a schematic block diagram of a pad circuit according to a sixth embodiment. The illustrated pad circuit 286 includes three Type B' building blocks 231 connected in a cascade between the pad 42 and the node 82. As described above, the Type B' building block 231 can be configured to have a trigger voltage VT_B' equal to about the trigger voltage VT_A' of the Type A' building block 201 of Figure 7A. However, the holding voltage VH_B' of the Type B' building block 231 can be configured to be less than the holding voltage VH_A' of the Type A' building block 201. Thus, the pad circuit 286 can be employed, for example, in an input pad having a relatively high operating voltage and requiring a relatively low holding voltage. For example, if VT_B' is equal to about 9 V and VH_B' is equal to about 2.5 V, the pad circuit 286 can have a trigger voltage of about 27 V and a holding voltage of about 7.5 V. Thus, the pad circuit 286 can have a holding voltage and trigger voltage appropriate for the pad VH4 in Table 1.

Figure 14B is a circuit diagram of the pad circuit of Figure 14A. The illustrated pad circuit 286 includes three Type B' building blocks 231 connected in a cascade between the pad 42 and the node 82.
Each Type B' building block 231 includes a PNP transistor 232, a NPN bipolar transistor 233, a first resistor 234, a second resistor 235, a third resistor 236, and a diode 237. The PNP transistor 232 includes an emitter, a base, and a collector, and the NPN bipolar transistor 233 includes an emitter, a base, a collector, and a plate. Additional details of the Type B' building block 231 can be as described earlier with reference to Figure 8A.

In the embodiments shown in Figures 9A-14B, the cascaded building block configurations employ Type A' and Type B' building blocks 201, 231. However, one or more additional building block types can be included. For example, a Type C' building block having a holding voltage VH_C' and a trigger voltage VT_C' can be utilized. The pad circuit 22 can combine i number of Type A' building blocks, j number of Type B' building blocks, and k number of Type C' building blocks such that the pad circuit 22 has a trigger voltage VTRIGGER roughly equal to about i*VT_A' + j*VT_B' + k*VT_C', and a holding voltage VHOLDING roughly equal to about i*VH_A' + j*VH_B' + k*VH_C'. Providing additional types of building blocks can increase the multitude of configurations of the cascade at the expense of an increase in design complexity.

Figure 15 is a circuit diagram illustrating a pad circuit building block in accordance with yet another embodiment. The Type C' building block 291 can be connected in a cascade with other building blocks between the pad 42 and the node 82. The illustrated Type C' building block 291 includes a first resistor 293, a second resistor 295, a diode 294, and a PNP bipolar transistor 292 having an emitter, a base, a collector, and a plate. The PNP bipolar transistor 292 can have a structure similar to that of the PNP bipolar transistor 160 of Figure 6C.

The diode 294 includes an anode electrically connected to the node 82, and a cathode electrically connected to the emitter of the PNP bipolar transistor 292 and to a first end of the first resistor 293 at a node N5. The node N5 can be electrically connected to another building block in a cascade, such as the cascaded building blocks of Figures 4A and 4B, or to the pad 42. The first resistor 293 includes a second end electrically connected to the base of the PNP bipolar transistor 292. The first resistor 293 can have, for example, a resistance between about 11 Ω and about 85 Ω. In one embodiment, the first resistor 293 is implemented using a multi-finger array to achieve the target resistance, such as an array of six fingers each having a resistance selected between about 66 Ω and about 510 Ω. The second resistor 295 includes a first end electrically connected to the plate of the PNP bipolar transistor 292, and a second end electrically connected to the collector of the PNP bipolar transistor 292 at a node N6. The second resistor 295 can have, for example, a resistance between about 200 Ω and about 50 kΩ. The node N6 can be electrically connected to another building block in a cascade or to the node 82.

The pad circuit 22 can be, for example, any of the pad circuits 22a-22p shown in Figure 2, and the pad 42 can be any of the pads 42a-42p, including, for example, low-impedance output pads, high-impedance input pads, and low-impedance power pads. The node 82 can be, for example, a low impedance node or pad of the power management IC 20 configured to handle a relatively large shunted current. A transient signal event can be received at the pad 42.
If the transient signal event has a voltage that is negative with respect to the node 82, the diode 294 can provide a current path which can aid in protecting the power management IC 20.

If the transient signal event has a voltage which is positive with respect to the node 82, the PNP bipolar transistor 292 can aid in providing transient signal protection. The trigger voltage VT_C' of the Type C' building block can be based on the collector-emitter breakdown voltage of the PNP bipolar transistor 292. The Type C' building block can have a holding voltage VH_C' greater than either the holding voltage VH_A' or VH_B'. During normal operation, the absence of the LDD can make the leakage of the PNP bipolar transistor 292 low, even at relatively high temperatures. The PNP bipolar transistor 292 can have a lower leakage current as compared to a similarly sized PMOS transistor.

Figure 16A is a schematic block diagram of a pad circuit according to a seventh embodiment. The illustrated pad circuit 297 includes a Type C' building block 291, a Type B' building block 231, and a Type C' building block 291 connected in a cascade between the pad 42 and the node 82. As described above, the holding voltage VH_C' of the Type C' building block 291 can be configured to be greater than the holding voltage VH_B' of the Type B' building block 231 or the holding voltage VH_A' of the Type A' building block 201. Furthermore, in certain processes, the leakage of the Type C' building block 291 can be less than that of the Type A' and Type B' building blocks 201, 231. Thus, the pad circuit 297 can be used, for example, in a very low leakage power pad having a relatively high operating voltage and requiring a relatively high holding voltage. For example, if VT_A' and VT_B' are equal to about 9 V, VT_C' is equal to about 10 V, VH_B' is equal to about 2.5 V, and VH_C' is equal to about 10 V, the pad circuit 297 can have a trigger voltage of about 29 V and a holding voltage of about 22.5 V. Thus, the pad circuit 297 can have a holding voltage and trigger voltage appropriate for the pad Vcc in Table 1. Additionally, in certain processes, the leakage current of the pad circuit 297 can be less than that of certain pad circuit configurations using only Type A' and Type B' building blocks, and thus pad circuit configurations with Type C' building blocks can be employed for very low leakage pads.

Figure 16B is a circuit diagram of the pad circuit of Figure 16A. The illustrated pad circuit 297 includes a Type C' building block 291, a Type B' building block 231, and a Type C' building block 291 connected in a cascade between the pad 42 and the node 82. Each Type C' building block 291 includes a first resistor 293, a second resistor 295, a diode 294, and a PNP bipolar transistor 292 having an emitter, a base, a collector, and a plate. Additional details of the Type C' building block 291 can be as described earlier with reference to Figure 15. The Type B' building block 231 includes a PNP transistor 232, a NPN bipolar transistor 233, a first resistor 234, a second resistor 235, a third resistor 236, and a diode 237. The PNP transistor 232 includes an emitter, a base, and a collector, and the NPN bipolar transistor 233 includes an emitter, a base, a collector, and a plate. Additional details of the Type B' building block 231 can be as described earlier with reference to Figure 8A.

Figure 17A is a perspective view of one implementation of the pad circuit of Figure 12B.
The illustrated pad circuit 300 includes a bonding pad 305, a first Type A' building block 301, a second Type A' building block 302, and a third Type A' building block 303 connected in a cascade. The layout of the first Type A' building block 301 is configured such that the first Type A' building block 301 can fit below the bonding pad 305. The second and third Type A' building blocks 302, 303 have layouts extending outside the bonding pad area.

During back-end fabrication (for example, fabrication of metal layers), building blocks can be included in a cascade configuration with the first Type A' building block 301. Thus, for example, the pad circuit 300 can be configured to have the configuration shown in Figure 9B by changing the metal layers. Furthermore, additional building blocks, such as a Type B' building block, can be placed adjacent to the pad 305 and can be included in the cascade by changing metal layers. Thus, an IC using the pad circuit 300, such as the power management IC 20, can be configured for a particular electronic system or application.

As will be described in further detail below with reference to Figures 17B-17I, the pad circuit 300 can advantageously be constructed with three metal layers, thereby permitting fabrication in processes with limited numbers of metal layers. Moreover, the pad circuit 300 can be implemented in a small circuit area, and a large portion of the pad circuit 300 can be positioned directly under the bonding pad 305.

Figure 17B is a cross section of the pad circuit 300 of Figure 17A taken along the line 17B-17B. The first Type A' building block 301 includes a substrate 307, plates 309, a deep n-well 310, n-wells 311, contacts 312, a first metal layer 313, first vias 314, a second metal layer 315, second vias 316, a third metal layer 317, and a passivation layer 318. In contrast to the Type A' building block 201 shown in Figure 7B, the first Type A' building block 301 is illustrated with back end processing. The deep n-well 310 and n-wells 311 can electrically isolate the first Type A' building block 301 from other building blocks, such as the second and third Type A' building blocks 302, 303. Additional details of the base layers of the first Type A' building block can be similar to those described earlier with reference to Figure 7B.

Figure 17C is a cross section of the pad circuit of Figure 17A taken along the line 17C-17C. The second Type A' building block 302 can be formed in the same substrate 307 as the first Type A' building block 301. The second Type A' building block 302 can include plates 309, a deep n-well 310, n-wells 311, contacts 312, a first metal layer 313, first vias 314, a second metal layer 315, second vias 316, and a third metal layer 317. Additional details of the base layers of the second Type A' building block 302 can be similar to those described earlier with reference to Figure 7B. Skilled artisans will appreciate that the geometries of the first Type A' building block 301 and the second Type A' building block 302 can be different. For example, the plates 309 of the first Type A' building block 301 can have different plate widths than the plates 309 of the second Type A' building block 302, as can be seen in Figure 17E.

Figure 17D is a cross section of the pad circuit of Figure 17A taken along the line 17D-17D. The third Type A' building block 303 can be formed in the same substrate 307 as the first and second Type A' building blocks 301, 302.
The third Type A' building block 303 can include plates 309, a deep n-well 310, n-wells 311, contacts 312, a first metal layer 313, first vias 314, a second metal layer 315, second vias 316, and a third metal layer 317. Additional details of the third Type A' building block 303 can be as described earlier in connection with Figure 7B.

Figure 17E is a top plan view of the active and polysilicon layers of the pad circuit of Figure 17A. Figure 17F is a top plan view of the contact and first metal layers of the pad circuit of Figure 17A. As shown in Figure 17E, each of the building blocks 301-303 includes a plurality of rows of emitters 320, 322 and a plurality of rows of collectors 321, when viewed from above. The rows of emitters 320, 322 and collectors 321 extend substantially parallel to one another. As shown in Figure 17F, the emitters 320 on both of the peripheries of the pad circuit 300 can have a single row of contacts, while the emitters 322 not on the peripheries of the pad circuit 300 and the collectors 321 can have a double row of contacts.

The contacts of the emitters 320, collectors 321, and emitters 322 can be spaced so as to permit first and second vias to be stacked, as shown in Figures 17F-17H. The n-diffusion resistors 323 can have a resistance similar to that described above with reference to Figure 7A. Each n-diffusion resistor 323 can have, for example, a width WR of 0.7 µm and a length LR of 9 µm.

As shown in Figures 17E-17F, a guard ring 325 can be connected through two rows of contacts. Additionally, a substrate guard ring 326 can be contacted with a double row of contacts. The plates 327a and plates 327b can each have ten fingers, and each plate can have a plate length of, for example, about 0.5 µm. The plates 327a can have a width of, for example, about 615 µm, and the plates 327b can have a width of, for example, about 300 µm. The contact to diffusion overlap can be, for example, about 2 µm.

Figure 17G is a top plan view of the first metal layer 313 and first via layer 314 of the pad circuit of Figure 17A. Four rows of vias 340 can be provided to contact the collectors of the NPN bipolar transistors. Figure 17H is a top plan view of the first via layer 314, the second metal layer 315, and the second via layer 316 of the pad circuit of Figure 17A. Figure 17I is a top plan view of the third metal layer 317 and the second via layer 316 of the pad circuit of Figure 17A.

Although Figures 17A-17I describe the construction and dimensions of one particular layout for a cascaded pad circuit, skilled artisans will appreciate that this example is for purposes of illustration. Pad circuit building blocks can be formed in a variety of ways and can have different circuit layouts depending on a variety of factors, including, for example, the fabrication process and the application of the pad circuit.

Figure 18A is a perspective view of one implementation of the pad circuit of Figure 11B. The illustrated pad circuit 400 includes a first Type B' building block 401 and a second Type B' building block 402. The layout of the first and second Type B' building blocks 401, 402 is configured such that both Type B' building blocks 401, 402 can fit below a bonding pad, which has been omitted from Figure 18A for clarity. Additional building blocks, such as a Type A' building block, can be placed adjacent to the bonding pad and can be included in the cascade, for example, by changing metal layers.
Thus, an IC using the pad circuit 400, such as the power management IC 20, can be configured for a particular electronic system or application.

Figure 18B is a cross section of the pad circuit of Figure 18A taken along the line 18B-18B. The first Type B' building block 401 includes a substrate 407, plates 409, deep n-wells 410, n-wells 411, contacts 412, a first metal layer 413, first vias 414, a second metal layer 415, second vias 416, a third metal layer 417, and a passivation layer 418. In contrast to the Type B' building block 231 shown in Figure 8B, the Type B' building blocks 401, 402 of Figure 18B are illustrated with back end processing. The deep n-wells 410 and n-wells 411 can provide electrical isolation between building blocks, such as between the first and second Type B' building blocks 401, 402, as well as electrical isolation of each building block from the substrate 407. Additional details of the base layers of the first Type B' building block can be similar to those described earlier in connection with Figure 8B.

The foregoing description and claims may refer to elements or features as being "connected" or "coupled" together. As used herein, unless expressly stated otherwise, "connected" means that one element/feature is directly or indirectly connected to another element/feature, and not necessarily mechanically. Likewise, unless expressly stated otherwise, "coupled" means that one element/feature is directly or indirectly coupled to another element/feature, and not necessarily mechanically. Thus, although the various schematics shown in the Figures depict example arrangements of elements and components, additional intervening elements, devices, features, or components may be present in an actual embodiment (assuming that the functionality of the depicted circuits is not adversely affected).

Applications

Devices employing the above-described schemes can be implemented into various electronic devices. Examples of the electronic devices can include, but are not limited to, consumer electronic products, parts of the consumer electronic products, electronic test equipment, etc. Examples of the electronic devices can also include memory chips, memory modules, circuits of optical networks or other communication networks, and disk driver circuits. The consumer electronic products can include, but are not limited to, a mobile phone, a telephone, a television, a computer monitor, a computer, a hand-held computer, a personal digital assistant (PDA), a microwave, a refrigerator, an automobile, a stereo system, a cassette recorder or player, a DVD player, a CD player, a VCR, an MP3 player, a radio, a camcorder, a camera, a digital camera, a portable memory chip, a washer, a dryer, a washer/dryer, a copier, a facsimile machine, a scanner, a multi-functional peripheral device, a wrist watch, a clock, etc. Further, the electronic device can include unfinished products.

Although this invention has been described in terms of certain embodiments, other embodiments that are apparent to those of ordinary skill in the art, including embodiments that do not provide all of the features and advantages set forth herein, are also within the scope of this invention. Moreover, the various embodiments described above can be combined to provide further embodiments. In addition, certain features shown in the context of one embodiment can be incorporated into other embodiments as well. Accordingly, the scope of the present invention is defined only by reference to the appended claims. |
A voltage-switchable dielectric layer may be employed on a die for electrostatic discharge (ESD) protection. The voltage-switchable dielectric layer functions as a dielectric layer between terminals of the die during normal operation of the die. When ESD events occur at the terminals of the die, a high voltage between the terminals switches the voltage-switchable dielectric layer into a conducting layer to allow current to discharge to a ground terminal of the die without the current passing through circuitry of the die. Thus, damage to the circuitry of the die is reduced or prevented during ESD events on dies with the voltage-switchable dielectric layer. The voltage-switchable dielectric layer may be deposited on the back side of a die for protection during stacking with a second die to form a stacked IC. |
1. An apparatus comprising: a first die having a first terminal and a second terminal; and a voltage switchable dielectric layer on the first die coupled to the first terminal and the second terminal.
2. The apparatus of claim 1, wherein the voltage switchable dielectric layer does not conduct current between the first terminal and the second terminal in a first voltage range, and the voltage switchable dielectric layer conducts current between the first terminal and the second terminal in a second, higher voltage range.
3. The apparatus of claim 1, wherein the voltage switchable dielectric layer is on a back side of the first die, the back side being opposite a front side of the first die having an active circuit.
4. The apparatus of claim 1, wherein a top surface of the voltage switchable dielectric layer is substantially flush with a top surface of the first terminal.
5. The apparatus of claim 1, wherein a distance between the first terminal and the second terminal is selected to control an electrostatic discharge protection voltage of the voltage switchable dielectric layer.
6. The apparatus of claim 1, wherein a thickness of the voltage switchable dielectric layer is selected to control an electrostatic discharge protection voltage of the voltage switchable dielectric layer.
7. The apparatus of claim 1, wherein the first terminal is a power supply terminal and the second terminal is a ground terminal.
8. The apparatus of claim 1, wherein the first die further comprises a pad, and the voltage switchable dielectric layer is coupled to the pad.
9. The apparatus of claim 1, further comprising a second die coupled to the first die, wherein the voltage switchable dielectric layer provides high voltage electrostatic discharge protection of the first die from the second die.
10. The apparatus of claim 1, integrated into at least one of: a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a handheld personal communication system (PCS) unit, a portable data unit, and a fixed location data unit.
11. A method comprising: depositing a voltage switchable dielectric layer between a first terminal and a second terminal on a first die.
12. The method of claim 11, wherein the voltage switchable dielectric layer does not conduct current between the first terminal and the second terminal in a first voltage range, and wherein the voltage switchable dielectric layer conducts current between the first terminal and the second terminal in a second, higher voltage range.
13. The method of claim 11, wherein the depositing comprises depositing the voltage switchable dielectric layer on a back side of the first die, the back side being opposite a front side of the first die having an active circuit, the method further comprising coupling a second die to the first die.
14. The method of claim 11, further comprising integrating the first die into at least one of: a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a handheld personal communication system (PCS) unit, a portable data unit, and a fixed location data unit.
15. A method comprising the step of depositing a voltage switchable dielectric layer between a first terminal and a second terminal on a first die.
16. The method of claim 15, wherein the voltage switchable dielectric layer does not conduct current between the first terminal and the second terminal in a first voltage range, and wherein the voltage switchable dielectric layer conducts current between the first terminal and the second terminal in a second, higher voltage range.
17. The method of claim 15, further comprising the step of integrating the first die into at least one of: a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a handheld personal communication system (PCS) unit, a portable data unit, and a fixed location data unit.
18. An apparatus comprising: a first die having a first terminal and a second terminal; and means for protecting the first die from electrostatic discharge on the first die, the means for protecting being coupled to the first terminal and the second terminal.
19. The apparatus of claim 18, wherein the means for protecting is on a back side of the first die, the back side being opposite a front side of the first die having an active circuit, the apparatus further comprising a second die coupled to the back side of the first die.
20. The apparatus of claim 18, integrated into at least one of: a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a handheld personal communication system (PCS) unit, a portable data unit, and a fixed location data unit. |
Voltage switchable dielectric for die-level electrostatic discharge (ESD) protection

Technical Field
The present invention generally relates to integrated circuits (ICs). More specifically, the present invention relates to electrostatic discharge (ESD) protection of integrated circuits.

Background
Electrostatic discharge (ESD) events are a common part of everyday life, and some large discharges can be detected by human perception. Humans do not notice a small discharge because the ratio of the intensity of the discharge to the surface area over which the discharge occurs is minimal.
Integrated circuits (ICs) have shrunk at a rapid rate over the past few decades. As transistor sizes shrink, the support components surrounding the transistors typically shrink as well. The reduction in IC size reduces the ESD tolerance of the transistors, which in turn increases the sensitivity of the integrated circuit to ESD stress.
An ESD event occurs when an object at a first potential approaches or contacts an object at a second potential. A rapid transfer of charge from the first object to the second object occurs such that the two objects are at approximately equal potentials. When the object with the lower charge is an IC, the discharge seeks the path of least resistance through the IC to ground. This path often flows through the interconnects. Any part of this path that is unable to withstand the energy associated with the discharge suffers damage.
Conventionally, diode-based ESD protection structures have been built into ICs for protection. These structures are complex in order to ensure high voltage protection and fast response times. Due to this complexity, a significant amount of IC area (tens to thousands of square microns per ESD protection structure) is consumed by the ESD protection structures, area which could otherwise be used for active circuits. To meet the increasing demand for smaller IC form factors, the size of the ESD protection circuitry should be reduced.
Therefore, there is a need for ESD protection that consumes less IC area.

Summary of the Invention
According to an aspect of the invention, an apparatus includes a first die having a first terminal and a second terminal. The apparatus also includes a voltage switchable dielectric layer on the first die coupled to the first terminal and the second terminal.
In another aspect, a method includes depositing a voltage switchable dielectric layer between a first terminal and a second terminal on a first die.
In still another aspect, an apparatus includes a first die having a first terminal and a second terminal. The apparatus also has means for protecting the first die from electrostatic discharge on the first die. The electrostatic discharge protection means is coupled to the first terminal and the second terminal.
The features and technical advantages of the present invention have been outlined rather broadly above so that the detailed description that follows may be better understood. Additional features and advantages of the invention are described below. Those skilled in the art will appreciate that the present invention can be readily utilized as a basis for modifying or designing other structures to achieve the same purposes of the present invention. Those skilled in the art should also appreciate that such equivalent constructions do not depart from the teachings of the invention as set forth in the appended claims. The novel features which are believed to be characteristic of the invention will be better understood from the following description when considered in connection with the accompanying figures. It is to be understood, however, that the figures are provided for purposes of illustration and description only and are not intended to limit the claims.

Drawings
For a more complete understanding of the present invention, reference is now made to the accompanying drawings.
FIG. 1 is a cross-sectional view illustrating an exemplary die with electrostatic discharge protection in accordance with a first embodiment.
FIG. 2 is a cross-sectional view illustrating an exemplary die with electrostatic discharge protection in accordance with a second embodiment.
FIG. 3 is a cross-sectional view illustrating an exemplary stacked die with electrostatic discharge protection in accordance with a first embodiment.
FIG. 4 is a cross-sectional view illustrating an exemplary stacked die with electrostatic discharge protection in accordance with a second embodiment.
FIG. 5 is a cross-sectional view illustrating an exemplary stacked die with electrostatic discharge protection in accordance with a third embodiment.
FIG. 6 is a block diagram showing an exemplary wireless communication system in which embodiments of the present invention may be advantageously employed.
FIG. 7 is a block diagram illustrating a design workstation for the circuit, layout, and logic design of a semiconductor component in accordance with one embodiment.

Detailed Description
A voltage switchable dielectric layer can be deposited on a die as an electrostatic discharge (ESD) protection structure. A single dielectric layer reduces the amount of die area occupied by the ESD protection structure and allows the construction of smaller form factor ICs without compromising the IC's ability to withstand ESD events. The voltage switchable dielectric layer is a dielectric layer that acts as an insulator in a first, lower voltage operating range. In a second, higher voltage range, the voltage switchable dielectric layer switches to a conductive layer.
FIG. 1 is a cross-sectional view illustrating an exemplary die with electrostatic discharge protection in accordance with a first embodiment. Die 100 includes a substrate 102 having a transistor gate 104 coupled to interconnect 106. Interconnect 106 is separated from other interconnects by dielectric layer 108. Terminal 110 is coupled to interconnect 106 to provide communication between transistor gate 104 and an external circuit (not shown). In one embodiment, the leftmost terminal 110 is coupled to ground, the rightmost terminal 110 is coupled to the power supply, and the intermediate terminal 110 is coupled to I/O (input/output).
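As a minimal, purely illustrative sketch of the two-state behavior described above — the supply voltage, switching voltage, and all names below are hypothetical values chosen only for illustration and are not taken from this disclosure — the voltage switchable dielectric layer can be modeled as an element that insulates below its switching voltage and conducts above it:

```python
# Minimal sketch, assuming hypothetical values: a voltage switchable
# dielectric layer modeled as insulating below its switching voltage and
# conductive above it. A real layer's switching voltage depends on its
# material properties, its thickness, and the terminal spacing, as
# discussed in the surrounding text.

V_SUPPLY = 3.3    # assumed normal operating voltage between terminals, in volts
V_SWITCH = 30.0   # assumed switching voltage of the layer, in volts

def layer_is_conductive(v_between_terminals: float) -> bool:
    """True when the layer has switched into its conductive state."""
    return abs(v_between_terminals) > V_SWITCH

# Normal operation: the layer behaves as a dielectric between the terminals.
assert not layer_is_conductive(V_SUPPLY)

# ESD event: a large transient voltage switches the layer into a conductive
# state, shunting the discharge between terminals rather than through the
# die circuitry.
assert layer_is_conductive(2000.0)
```

The only point of the sketch is the ordering V_SUPPLY < V_SWITCH << typical ESD voltages; as noted below, the first voltage range in which the layer remains insulating should extend at least up to the supply voltage of the die.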
The novel features which are believed to be characteristic of the invention in the aspects of < It is to be understood, however, that the claims are not intended to be limitedDRAWINGSFor a more complete understanding of the present invention, reference is now made to the accompanying drawings1 is a cross-sectional view illustrating an exemplary die with electrostatic discharge protection in accordance with a first embodiment.2 is a cross-sectional view illustrating an exemplary die with electrostatic discharge protection in accordance with a second embodiment.3 is a cross-sectional view illustrating an exemplary stacked die with electrostatic discharge protection in accordance with a first embodiment.4 is a cross-sectional view illustrating an exemplary stacked die with electrostatic discharge protection in accordance with a second embodiment.FIG. 5 is a cross-sectional view illustrating an exemplary stacked die with electrostatic discharge protection in accordance with a third embodiment.6 is a block diagram showing an exemplary wireless communication system in which embodiments of the present invention may be advantageously employed.Figure 7 is a block diagram illustrating a design workstation for the circuit, layout, and logic design of a semiconductor component in accordance with one embodiment.Detailed waysA voltage switchable dielectric layer can be deposited on the die as an electrostatic discharge (ESD) protection structure. A single dielectric layer reduces the amount of die area occupied by the ESD protection structure and allows for the construction of smaller form factor ICs without compromising the IC's ability to withstand ESD events. The voltage switchable dielectric layer is a dielectric layer that acts as an insulator in the first low voltage operating range. In the second higher voltage range, the voltage switchable dielectric layer is switched to a conductive layer.1 is a cross-sectional view illustrating an exemplary die with electrostatic discharge protection in accordance with a first embodiment. Die 100 includes a substrate 102 having a transistor gate 104 coupled to interconnect 106. Interconnect 106 is separated from other interconnects by dielectric layer 108. Terminal 110 is coupled to interconnect 106 to provide communication between transistor gate 104 and an external circuit (not shown). In one embodiment, the leftmost terminal 110 is coupled to ground, the rightmost terminal 110 is coupled to the power supply, and the intermediate terminal 110 is coupled to I/O (input/output).During fabrication, handling or operation of the terminal 110 die 100, a high voltage can be formed between the terminals 110, resulting in an ESD event. During an ESD event, the discharge current seeks the lowest resistance path to ground that can pass through interconnect 106, transistor gate 104, substrate 102 and to the other of terminals 110. Discharge can result in damage to interconnect 106, transistor gate 104, or substrate 102.The voltage switchable dielectric layer 112 deposited between the terminals 110 provides a low resistance current path during an ESD event. For example, when an ESD event causes a voltage on both of the terminals 110 to exceed a switchable voltage of the voltage switchable dielectric layer 112, the voltage switchable dielectric layer 112 switches to a conductive state, and the current is substantially from the terminal 110 One of the flow-through voltages can switch the dielectric layer 112 to the other of the terminals 110. 
Interconnect 106, transistor gate 104, and substrate 102 experience reduced current flow due to the current transfer in the voltage switchable dielectric layer 112.

After the ESD event has ended and the voltage between the terminals 110 has decreased below the switching voltage of the voltage switchable dielectric layer 112, the voltage switchable dielectric layer 112 returns to an insulating state and does not conduct current between the terminals 110. According to one embodiment, little or no damage to the voltage switchable dielectric layer 112 occurs, such that the voltage switchable dielectric layer 112 can continue to protect the die 100 from ESD events. For example, the voltage switchable dielectric layer 112 can have self-healing properties.

The switching voltage of the voltage switchable dielectric layer 112 between two of the terminals 110 depends in part on the material properties of the voltage switchable dielectric layer 112 and the distance between the two terminals 110 participating in the ESD event. For example, as the distance between the two terminals 110 participating in the ESD event increases, the switching voltage for conduction between the two terminals 110 through the voltage switchable dielectric layer 112 increases. For example, the dielectric breakdown voltage of the voltage switchable dielectric layer can be tens of volts. According to one embodiment, the breakdown voltage of the voltage switchable dielectric layer 112 can also be adjusted by varying portions of the voltage switchable dielectric layer 112 to increase or decrease the breakdown voltage between two of the terminals 110.

The distance d between the terminals 110 can be selected to correspond to the desired switching voltage of the voltage switchable dielectric layer 112. For example, the distance between two of the terminals 110 can be selected such that during normal operation of the die in a first voltage range, the voltage switchable dielectric layer 112 is an insulating layer, and during an ESD event in a second, higher voltage range, the voltage switchable dielectric layer 112 is a conductive layer. The distance may be selected such that the first voltage range in which the voltage switchable dielectric layer 112 is an insulating layer extends from 0 volts to at least the supply voltage of the die 100. The thickness t can be similarly selected.

Voltage switchable dielectric layer 112 may be deposited on dielectric layer 108 after fabrication of the dielectric layer 108 and the interconnect 106. After deposition of the voltage switchable dielectric layer 112, an opening may be formed in the voltage switchable dielectric layer 112 in which the terminal 110 is deposited.

The voltage switchable dielectric layer can also be placed in a configuration on a die with pads. FIG. 2 is a cross-sectional view illustrating an exemplary die with electrostatic discharge protection in accordance with a second embodiment. Die 200 includes a substrate 202 having a transistor gate 204 coupled to interconnect 206. The interconnects 206 are separated by a dielectric layer 208. Voltage switchable dielectric layer 212 is deposited over dielectric layer 208, and terminal 210 is coupled to interconnect 206. In one embodiment, the leftmost terminal 210 is coupled to ground, the rightmost terminal 210 is coupled to the power supply, and the intermediate terminal 210 is coupled to I/O (input/output).

Pad 214 may be coupled to one of the terminals 210 and extend over the voltage switchable dielectric layer 212.
During an ESD event involving the pad 214 and one of the terminals 210, current flows from the pad 214 through the voltage switchable dielectric layer 212 to one of the terminals 210. The switching voltage of the voltage switchable dielectric layer 212 depends in part on the thickness t of the voltage switchable dielectric layer 212 between the pad 214 and an adjacent one of the interconnects 206, along which the lowest resistance path lies. For example, the lowest resistance path for the ESD current can be from the leftmost terminal 210 to the leftmost upper interconnect 206, through the voltage switchable dielectric layer 212, and to the pad 214. The overlap between the pad 214 and the terminal 210 can be adjusted to meet a designed discharge current target. Similarly, the distance d between the pad 214 and the leftmost terminal 210 can be selected to control the switching voltage of the voltage switchable dielectric layer 212.

A voltage switchable dielectric layer can also be used to protect a stacked IC from ESD events. For example, an ESD event can occur during handling or shipment of a first tier die or a second tier die. If the first tier die is at a different potential than the second tier die, an ESD event can also occur during the coupling of the first tier die to the second tier die. FIG. 3 is a cross-sectional view illustrating an exemplary stacked die with electrostatic discharge protection in accordance with a first embodiment. Stacked die 300 includes a first tier die 360 and a second tier die 350. The first tier die 360 includes a package connection 314 that is coupled to an interconnect 306. Interconnect 306 is separated by dielectric layer 308 and coupled to transistor gate 304 on substrate 302. Interconnect 306 is also coupled to via 316, interconnect 340, and terminal 342. In one embodiment, the leftmost terminal 342 is coupled to ground, the rightmost terminal 342 is coupled to the power supply, and the intermediate terminal 342 is coupled to I/O (input/output).

The voltage switchable dielectric layer 312 deposited on the back side of the substrate 302 of the first tier die 360 provides a conductive path between the terminals 342. Terminal 342 is coupled to package connection 320 of the second tier die 350. Package connection 320 is coupled to interconnect 336, transistor gate 334, and substrate 332. Dielectric layer 338 separates the interconnects 336.

During an ESD event, such as when the second tier die 350 is coupled to the first tier die 360, current may flow from one of the terminals 342 to one of the interconnects 340, through the voltage switchable dielectric layer 312 to another of the interconnects 340, and through the other of the terminals 342 to ground. The switching voltage of the voltage switchable dielectric layer 312 between two of the terminals 342 depends in part on the distance between the interconnects 340.

According to one embodiment, a dielectric layer can be placed between the substrate and the voltage switchable dielectric layer. FIG. 4 is a cross-sectional view illustrating an exemplary stacked die with electrostatic discharge protection in accordance with a second embodiment. Dielectric layer 408 is deposited on the back side of substrate 302, and interconnect 340 couples terminal 342 to via 316.
In one embodiment, the leftmost terminal 342 is coupled to ground, the rightmost terminal 342 is coupled to the power supply, and the intermediate terminal 342 is coupled to I/O (input/output).

A voltage switchable dielectric layer 412 is deposited over the dielectric layer 408 and partially surrounds the terminals 342. According to one embodiment, after the voltage switchable dielectric layer 412 is deposited on the dielectric layer 408, the voltage switchable dielectric layer 412 is patterned and the terminals 342 are deposited in the patterned voltage switchable dielectric layer 412. The switching voltage of the voltage switchable dielectric layer 412 between the terminals 342 is based in part on the distance between the terminals 342 involved in the ESD event.

According to another embodiment, a pad can be deposited as a terminal on the voltage switchable dielectric layer. FIG. 5 is a cross-sectional view illustrating an exemplary stacked die with electrostatic discharge protection in accordance with a third embodiment. Pad 510 is deposited over voltage switchable dielectric layer 412 and coupled to terminal 342. When current from an ESD event is conducted through the voltage switchable dielectric layer 412 of FIG. 5, and the lowest resistance path for the current is from the pad 510 through the voltage switchable dielectric layer 412 to the interconnect 340, the switching voltage is based in part on the thickness of the voltage switchable dielectric layer 412 between the pad 510 and the interconnect 340. Pad 510 can be coupled to ground, power, and/or input/output lines of the second tier die 350. In one embodiment, the leftmost pad 510 is coupled to ground, the rightmost pad 510 is coupled to the power supply, and the intermediate pad 510 is coupled to the I/O (input/output) line.

The voltage switchable dielectric layer deposited on the die provides ESD protection for the circuitry on the die and consumes very little to no additional die area. Thus, a die employing a voltage switchable dielectric layer for ESD protection can have a smaller form factor than a die incorporating conventional ESD protection circuitry and structures. The switching voltage can be controlled, for example, by the material properties of the voltage switchable dielectric layer, the thickness of the voltage switchable dielectric layer, and the distance between the terminals on the die surrounded by the voltage switchable dielectric layer. In one embodiment, the voltage switchable dielectric layer is a voltage switchable dielectric available from Shocking Technologies, Inc. of San Jose, California.

FIG. 6 is a block diagram showing an exemplary wireless communication system 600 in which embodiments of the present invention may be advantageously employed. For purposes of illustration, FIG. 6 shows three remote units 620, 630, and 650 and two base stations 640. It will be appreciated that a wireless communication system can have many more remote units and base stations. Remote units 620, 630, and 650 include IC devices 625A, 625C, and 625B that include the disclosed ESD protection. It will be appreciated that any device containing an IC may also include the ESD protection disclosed herein, including base stations, switching devices, and network devices. FIG. 6 shows forward link signals 680 from base stations 640 to remote units 620, 630, and 650 and reverse link signals 690 from remote units 620, 630, and 650 to base stations 640.
In FIG. 6, remote unit 620 is shown as a mobile telephone, remote unit 630 is shown as a portable computer, and remote unit 650 is shown as a fixed location remote unit in a wireless local loop system. For example, the remote unit can be a mobile phone, a handheld personal communication system (PCS) unit, a portable data unit such as a personal digital assistant, a GPS enabled device, a navigation device, a set top box, a music player, a video player, an entertainment unit, a fixed location data unit such as a meter reading device, or any other device that stores or retrieves data or computer instructions, or any combination thereof. Although FIG. 6 illustrates remote units in accordance with the teachings of the present invention, the invention is not limited to these exemplary illustrated units. Embodiments of the invention may be suitably employed in any device that includes ESD protection.
FIG. 7 is a block diagram illustrating a design workstation for circuitry, layout, and logic design of a semiconductor component, such as the ESD protection configuration disclosed above. The design workstation 700 includes a hard disk 701 containing operating system software, supporting files, and design software such as Cadence or OrCAD. Design workstation 700 also includes a display to facilitate the design of circuit 710 or semiconductor component 712, such as a packaged integrated circuit with ESD protection. Storage medium 704 is provided for tangibly storing circuit design 710 or semiconductor component 712. Circuit design 710 or semiconductor component 712 can be stored on storage medium 704 in a file format such as GDSII or GERBER. Storage medium 704 can be a CD-ROM, a DVD, a hard drive, a flash memory, or other suitable device. In addition, design workstation 700 includes a drive device 703 for accepting input from storage medium 704 or writing output to storage medium 704.
The data recorded on the storage medium 704 can specify a logic circuit configuration, pattern data for a photolithographic mask, or mask pattern data for a serial write tool such as electron beam lithography. The data may further include logic verification data, such as timing diagrams or net circuits associated with logic simulations. Providing data on storage medium 704 facilitates the design of circuit design 710 or semiconductor component 712 by reducing the number of processes used to design the semiconductor wafer.
For firmware and/or software implementations, the methods can be implemented by modules (e.g., procedures, functions, and the like) that perform the functions described herein. Any machine readable medium tangibly embodying instructions can be used to implement the methods described herein. For example, the software code can be stored in a memory and executed by a processor unit. The memory can be implemented within the processor unit or external to the processor unit. As used herein, the term "memory" refers to any type of long-term, short-term, volatile, non-volatile, or other memory, and is not limited to any particular type of memory or number of memories or types of media on which the memory is stored. If implemented in firmware and/or software, the functions may be stored on a computer readable medium as one or more instructions or code. Examples include computer readable media encoded with a data structure and computer readable media encoded with a computer program. Computer readable media includes physical computer storage media. 
The storage medium can be any available media that can be accessed by a computer. By way of example and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer readable media.
In addition to being stored on a computer readable medium, the instructions and/or data may be provided as signals on a transmission medium included in a communication device. For example, a communication device can include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to perform the functions recited in the claims.
Although specific circuitry has been set forth, those skilled in the art will appreciate that not all of the disclosed circuitry is required to practice the invention. Moreover, some well known circuits have not been described to maintain focus on the present invention.
Having described the invention and its advantages, it is to be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the invention. For example, relational terms such as "above" and "below" are used with respect to a substrate or electronic device. Of course, if the substrate or electronic device is inverted, the upper side becomes the lower side and vice versa. In addition, if oriented sideways, above and below may refer to the sides of the substrate or electronic device. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from this disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to cover such processes, machines, manufacture, compositions of matter, means, methods, or steps. |
A method ( 200 ) of placing inputs, outputs, and clocks in a circuit design can include assigning ( 205 ) initial locations to inputs and outputs of the circuit design, selecting ( 210 ) at least one component type for the circuit design, and generating ( 215 ) a cost function having parameters corresponding to the selected component type. The method further can include annealing ( 220 ) the selected component type using the cost function and determining design constraints ( 225 ) for the selected component type according to the annealing step. The method can repeat to process additional component types such that design constraints determined for each additional component type do not violate design constraints determined for prior component types. |
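A minimal sketch of the feed-forward flow summarized in this abstract is shown below. The helper callables (the cost-function builder, the annealer, and the constraint-derivation step) are hypothetical placeholders supplied by the caller and are not part of the disclosed method; the sketch only illustrates how constraints determined for one component type are carried forward so that later annealing passes build upon them.

```python
# Minimal sketch of the feed-forward placement flow (steps 205-225 above).
# The callables are hypothetical placeholders; the point is only that
# constraints accumulate across component types and are passed forward.
from typing import Callable, Dict, List

def place_design(
    initial_io_locations: Dict[str, int],
    component_types: List[str],
    build_cost_fn: Callable[[str, List[str]], Callable[[Dict[str, int]], float]],
    anneal: Callable[[Dict[str, int], str, Callable], Dict[str, int]],
    derive_constraints: Callable[[Dict[str, int], str], List[str]],
):
    placement = dict(initial_io_locations)   # step 205: initial I/O locations
    constraints: List[str] = []
    for ctype in component_types:            # step 210: select a component type
        cost_fn = build_cost_fn(ctype, constraints)          # step 215
        placement = anneal(placement, ctype, cost_fn)        # step 220
        constraints += derive_constraints(placement, ctype)  # step 225
    return placement, constraints
```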
What is claimed is:1. A method of placing a circuit design comprising the steps of:(a) assigning initial locations on an integrated circuit to inputs and outputs of the circuit design;(b) selecting a component type used in the circuit design;(c) generating a cost function corresponding to the inputs and outputs and the selected component type;(d) annealing the selected component type using the cost function;(e) determining location constraints for the inputs and outputs and the selected component type according to said step (d); and(f) repeating steps (b)-e) for additional component types such that location constraints determined for each additional component type do not violate location constraints determined for prior component types.2. The method of claim 1, wherein the at least one component type is selected from a group consisting of inputs and outputs, local clock sources, and global clock sources.3. The method of claim 1, wherein the selected component type includes inputs and outputs or local clock sources, and wherein the additional component types include global clock sources.4. The method of claim 3, said step (d) further comprising the step of assigning locations to global clock sources.5. The method of claim 4, said step (e) further comprising the step of constraining loads of global clock sources to windows of the integrated circuit.6. The method of claim 1, said step (d) further comprising the step of assigning individual inputs and outputs to input/output banks of the integrated circuit.7. The method of claim 1, said step (d) further comprising the step of assigning locations to local clock sources.8. The method of claim 7, said step (e) further comprising the step of constraining loads of local clock sources to windows of the integrated circuit.9. A method of placing a circuit design comprising the steps of:(a) assigning initial locations on an integrated circuit to inputs and outputs of the circuit design;(b) selecting at least two different component types used in the circuit design;(c) generating a cost function corresponding to the inputs and outputs and each selected component type;(d) annealing the selected component types of the circuit design simultaneously using the cost function; and(e) determining location constraints for the circuit design according to said step (d).10. The method of claim 9, said step (b) further comprising the step of selecting inputs and outputs, local clock sources, and global clock sources.11. The method of claim 10, said step (d) further comprising the steps of:assigning individual inputs and outputs to input/output banks of the integrated circuit;assigning locations to local clock sources; andassigning locations to global clock sources.12. The method of claim 11, said step (e) further comprising the steps of:constraining loads of local clock sources to windows of the integrated circuit; andconstraining loads of global clock sources to windows of the integrated circuit.13. 
A system for placing a circuit design comprising the steps of:means for assigning initial locations on an integrated circuit to inputs and outputs of the circuit design;means for selecting a component type used in the circuit design;means for generating a cost function corresponding to the inputs and outputs and the selected component type;means for annealing the inputs and outputs and the selected component type using the cost function;means for determining location constraints for the selected component type according to said annealing step; andmeans for causing said means for assigning, means for selecting, means for generating, means for annealing, and means for determining to operate on additional component types such that location constraints determined for each additional component type do not violate design constraints determined for prior component types.14. The system of claim 13, wherein the component types are selected from the group consisting of inputs and outputs, local clock sources, and global clock sources.15. The system of claim 14, wherein the selected component types include inputs and outputs and local clock sources.16. The system of claim 15, wherein the additional component types include global clock sources.17. The system of claim 16, wherein said means for annealing further comprise means for assigning locations to global clock sources.18. The system of claim 15, wherein said means for annealing further comprise means for assigning individual inputs and outputs to input/output banks of the circuit design.19. The system of claim 15, wherein said means for annealing further comprise means for assigning locations to local clock sources.20. The system of claim 19, wherein said means for determining further comprise means for constraining loads of local clock sources to windows of the integrated circuit.21. The system of claim 17, wherein said means for determining further comprise means for constraining loads of global clock sources to windows of the integrated circuit.22. A system for placing a circuit design comprising the steps of:means for assigning initial locations on an integrated circuit to inputs and outputs of the circuit design;means for selecting at least two different component types used in the circuit design;means for generating a cost function corresponding to the inputs and outputs and each selected component type;means for annealing the inputs and outputs and the selected component types of the circuit design simultaneously using the cost function; andmeans for determining location constraints for the circuit design according to said annealing step.23. The system of claim 22, wherein said means for selecting select inputs and outputs, local clock sources, and global clock sources.24. The system of claim 23, wherein said means for annealing further comprise:means for assigning individual inputs and outputs to input/output banks of the integrated circuit;means for assigning locations to local clock sources; andmeans for assigning locations to global clock sources.25. The system of claim 24, wherein said means for determining further comprise:means for constraining loads of local clock sources to windows of the integrated circuit; andmeans for constraining loads of global clock sources to windows of the integrated circuit.26. 
A machine readable storage, having stored thereon a computer program having a plurality of code sections executable by a machine for causing the machine to perform the steps of:(a) assigning initial locations on an integrated circuit to inputs and outputs of the circuit design;(b) selecting component type used in the circuit design;(c) generating a cost function corresponding to the inputs and outputs and the selected component type;(d) annealing the inputs and outputs and the selected component type using the cost function;(e) determining location constraints for the inputs and outputs and the selected component type according to said step (d); and(f) repeating steps (b)-(e) for additional component types such that location constraints determined for each additional component type do not violate location constraints determined for prior component types.27. The machine readable storage of claim 26, wherein the component types are selected from the group consisting of inputs and outputs, local clock sources, and global clock sources.28. The machine readable storage of claim 27, wherein the selected component types include inputs and outputs and local clock sources.29. The machine readable storage of claim 28, wherein the additional component types include global clock sources.30. The machine readable storage of claim 29, said step (d) further comprising the step of assigning locations to global clock sources.31. The machine readable storage of claim 30, said step (e) further comprising the step of constraining loads of global clock sources to windows of the integrated circuit.32. The machine readable storage of claim 28, said step (d) further comprising the step of assigning individual inputs and outputs to input/output banks of the integrated circuit.33. The machine readable storage of claim 28, said step (d) further comprising the step of assigning locations to local clock sources.34. The machine readable storage of claim 33, said step (e) further comprising the step of constraining loads of local clock sources to windows of the integrated circuit.35. A machine readable storage, having stored thereon a computer program having a plurality of code sections executable by a machine for causing the machine to perform the steps of:(a) assigning initial locations on an integrated circuit to inputs and outputs of the circuit design;(b) selecting at least two different component types used in the circuit design;(c) generating a cost function corresponding to the inputs and outputs and each selected component type;(d) annealing the inputs and outputs and the selected component types of the circuit design simultaneously using the cost function; and(e) determining location constraints for the circuit design according to said step (d).36. The machine readable storage of claim 35, said step (b) further comprising the step of selecting inputs and outputs, local clock sources, and global clock sources.37. The machine readable storage of claim 36, said step (d) further comprising the steps of:assigning individual inputs to input/output banks of the circuit design;assigning locations to local clock sources; andassigning locations to global clock sources.38. The machine readable storage of claim 37, said step (e) further comprising the steps of:constraining loads of local clock sources to windows of the integrated circuit; andconstraining loads of global clock sources to windows of the integrated circuit. |
BACKGROUND1. Field of the InventionThis invention relates to the field of physical circuit design and, more particularly, to the placement of a design.2. Description of the Related ArtDesigns for Field Programmable Gate Arrays (FPGA's) have become increasingly complex and heterogeneous. Modern FPGA designs can include a variety of different components or resources including, but not limited to, block random access memory (RAM), multipliers, processors, and the like. This increasing complexity makes placement of circuit design components more cumbersome.Components of circuit designs traditionally have been placed through a series of discrete phases or tasks. Each task is performed sequentially and independently of the others. More particularly, for a given circuit design, a general placement is first performed. The general placement assigns locations on the physical circuit design or chip to inputs and outputs (I/O's).After the general placement, the I/O assignments are analyzed and relocated as necessary to ensure that the I/O's conform with select I/O standards. The select I/O standards ensure that I/O's located on a same bank of the physical circuit design do not conflict with one another.The I/O's of a FPGA device can be configured to conform to any one of a variety of different I/O standards. Not all of these standards, however, are compatible with one another. To avoid incompatibility issues, the I/O's of a FPGA circuit design are arranged in groupings called banks. While banks can vary from one circuit design to another, typically, banks span approximately one-half the length of an edge of a chip. Accordingly, a conventional rectangular chip can include 8 banks of I/O's, 2 per side. The I/O's within each bank must conform to I/O standards that are compatible with one another.After the I/O's are placed, local clock sources can be assigned to physical locations on the circuit design. While the task of placing the local clock sources begins after the placement of the I/O's, the task operates in an independent manner. In other words, the local clock placement task operates without any knowledge of I/O assignments or design constraints determined during the general I/O placement task or the select I/O placement task.The local clock source placement task seeks to control or minimize clock skew and clock signal delay by assigning local clock sources to particular physical locations within the circuit design. Once the local clock sources are assigned to locations, the loads of the local clock sources of the circuit design can be constrained. The circuit design can be divided into one or more areas often referred to as windows. As such, the loads for each local clock source can be assigned to a particular window as dictated by predetermined design constraints for minimizing clock skew and clock signal delay.Finally, the global clock sources can be assigned to physical locations on the circuit design. Like the other placement tasks, the global clock placement task begins executing without any knowledge of the placement of I/O's or local clock sources. Once the global clock sources are assigned to locations, the loads of the global clock sources can be assigned to windows of the circuit design.This placement strategy, however, fails to acknowledge the interdependencies of each respective placement task. 
That is, while a proper placement may be determined after the I/O's are placed, the subsequent task of placing local clock sources may lead to an improper or illegal circuit placement with respect to design constraints developed for the I/O's during the I/O placement task. In other words, the placement developed by the local clock placement task may disregard requirements determined by the I/O placement task, for example by locating incompatible I/O's within the same bank. Similarly, the global clock placement task may determine location assignments for the global clocks which disregard design constraints determined for the local clocks.Despite the fact that each placement task influences the other placement tasks, each is performed without incorporating any knowledge of design constraints determined in prior tasks. What is needed is a technique in which design constraints and placement information determined during each individual placement task can be utilized and incorporated in subsequent placement tasks.SUMMARY OF THE INVENTIONVarious embodiments of the present invention provide a solution for placing inputs and outputs (I/O) as well as various clock sources. Rather than placing I/O, local clock sources and loads, and global clock sources and loads independently of one another, the interdependencies between each task are recognized and the components placed together. One embodiment of the present invention can place I/O's, local clock sources and loads, and global clock sources and loads simultaneously.In another embodiment, the various circuit design components can be placed through a series of tasks which operate in cooperation with one another. As design constraints are determined, those constraints can be provided in a feed-forward manner to successive placement tasks. That is, design constraints determined during the placement of local clock sources and I/O's can be used during the next placement task, in this case placing global clock sources. The resulting final placement of the circuit design conforms to design constraints for each of the successive placement tasks.One embodiment of the present invention can include a method of placing a circuit design. The method can include assigning initial locations to inputs and outputs of the circuit design, selecting at least one component type for the circuit design, generating a cost function having parameters corresponding to the selected component type, annealing the selected component type using the cost function, and determining design constraints for the selected component type according to the annealing step. The above steps can be repeated for additional component types such that design constraints determined for each additional component type do not violate design constraints determined for prior component types.The component types to be selected can include inputs and outputs, local clock sources, and global clock sources. A component type selection indicating inputs and outputs and local clock sources can be received. Accordingly, the step of annealing can include the step of assigning individual I/O's to banks of the circuit design and assigning locations to the local clock sources. The step of determining design constraints can include constraining loads of local clock sources to windows of the circuit design.The additional component types can include global clock sources. As such, the annealing step can include the step of assigning locations to global clock sources. 
The step of determining design constraints can include the step of constraining loads of global clock sources to windows of the circuit design.Another embodiment of the present invention can include a method of placing a circuit design including assigning initial locations to inputs and outputs of the circuit design, selecting at least two different component types for the circuit design, generating a cost function having parameters corresponding to each selected component type, annealing the selected component types of the circuit design simultaneously using the cost function, and determining design constraints for the circuit design according to the annealing step.The selecting step can include selecting I/O's, local clock sources, and global clock sources. Accordingly, the annealing step can include assigning each individual I/O to a bank of the circuit design, assigning locations to local clock sources, and assigning locations to global clock sources. The determining step can include constraining loads of local clock sources to windows of the circuit design and constraining loads of global clock sources to windows of the circuit design.BRIEF DESCRIPTION OF THE DRAWINGSThere are shown in the drawings, embodiments which are presently preferred, it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.FIG. 1 is a schematic diagram illustrating a system for placing components of a circuit design in accordance with one embodiment of the present invention.FIG. 2 is a flow chart illustrating a method of placing a circuit design in accordance with another embodiment of the present invention.DETAILED DESCRIPTION OF THE INVENTIONEmbodiments of the present invention provide a method, system, and apparatus for placing inputs and outputs (I/O's) and clock resources for circuit designs. In accordance with one embodiment of the present invention, as design constraints are determined by individual placement tasks, the design constraints are communicated to subsequent placement tasks. Each placement task executes with the benefit of knowing the design constraints determined from previous placement tasks and builds upon those design constraints.In accordance with the inventive arrangements disclosed herein, clocks and I/O's are placed and loads of the clocks are constrained to portions of the circuit design. A detailed placement can be used in cooperation with an embodiment of the present invention to place remaining circuit logic. The resulting placement of the circuit design conforms to the design constraints for each individual task. In another embodiment, one or more placement tasks can be combined such that more than one type of circuit component is placed simultaneously. In either case, the circuit design can be placed in a manner that recognizes the interdependencies of placing different types of circuit components.FIG. 1 is a schematic diagram illustrating a system 100 for placing components of a circuit design in accordance with one embodiment of the present invention. As shown, the system 100 can include an I/O placer 105 and a clock placer 110. The I/O placer 105 is a software component or application that determines an initial placement of I/O for the circuit design. 
The I/O placer 105 assigns physical locations on the chip to the circuit I/O, thereby associating circuit I/O's with physical circuit pins.
According to one embodiment of the present invention, the I/O placer 105 assigns locations to circuit I/O's without regard to whether each particular I/O is located in a bank with other compatible I/O's. As such, the initial placement of the circuit I/O's may be illegal in that one or more incompatible I/O standards are being utilized within the same bank of the circuit design. In other words, the initial I/O placement determined by the I/O placer 105 may not be compliant with select I/O standards.
The clock placer 110 is a software component or application that determines initial placements of local and global clock sources. The clock placer 110 also operates upon the initial placements to systematically determine a final placement for I/O's, local clock sources and loads, as well as global clock sources and loads. As shown, the clock placer 110 includes a cost processor 115 and an annealer 120. The cost processor 115 models predetermined input timing requirements and other placement constraints as cost functions which are used to determine the quality of placement decisions made during the annealing process. The cost processor 115 computes any of several different cost functions depending upon the placement task at hand.
In the case of placing I/O's in conformance with select I/O standards, the cost function penalizes component movements that violate these standards. With respect to local clock sources, the cost function penalizes movements which lead to the local clock nets having excessive skew or delay as specified by predetermined design specifications or tolerances. With respect to global clock sources, the cost function penalizes component movements that violate any previously determined constraints as well as predetermined global clock placement rules.
The annealer 120 determines placement solutions for selected circuit components by implementing a simulated annealing process. The simulated annealing process is one variety of stochastic hill-climber algorithm inspired through an analogy with the cooling of metals. The simulated annealing process is disclosed by S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi: "Optimization by simulated annealing", Science, vol. 220, no. 4598, pp. 671-680 (May 13, 1983), which is fully incorporated herein by reference.
The annealing process implemented by the annealer 120 begins with a simulated high temperature and begins randomly generating placement solutions by swapping the position of two or more components such as I/O's, local clock sources, or global clock sources. After each component swap or iteration, the annealer 120 accesses the cost processor 115 to recalculate the relevant cost function to evaluate the proposed solution. If the cost function decreases, indicating that the proposed solution has improved over the last iteration, the solution can be accepted as the current solution and used as a basis for subsequent annealing iterations. If, however, the cost function increases, the solution may or may not be accepted. Specifically, placement solutions producing increasing cost functions can still be accepted as the current solution, but only with a probability that is dependent upon the current value of the temperature.
The probability of accepting a proposed solution showing an increase in a cost function decreases as the temperature decreases during the annealing process. 
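A minimal, generic sketch of the acceptance rule just described is given below. It is a standard Metropolis-style simulated annealing loop over an abstract placement, not the annealer 120 itself; the swap operator, the toy cost function, and the cooling parameters are hypothetical placeholders chosen only to make the example runnable.

```python
# Minimal sketch of a Metropolis-style simulated annealing loop, as described
# above: cost-improving swaps are always kept, cost-increasing swaps are kept
# with a probability that shrinks as the temperature falls.
# The cost function and parameters are illustrative placeholders only.
import math
import random
from typing import Callable, List

def simulated_anneal(placement: List[int],
                     cost: Callable[[List[int]], float],
                     t_start: float = 100.0,
                     t_end: float = 0.01,
                     alpha: float = 0.95,
                     moves_per_temp: int = 200) -> List[int]:
    current = list(placement)
    current_cost = cost(current)
    temperature = t_start
    while temperature > t_end:                # cooling schedule: T <- alpha * T
        for _ in range(moves_per_temp):
            candidate = list(current)
            i, j = random.sample(range(len(candidate)), 2)
            candidate[i], candidate[j] = candidate[j], candidate[i]  # swap two components
            delta = cost(candidate) - current_cost
            # Accept improvements outright; accept degradations with probability
            # exp(-delta / T), which decreases as the temperature decreases.
            if delta <= 0 or random.random() < math.exp(-delta / temperature):
                current, current_cost = candidate, current_cost + delta
        temperature *= alpha
    return current

# Example usage with a toy cost that prefers small values in low-index slots:
# final = simulated_anneal(list(range(8)), cost=lambda p: sum(i * v for i, v in enumerate(p)))
```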
The annealing process incorporates a cooling schedule, or rate of decrease of temperature, such that at high temperatures, almost any proposed placement solution is accepted. Accordingly, at high temperatures, the annealer 120 stresses the exploration of different placement solutions. At lower temperatures, the probability of accepting a solution in which the cost function increases is lessened. Thus, at lower temperatures, the annealer 120 stresses exploitation of placement solutions under development and converges to a solution.
In operation, an unplaced circuit design 125 can be provided to the I/O placer 105. The I/O placer 105 can assign initial locations to I/O's for the circuit design. The circuit design with initially placed I/O's 130 then can be provided to the clock placer 110.
In one embodiment of the present invention, the clock placer 110 can simultaneously place the I/O's, local clock sources, as well as global clock sources. In another embodiment, the clock placer 110 can operate on a grouping of one or more component types and then operate on subsequent groupings of component types until the circuit design is placed. Which component types the clock placer 110 operates upon simultaneously is determined by the component type selections 135 as specified by a circuit designer.
Accordingly, the clock placer 110 can determine final placements for the I/O's, local clock sources, and global clock sources, simultaneously, independently, or in various combinations as specified by the component type selections 135. The circuit design with I/O, local and global clock source constraints 140 can be provided as output. In the case where each component type is operated upon simultaneously by the clock placer 110, the placement constraints determined will conform to the placement constraints for each respective component type. If component types are placed individually or in groupings, the design constraints determined by each iteration of the annealing process can be used as a baseline for performing the annealing process on subsequent components to be placed.
Those skilled in the art will recognize that the present invention is not limited by the particular software configuration or architecture used. For example, while the system 100 is depicted as having two components, according to another embodiment, the components can be combined into a single and more complex program. According to another embodiment, each of the various tasks described herein can be implemented using one or more individual software components or applications.
FIG. 2 is a flow chart illustrating a method 200 of placing a circuit design in accordance with one embodiment of the present invention. The method 200 can begin in step 205 where an initial placement of the I/O's of the circuit design can be performed. More particularly, the I/O's of the circuit design can be initially assigned to banks of the circuit design.
In step 210, a designer selection indicating which component types are to be operated upon simultaneously can be received. 
While all component types such as I/O's, local clock sources, and global clock sources can be operated upon at the same time during the annealing process, those skilled in the art will recognize that such a processing task can be overly time consuming.
Accordingly, a designer can specify that each placement task is to be performed separately such that as placement constraints are determined by each respective task, those constraints are provided to the next placement task in feed-forward fashion to be used as a baseline for performing the next placement task. Alternatively, the designer selection can specify that more than one task such as select I/O determinations and local clock source assignments should be performed simultaneously. In that case, placement constraints determined during the combined tasks are used during the subsequent placement task of constraining the global clock sources. Thus, while the method 200 illustrates the case where select I/O requirements and local clock sources are annealed together, those skilled in the art will appreciate that the placement tasks also can be performed sequentially with each task providing determined placement constraints to the next task, or simultaneously such that all of the aforementioned component types are annealed at the same time and subsequently constrained at the same time.
Continuing to step 215, the cost function for the I/O's and the local clock sources can be generated. The cost function for select I/O, referenced as CselectI/O, regulates the placement of I/O's according to select I/O standards and, as such, models the select I/O standards. The cost function CselectI/O penalizes any movement of components that leads to an illegal placement of I/O's with respect to select I/O standards.
The cost function for local clocks, referred to as CLocalClock, penalizes any movement of components that produces local clock nets having skews and delays that are larger than predetermined design specifications. Local clock placement is defined as finding appropriate locations for every local clock net in the circuit design and determining locations or region assignments for all components that are either connected to the local clock net or to the source of data to be latched by the local clock source such that all connections related to the local clock are routed with low delays and skews in accordance with the design specifications. The cost function CLocalClock models these goals.
Based upon each of the above cost functions, the following cost function can be used to model select I/O standards and local clock source design tolerances: C = CselectI/O + CLocalClock.
In step 220, the I/O's and the local clock sources can be annealed using the above cost function. As the annealing process continues through repeated iterations, the cost function C guides the annealer toward a select I/O compliant and local clock source legal solution. The I/O's are constrained to particular banks of the circuit design and the local clock sources are assigned to particular physical locations on the circuit design. The annealer determines appropriate locations for every local clock net in the circuit design.
The annealer continues to iterate until it converges upon a solution. Those skilled in the art will recognize, however, that in some cases the annealer may not converge upon a solution. If the annealer does not converge upon a solution, an error condition can be generated and the method can end. 
Such can be the case in situations where the annealer does not converge upon a solution after a predetermined time period or after a predetermined number of iterations.
In step 225, as the I/O's and the local clock sources have been assigned to particular locations in the circuit design, the annealer seeks to determine location assignments for all components connected to the local clock net or the sources of data to be latched by the clock such that all connections related to the local clock sources are routed with delays and skews that do not exceed predetermined design tolerances. In other words, the local clock loads are assigned or constrained to particular windows, or subdivisions of windows referred to as regions, of the circuit design so that connections between the local clock loads and the local clock sources meet the specified design tolerances. Placement of local clock loads within a particular window is further described in the co-pending U.S. patent application titled "AUTOMATED LOCAL CLOCK PLACEMENT FOR FPGA DESIGNS" by Qiang Wang, et al., filed concurrently with this application, and which is herein incorporated by reference.
In step 230, the cost function for the global clocks is determined. Additionally, the clock placer determines an initial placement for the global clock sources. The cost function for the global clocks reflects all of the predetermined clock rules that must be obeyed and all of the constraints that have been determined and imposed by each previous stage or task.
More particularly, global clock source placement includes the placement of various global clock source types, such as clock multiplexers, clock managers, and global clock I/O's, into available locations for each type of global clock source. The above three types of components have dedicated connections between one another which ensure that the fastest signal routing is used to connect the various components. As such, only certain placement configurations of the aforementioned global clock source types allow the use of the dedicated signal routes. The global clock cost function models the placement configurations between various global clock source types by penalizing configurations that forbid the use of dedicated connections.
In step 235, the global clock sources are annealed to assign locations to each global clock source. The annealer begins determining a final placement for the global clock sources using the initial global clock placement as a baseline. As noted, the annealer determines a placement for each global clock source so that interconnects between global clock sources can be routed with minimum delays. Additionally, the annealer assigns locations to the global clock sources so that all previously determined constraints are obeyed and no clock rules are violated.
The clock rules specify placement configurations that allow the global clock sources to use the dedicated routing resources allocated for the circuit design. For example, according to one embodiment of the present invention, there can be 16 locations on a chip where the clock multiplexer can be placed and 4 locations where a clock manager can be placed. Only certain placement configurations of the clock manager and the clock multiplexer, however, allow usage of the dedicated routing resources.
Thus, when annealing the global clock sources, all moves that violate any clock constraints are penalized. Only those moves that do not violate local clock placement constraints are allowed. 
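A minimal sketch of how such a penalty-based cost can be assembled for the global clock stage is shown below. The individual rule checks, the compatibility table, and the penalty weight are hypothetical stand-ins (the disclosure does not give concrete rule functions); the sketch only shows the pattern of summing a base cost with large penalties for placements that break a clock rule or a constraint carried forward from the earlier select I/O and local clock stages.

```python
# Minimal sketch of a penalty-style cost function for the global clock stage:
# a base cost (e.g. estimated interconnect delay) plus heavy penalties for any
# placement that breaks a clock rule or a constraint determined by an earlier
# placement stage. The rule checks and weights are illustrative placeholders.
from typing import Callable, Dict, List

Placement = Dict[str, int]   # component name -> location index

def global_clock_cost(placement: Placement,
                      base_cost: Callable[[Placement], float],
                      clock_rules: List[Callable[[Placement], bool]],
                      prior_constraints: List[Callable[[Placement], bool]],
                      penalty: float = 1e6) -> float:
    cost = base_cost(placement)
    # Penalize configurations that forbid use of the dedicated clock routes.
    cost += penalty * sum(1 for rule_ok in clock_rules if not rule_ok(placement))
    # Penalize violations of constraints carried over from previous stages.
    cost += penalty * sum(1 for ok in prior_constraints if not ok(placement))
    return cost

# Example rule in the spirit of the dedicated-route restriction: a clock manager
# and the clock multiplexer it drives must sit in locations joined by a dedicated
# route. The compatibility table here is purely hypothetical.
DEDICATED_ROUTES = {(0, 0), (0, 1), (1, 2), (1, 3)}   # (manager_loc, mux_loc) pairs

def uses_dedicated_route(manager: str, mux: str) -> Callable[[Placement], bool]:
    return lambda p: (p[manager], p[mux]) in DEDICATED_ROUTES
```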
In this manner, the cost function guides the annealer toward a solution that is legal with respect to all clock rules, obeys all select I/O standards, and obeys local clock constraints.
As noted with respect to step 220, if the annealer does not converge upon a solution, an error condition can be generated and the method can end. The annealer can be allowed to iterate a predetermined number of times or for a predetermined amount of time before terminating without converging upon a placement solution for the circuit design.
In step 240, constraints for the global clock loads can be determined. Once the global clock sources have been assigned to particular positions, the loads of the global clock sources can be assigned to particular windows or regions of the circuit design. The loads of the global clocks are constrained such that no clock window and/or region includes loads for a pair of conflicting global clocks.
In illustration, a clock region represents a portion of a chip, typically approximating 1/16 to 1/4 of the total area of the chip depending upon the particular device. This restriction arises in particular circuit design architectures utilizing paired global clock sources. In particular chip architectures, clock multiplexer locations can be divided into pairs. If, for example, there are 16 possible locations where a clock multiplexer can be placed in a circuit design, 8 possible pairs exist. If two clock multiplexers are placed into a pair of these multiplexer locations, only one of the clock multiplexers can drive the loads present in such a region of the circuit design. No single region can have components that are driven by both clock multiplexers of a pair. This situation can be avoided by placing one multiplexer in one pair, leaving the other location of that pair empty, and placing the other multiplexer in another pair.
The clock placer has a significant degree of freedom in placing global clock buffers and constrains the loads to particular circuit windows and/or regions while obeying constraints relating to select I/O, global clock sources, and local clock sources.
After step 240, the method can end. A detailed placement can be implemented to follow the method 200 which can place remaining logic components of the circuit design. An embodiment of the present invention provides a method, system, and apparatus for placing clocks and I/O's of a circuit design as well as constraining the loads of the clocks to particular portions of the circuit design.
In accordance with an embodiment of the present invention disclosed herein, I/O's, local clock sources and loads, as well as global clock sources and loads can be placed such that each task is performed without violating constraints determined during previously performed tasks. The resulting placement complies with select I/O requirements as well as local and global clock source constraints.
The present invention can be realized in hardware, software, or a combination of hardware and software. The present invention can be realized in a centralized fashion in one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. 
A typical combination of hardware and software can be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.The present invention also can be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.This invention can be embodied in other forms without departing from the spirit or essential attributes thereof. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the invention. |
A packaged device includes a package substrate and a plurality of optical structures formed on a semiconductive substrate and positioned on the package substrate, forming an active area. The packaged device further includes a contiguous solderable seal structure surrounding the plurality of optical structures and a cap formed over the plurality of optical structures and upon the contiguous solderable seal structure. The cap has, formed thereon, patterned metalization. The patterned metalization is located over the active area. |
What is claimed is: 1. A packaged device, comprising:a package substrate; a plurality of optical structures formed on a semiconductive substrate and positioned on said package substrate, forming an active area; a contiguous solderable seal structure surrounding said plurality of optical structures; and a cap formed over said plurality of optical structures and upon said contiguous solderable seal structure; said cap having, formed thereon, patterned metalization, said patterned metalization being located over said active area. 2. The packaged device as claimed in claim 1 wherein said optical structures are a plurality of moveable optical mirrors.3. The packaged device as claimed in claim 1, wherein said cap is a glass lid.4. The packaged device as claimed in claim 1, wherein said patterned metalization is a solid metal layer having clear apertures therein, said clear apertures being located to enable light from a source external of said cap to be incident upon the optical structures.5. The packaged device as claimed in claim 1, wherein said optical structures are a plurality of optical sensors.6. The packaged device as claimed in claim 2, wherein said patterned metalization is a solid metal layer having clear apertures therein, said clear apertures being located to enable light to be incident upon the optical structures, said apertures being located to enable light to be reflected from the optical mirrors to pass through said cap.7. The packaged device as claimed in claim 3, wherein said patterned metalization is a metal layer having clear apertures therein, said clear apertures being located to enable light from a source external of said cap to be incident upon the optical structures.8. The packaged device as claimed in claim 3, wherein said optical structures are a plurality of optical sensors.9. The packaged device as claimed in claim 7, wherein said patterned metalization is a solid metal layer having clear apertures therein, said clear apertures being located to enable light to be incident upon the optical structures, said apertures being located to enable light to be reflected from the optical mirrors to pass through said cap.10. The packaged device as claimed in claim 1, wherein said patterned metalization is electrically coupled to said contiguous solderable seal structure.11. The packaged device as claimed in claim 1, wherein said patterned metalization is a metallic mesh having clear apertures therein, said clear apertures being located to enable light from a source external of said cap to be incident upon the optical structures.12. The packaged device as claimed in claim 1, wherein said patterned metalization is a grid of thin metal wires having clear openings therebetween, said clear openings being located to enable light from a source external of said cap to be incident upon the optical structures.13. The packaged device as claimed in claim 1, wherein said patterned metalization prevents electrostatic discharge between said cap and said semiconductive substrate having said optical structures formed thereon.14. The packaged device as claimed in claim 1, said patterned metalization is positioned between said cap and said plurality of optical structures.15. The packaged device as claimed in claim 1, wherein said plurality of optical structures from optical micro electromechanical system devices.16. The packaged device as claimed in claim 1, wherein said patterned metalization is electrically coupled to a DC voltage source.17. 
The packaged device as claimed in claim 9, wherein said patterned metalization is electrically coupled to a DC voltage source.18. The packaged device as claimed in claim 11, wherein said patterned metalization is electrically coupled to a DC voltage source.19. The packaged device as claimed in claim 12, wherein said patterned metalization is electrically coupled to a DC voltage source.20. The packaged device as claimed in claim 13, wherein said patterned metalization is electrically coupled to a DC voltage source.21. A metalized glass lid for an optical die, comprising:a glass portion; and a metalization adjacent to said glass portion; said metalization having clear apertures; said metalization being connected to a DC voltage. 22. The metalized glass lid for an optical die as claimed in claim 21, wherein the optical die is in a lidded package.23. The metalized glass lid for an optical die as claimed in claim 21, wherein the metalized glass lid is soldered directly to a package containing the optical die.24. An optical device, comprising:a plurality of optical structures formed on a semiconductive substrate, said plurality of optical structures forming an active area; and a cap formed over said plurality of optical structures; said cap having, formed thereon, patterned metalization, said patterned metalization being located over said active area. 25. The optical device as claimed in claim 24, wherein said optical structures are a plurality of moveable optical mirrors.26. The optical device as claimed in claim 24, wherein said cap is a glass lid.27. The optical device as claimed in claim 24, wherein said patterned metalization is a solid metal layer having clear apertures therein, said clear apertures being located to enable light from a source external of said cap to be incident upon the optical structures.28. The optical device as claimed in claim 24, wherein said patterned metalization is a metallic mesh having clear apertures therein, said clear apertures being located to enable light from a source external of said cap to be incident upon the optical structures.29. The optical device as claimed in claim 24, wherein said patterned metalization is a grid of thin metal wires having clear openings therebetween, said clear openings being located to enable light from a source external of said cap to be incident upon the optical structures.30. The optical device as claimed in claim 24, wherein said patterned metalization prevents electrostatic discharge between said cap and said semiconductive substrate having said optical structures formed thereon.31. The optical device as claimed in claim 24, wherein said patterned metalization is electrically coupled to a DC voltage source. |
FIELD OF THE INVENTION
The present invention relates generally to providing an electrically shielded cover for a packaged optical device. More particularly, the present invention relates to providing an electrically shielded glass lid for an optical micro electromechanical system (MEMS) package.
BACKGROUND OF THE INVENTION
Conventionally, semiconductive wafer based devices such as optical micro electromechanical systems ("MEMS") are produced using a variety of methods of fabrication. Notwithstanding the process of fabrication, the produced MEMS requires a cap or lid to protect the MEMS from environmental insult, such as undesirable humidity. Such caps typically have been glass lids that provide both environmental protection and an optical window for the optical devices in a MEMS package.
However, conventional glass lids have one disadvantage in that conventional glass lids are typically dielectrics that can have electrostatic energy build up upon them. The built up electrostatic energy can result in an electrostatic discharge that can damage the optical MEMS devices. Moreover, the build up of electrostatic energy can produce local electrical fields that may distort the optical MEMS devices, especially if these devices are sensors.
In one solution to counter this disadvantage, a conventional glass cover would be located far enough from the optical devices so that the air gap would be large enough to prevent a discharge. Moreover, the large air gap would cause the locally generated electrical fields to be negligibly weak around the optical devices. Although these conventional devices would provide an optical device shielded from the environment and protected from electrostatic discharge and locally generated electrical fields, the package surrounding the optical device would be large and thus costly and not small enough to be used in many applications. Furthermore, the large air gap creates a long optical or focal length that could keep the optical device from being used in certain applications.
In another solution to counter this disadvantage, a conventional glass cover would be covered with a thin transparent film of transparent electrodes. The transparent electrodes provide an electrical path for the electrostatic energy to be drained off the glass cover. Although the transparent electrodes provide protection against electrostatic insult to the optical device, the transparent film and transparent electrodes are not absolutely transparent. Using such a solution for electrostatic protection causes undesirable light loss. Such light loss is unacceptable in such applications as fiber optic telecommunication systems. More specifically, telecommunication applications using optical communication are starved for light due to light loss along the fiber optic network; thus any light loss, even a small percentage of loss, can significantly impact the application in a negative manner.
Moreover, the use of transparent electrodes and thin films hinders the use of optical MEMS devices with certain wavelengths of light. For example, some transparent electrodes do not transmit well in the UV spectrum. This is significant when the optical MEMS devices may be used in conjunction with an EPROM that uses UV light to enable programming of the EPROM.
The present invention provides a glass lid for an optical MEMS package, while avoiding the disadvantages of the conventional devices. 
More specifically, the present invention prevents an electrostatic discharge and reduces or eliminates any undesirable locally generated electrical fields. The present invention also provides protection from the environment, such as humidity, while providing an optical window for the optical MEMS devices wherein the light loss is greatly reduced to an insignificant level or eliminated. Lastly, the packaging of the optical MEMS is greatly reduced by the present invention, thereby reducing costs and expanding the applications for the optical MEMS package.
SUMMARY OF THE INVENTION
One aspect of the present invention is a packaged device. The packaged device includes a package substrate; a plurality of optical structures formed on a semiconductive substrate and positioned on the package substrate, forming an active area; a contiguous solderable seal structure surrounding the plurality of optical structures; and a cap formed over the plurality of optical structures and upon the contiguous solderable seal structure. The cap has, formed thereon, patterned metalization. The patterned metalization is located over the active area.
Another aspect of the present invention is a metalized glass lid for an optical die. The metalized glass lid includes a glass portion and a metalization adjacent to the glass portion. The metalization has clear apertures and is connected to a DC voltage.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating a preferred embodiment and are not to be construed as limiting the present invention, wherein:
FIG. 1 is a graphical representation of an optical MEMS package according to the present invention;
FIG. 2 is a graphical representation of patterned metalization upon a glass lid according to one embodiment of the present invention;
FIG. 3 is a graphical representation of patterned metalization upon a glass lid according to another embodiment of the present invention;
FIG. 4 is a graphical representation of optical MEMS devices interacting with a patterned metalization formed upon a glass lid according to the present invention; and
FIG. 5 is a graphical representation of an optical MEMS package according to another embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
As noted above, the present invention is directed to an optical MEMS package that includes a glass lid or cover that provides both environmental and electrostatic insult protection. FIG. 1 illustrates an embodiment according to the concepts of the present invention.
As illustrated in FIG. 1, an optical MEMS package 1 includes a package substrate 40 which can provide a base or foundation for an optical MEMS die 65. The optical MEMS die 65 is coupled to the package substrate 40 through wire bonds 45. Upon the package substrate 40, deposited solder preforms 50 are formed to receive deposited preforms 20 located on a glass lid or cover 10 to provide a soldered seal around the optical devices on the optical MEMS die 65 within the optical MEMS package 1.
The optical MEMS devices are represented by references 60 and 70. In this illustration, the optical MEMS devices are mirrors that are able to rotate to various positions based upon received control signals. The optical MEMS devices can also be sensors or any other optical MEMS devices.
1, the mirror can be placed in one of a plurality of positions as represented by reference 60, or it could be placed in another of a plurality of positions as represented by reference 70. The glass lid or cover 10 includes thereon a patterned metalization layer 30. The patterned metalization layer 30 may be located on either side of the glass lid or cover 10. In a preferred embodiment, the patterned metalization layer 30 is located within the hermetically sealed environment to protect it from wear and damage. Moreover, the patterned metalization layer 30 can be electrically coupled to a DC voltage source or electrical ground through the deposited solder preform 20 or the deposited solder preform 50. The DC voltage source provides the electrostatic protection for the optical MEMS package and devices therein. In an optical MEMS package, as illustrated in FIG. 4, light 110 enters the package and is incident upon, in this example, a plurality of mirrors angled at various positions depending upon the control signals received by the individual MEMS devices. For example, as shown in FIG. 4, an individual MEMS device, such as a mirror, may be positioned in one state as illustrated by mirror position 140, or the same mirror can be positioned at another of a plurality of positions as illustrated by mirror positions 150 or 160. The light 110 passes by a portion of a patterned metalization layer 100 before encountering the individual MEMS device. The reflected light 120, from mirror position 140, passes by the patterned metalization layer 100 before exiting the optical MEMS device package. As noted above, the mirrors can also be positioned differently. For example, a mirror can be positioned as illustrated in FIG. 4 by mirror position 150 or by mirror position 160. Light 110 enters the package and is incident upon either of these mirror positions 150 or 160. The light 110 passes by a portion of a patterned metalization layer 100 before encountering the individual MEMS device. The reflected light 130, from mirror position 150, or the reflected light 170, from mirror position 160, passes by the patterned metalization layer 100 before exiting the optical MEMS device package. The patterned metalization may be realized by a metal layer formed over a glass lid. An example of such a metal layer is illustrated in FIG. 2. In FIG. 2, the metal layer is a solid metal layer 30 having apertures 80 formed therein. The apertures 80 are aligned so as to allow the optical MEMS device to interact with the light being received by the optical MEMS package. The apertures 80 of FIG. 2 allow the light to pass therethrough without interference and without light loss, while preventing electrostatic discharge between the glass lid and the optical MEMS devices. The apertures 80 of FIG. 2 may vary in size depending upon the application of the optical MEMS package. The patterned metalization may also be realized by a thin metal mesh formed over a glass lid. An example of such a thin metal mesh is illustrated in FIG. 3. In FIG. 3, the metal layer is a plurality of thin wires 90 in a grid-like pattern. The pattern provides openings 85 between the grid of thin wires 90. The wires 90 can be very thin metal conductors, thus allowing the optical MEMS package to realize a very dense population of optical MEMS devices because the light is not occluded by the grid structure. The wires 90 are aligned so as to allow the optical MEMS device to interact with the light being received by the optical MEMS package. The wires 90 of FIG. 
3 allow the light to pass therethrough without interference and without light loss, while preventing electrostatic discharge between the glass lid and the optical MEMS devices. The openings 85 of FIG. 3 may vary in size depending upon the application of the optical MEMS package. FIG. 5 illustrates another embodiment according to the concepts of the present invention. As illustrated in FIG. 5, an optical MEMS die 11 includes a semiconductive substrate 400 that provides a base or foundation for the optical devices. Upon the semiconductive substrate 400, deposited solder preforms 50 are placed to receive deposited preforms 20 located on a glass lid or cover 10 to provide a soldered seal around the optical devices formed on the optical MEMS die 11. The optical MEMS devices are represented by references 60 and 70. In this illustration, the optical MEMS devices are mirrors that are able to rotate to various positions based upon received control signals. The optical MEMS devices can also be sensors or any other optical MEMS device. In FIG. 5, the mirror can be placed in one of a plurality of positions as represented by reference 60, or it could be placed in another of a plurality of positions as represented by reference 70. The glass lid or cover 10 includes thereon a patterned metalization layer 30. The patterned metalization layer 30 may be located on either side of the glass lid or cover 10. In a preferred embodiment, the patterned metalization layer 30 is located within the hermetically sealed environment to protect it from wear and damage. Moreover, the patterned metalization layer 30 can be electrically coupled to a DC voltage source or electrical ground through the deposited solder preform 20 or the deposited solder preform 50. The DC voltage source provides the electrostatic protection for the optical MEMS die 11 and devices therein. As noted above, the present invention is directed to a packaged device having a package substrate; a plurality of optical structures formed on a semiconductive substrate and positioned on the package substrate, forming an active area; a contiguous solderable seal structure surrounding the plurality of optical structures; and a cap formed over the plurality of optical structures and upon the contiguous solderable seal structure. The cap has, formed thereon, patterned metalization. The patterned metalization is located over the active area. The optical structures may be a plurality of moveable optical mirrors. Moreover, the cap is a glass lid. In one embodiment, the patterned metalization is a solid metal layer having clear apertures therein, the clear apertures being located to enable light from a source external of the cap to be incident upon the optical structures. The apertures are also located to enable light to be reflected from the optical mirrors to pass through the cap. In another embodiment, the optical structures are a plurality of optical sensors. In a further embodiment, the plurality of optical structures form optical micro electromechanical system devices. In a preferred embodiment, the patterned metalization is electrically coupled to a DC voltage source through the contiguous solderable seal structure or coupled directly thereto. The patterned metalization may also be a metallic mesh or a grid of thin metal wires having clear apertures therein or openings therebetween. The clear apertures or openings are located to enable light from a source external of the cap to be incident upon the optical structures. 
The apertures or openings are also located to enable light to be reflected from the optical mirrors to pass through the cap. The patterned metalization prevents electrostatic discharge between the cap and the semiconductive substrate having the optical devices formed thereon. The patterned metalization can also be positioned between the cap and the plurality of optical structures. In another embodiment of the present invention, a metalized glass lid for an optical die includes a glass portion and a metalization adjacent to the glass portion. The metalization has clear apertures and is coupled to a DC voltage. In the preferred embodiment, the optical die is in a lidded package, but the lid can be soldered directly to the optical die. While various examples and embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that the spirit and scope of the present invention are not limited to the specific description and drawings herein, but extend to various modifications and changes all as set forth in the following claims. |
A pad with reduced capacitance loading for a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) bit cell array is provided. The pad includes a plurality of hollow-shaped lower metal layers and a planar top metal layer formed on an uppermost layer of the plurality of hollow-shaped lower metal layers. |
2. The low loading pad of claim 1, further comprising: a via interconnect connecting two of the plurality of hollow-shaped lower metal layers, wherein the via interconnect is disposed along a perimeter of the pad. 3. The low loading pad of claim 1, further comprising: a via interconnect connecting the uppermost layer of the plurality of hollow-shaped lower metal layers and the top metal layer. 4. The low loading pad of claim 1, further comprising: a plurality of via interconnects connecting two of the plurality of hollow-shaped lower metal layers, wherein the plurality of via interconnects are disposed around a perimeter of the pad. 5. The low loading pad of claim 1, further comprising: a plurality of via interconnects connecting the uppermost layer of the plurality of hollow-shaped lower metal layers and the top metal layer. 6. The low loading pad of claim 1, further comprising: an aluminum layer formed over the top metal layer. 7. The low loading pad of claim 1, further comprising: an aluminum layer formed over the top metal layer, wherein the top metal layer is a solid layer. 8. The low loading pad of claim 1, wherein a capacitance of the plurality of hollow-shaped lower metal layers is less than a capacitance of the top metal layer. 9. The low loading pad of claim 1, wherein a perimeter of the plurality of hollow-shaped lower metal layers substantially corresponds to a perimeter of the top metal layer. 10. The low loading pad of claim 1, wherein a perimeter of the aluminum layer substantially corresponds to a perimeter of the top metal layer. 11. A low loading pad for a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) bit cell, the low loading pad comprising: a plurality of lower metal layers; and a planar top metal layer formed on an uppermost layer of the plurality of lower metal layers, wherein one of the plurality of lower metal layers is a hollow-shaped metal layer. 12. The low loading pad of claim 11, wherein each of the plurality of lower metal layers is a hollow-shaped metal layer. 13. A Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) bit cell comprising: a low loading pad, wherein the low loading pad includes: a plurality of lower metal layers; and a planar top metal layer formed on an uppermost layer of the plurality of lower metal layers, wherein one of the plurality of lower metal layers is a hollow-shaped metal layer. 14. The STT-MRAM bit cell of claim 13, wherein each of the plurality of lower metal layers is a hollow-shaped metal layer. 15. The STT-MRAM bit cell of claim 13, wherein the low loading pad further includes: an aluminum layer formed over the planar top metal layer. 16. The STT-MRAM bit cell of claim 13, wherein the low loading pad further includes: an aluminum layer formed over the planar top metal layer, wherein the planar top metal layer is a solid layer. 17. The STT-MRAM bit cell of claim 13, wherein a capacitance of the plurality of lower metal layers is less than a capacitance of the planar top metal layer. 18. The STT-MRAM bit cell of claim 13, wherein a perimeter of the plurality of lower metal layers substantially corresponds to a perimeter of the planar top metal layer. 19. The STT-MRAM bit cell of claim 15, wherein a perimeter of the aluminum layer substantially corresponds to a perimeter of the planar top metal layer. 20. The STT-MRAM bit cell of claim 13, wherein the low loading pad further includes: a via interconnect connecting two of the plurality of lower metal layers, wherein the via interconnect is disposed along a perimeter of the pad. 
21. The STT-MRAM bit cell of claim 13, wherein the low loading pad further includes: a via interconnect connecting the uppermost layer of the plurality of lower metal layers and the planar top metal layer. 22. The STT-MRAM bit cell of claim 13, wherein the low loading pad further includes: a plurality of via interconnects connecting two of the plurality of lower metal layers, wherein the plurality of via interconnects are disposed around a perimeter of the pad. 23. The STT-MRAM bit cell of claim 13, wherein the low loading pad further includes: a plurality of via interconnects connecting the uppermost layer of the plurality of lower metal layers and the planar top metal layer. 24. A method of forming a low loading pad for a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) bit cell, the method comprising: forming a plurality of lower metal layers; and forming a planar top metal layer on an uppermost layer of the plurality of lower metal layers, wherein one of the plurality of lower metal layers is a hollow-shaped metal layer. |
PAD DESIGN FOR STT-MRAM Field of Disclosure [0001] Exemplary embodiments of the invention are directed to structural designs of low loading pads for Magnetoresistive Random Access Memory (MRAM) bit cells. More particularly, embodiments of the invention are related to structural designs of low loading pads for Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) bit cells. Background [0002] Magnetoresistive Random Access Memory (MRAM) is a non-volatile memory technology that uses magnetic elements. For example, Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) uses electrons that become spin-polarized as the electrons pass through a thin film (spin filter). STT-MRAM is also known as Spin Transfer Torque RAM (STT-RAM), Spin Torque Transfer Magnetization Switching RAM (Spin-RAM), and Spin Momentum Transfer (SMT-RAM). [0003] Referring to Fig. 1, a diagram of a conventional STT-MRAM cell 100 is illustrated. The STT-MRAM bit cell 100 includes magnetic tunnel junction (MTJ) storage element 105, transistor 110, bit line 120 and word line 130. The MTJ storage element is formed, for example, from a pinned layer and a free layer, each of which can hold a magnetic field, separated by an insulating (tunnel barrier) layer as illustrated in Fig. 1. The STT-MRAM bit cell 100 also includes a source line 140, sense amplifier 150, read / write circuitry 160 and bit line reference 170. Those skilled in the art will appreciate that the operation and construction of the memory cell 100 are known in the art. Additional details are provided, for example, in M. Hosomi, et al., A Novel Nonvolatile Memory with Spin Transfer Torque Magnetoresistive Magnetization Switching: Spin-RAM, proceedings of IEDM conference (2005), which is incorporated herein by reference in its entirety. [0004] Conventionally, a pad is used to connect, for example, the source line 140 of the STT-MRAM cell 100 to the lower portion of the transistor 110, or to connect the transistor 110 to the word lines 130, etc. Conventional pad designs use large metal grid layers (arrays) such as slotted designs in which alternating layers run perpendicular to each other, or large metal plates (e.g., full metal plates) which cover the entire pad area. The conventional pad designs typically include a large amount of metal, which leads to large capacitance from the probing pads. Conventional pads having such large amounts of parasitic capacitance can lead to signal distortion and/or to signal extinguishing, particularly for short pulse signals or high frequency signals. SUMMARY [0005] Exemplary embodiments of the invention are directed to structural designs of low loading pads for Magnetoresistive Random Access Memory (MRAM) bit cells. More particularly, embodiments of the invention are related to structural designs of low loading pads for Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) bit cells. [0006] Embodiments of the present invention are directed to pad designs with reduced parasitic capacitance characteristics. For example, an embodiment of a pad design reduces the capacitance from the metal layers of the pad by removing a portion (e.g., a majority) of one or more of the lower metal layers (e.g., metal layers M1 - M6) to reduce the effective area of one or more of the lower metal layers of the pad, for example, of a STT-MRAM bit cell. 
More particularly, an embodiment of a pad design reduces the capacitance from the metal layers of the pad by removing a center or central portion of one or more of the lower metal layers (e.g., metal layers M1 - M6) to reduce the effective area of one or more of the lower metal layers of the pad, for example, of a STT-MRAM bit cell. By maintaining the edge or perimeter portion of the lower metal layers (i.e., by forming hollow-shaped lower metal layers), the novel pad design permits wire bonding at any location around the perimeter of the pad. [0007] Accordingly, at least one embodiment can reduce the effective areas of the lower metal layers so that the capacitance from the pads may be reduced while also reducing the resistance of the pad. The exemplary embodiment can reduce or eliminate signal distortion and/or the occurrence of signal extinguishing, particularly for short pulse signals or high frequency signals. [0008] For example, an exemplary embodiment is directed to a low loading pad for a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) bit cell. The low loading pad includes a plurality of hollow-shaped lower metal layers, and a top metal layer formed on an uppermost layer of the plurality of hollow-shaped lower metal layers. [0009] In another embodiment, a low loading pad for a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) bit cell includes a plurality of lower metal layers, and a planar top metal layer formed on an uppermost layer of the plurality of lower metal layers. One of the plurality of lower metal layers is a hollow-shaped metal layer. [0010] In yet another embodiment, a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) bit cell includes a low loading pad. The low loading pad includes a plurality of lower metal layers, and a planar top metal layer formed on an uppermost layer of the plurality of lower metal layers. One of the plurality of lower metal layers is a hollow-shaped metal layer. [0011] Another exemplary embodiment is directed to a method of forming a low loading pad for a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) bit cell. The method includes forming a plurality of lower metal layers, and forming a planar top metal layer on an uppermost layer of the plurality of lower metal layers. One of the plurality of lower metal layers is a hollow-shaped metal layer. BRIEF DESCRIPTION OF THE DRAWINGS [0012] The accompanying drawings are presented to aid in the description of embodiments of the invention and are provided solely for illustration of the embodiments and not limitation thereof. [0013] FIG. 1 illustrates a conventional Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) cell. [0014] FIG. 2 is a side view of a pad according to an embodiment. [0015] FIG. 3 is a top down view of a hollow-shaped lower metal layer of a pad according to an embodiment. [0016] FIG. 4 is a top down view of a top metal layer and a hollow-shaped lower metal layer of a pad according to an embodiment. [0017] FIG. 5 is an exploded, perspective view of a pad according to an embodiment. [0018] FIG. 6 is another exploded, perspective view of a pad according to an embodiment. [0019] FIG. 7 is a screen view of a top down view of a hollow-shaped lower metal layer of a pad according to an embodiment. [0020] FIG. 
8 is a screen view of a top down view of a top metal layer and a hollow-shaped lower metal layer of a pad according to an embodiment. DETAILED DESCRIPTION [0021] Aspects of the invention are disclosed in the following description and related drawings directed to specific embodiments of the invention. Alternate embodiments may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention. [0022] The words "exemplary" and/or "example" are used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" and/or "example" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiments of the invention" does not require that all embodiments of the invention include the discussed feature, advantage or mode of operation. Further, certain terminology, such as "on" (e.g., as in mounted 'on') and "substantially" are used in a broad manner herein. For example, the term "on" is intended to include, for example, an element or layer that is directly on another element or layer, but could alternatively include intervening layers between the elements/layers. [0023] With reference to FIGS. 2-8, exemplary embodiments of structural designs of low loading pads for Magnetoresistive Random Access Memory (MRAM) bit cells, and more particularly, of low loading pads for Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) bit cells, will now be described. [0024] With reference to FIG. 2, an embodiment of a pad 100 can include a plurality of lower metal layers (e.g., metal layers M1 to M6) and a top metal layer (e.g., metal layer M7). In another embodiment, an additional metal layer, such as an aluminum (Al) layer 30, can be formed on the top metal layer 20. In this embodiment, the top metal layer 20 provides connectivity to the aluminum layer 30. [0025] In an embodiment of the invention, the capacitance of the pad 100 can be reduced by removing or etching a portion (e.g., a majority) of one or more of the lower metal layers 10 (e.g., one or more of metal layers M1 to M6) to reduce the effective area of one or more of the lower metal layers 10 of the pad 100. [0026] FIG. 3 shows an embodiment of a hollow-shaped lower metal layer 10 of a pad 100. The hollow-shaped lower metal layer 10 can form one or more of the lower metal layers M1 to M6 of a pad 100, for example, of a STT-MRAM bit cell. One of ordinary skill in the art will recognize that the lower metal layer 10 can be formed to be hollow-shaped according to various conventional techniques. The embodiments are not limited to etching or removing the lower metal layers 10 to form the hollow-shaped layer. [0027] In FIG. 3, the lower metal layer 10 (e.g., one or more of metal layers M1 to M6) is exemplarily illustrated as being square shaped with a width X and a thickness t. For example, an exemplary lower metal layer 10 can be 90 µm x 90 µm, with a thickness t = µm. However, the width X and/or thickness t of the lower metal layer 10 can be made smaller or larger. For example, in designing a pad 100, the thickness t of one or more of the lower metal layers 10 can be selected to reduce the capacitance of the pad 100 while also reducing the resistance. Also, the pad can be rectangular-shaped (e.g., square-shaped). 
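A rough numerical sketch can illustrate why hollowing out a lower metal layer reduces the pad capacitance. This sketch is not part of the original disclosure: it uses a simple parallel-plate approximation that ignores fringing fields, inter-layer via contributions and coupling between layers, and the 10 µm rim width is an assumed value chosen purely for illustration, since only the 90 µm x 90 µm footprint is given above. For one lower metal layer of metal area A separated from an adjacent conductor by a dielectric of thickness d and relative permittivity \varepsilon_r:

C \approx \frac{\varepsilon_0 \varepsilon_r A}{d}

A_{\text{solid}} = (90\,\mu\text{m})^2 = 8100\,\mu\text{m}^2, \qquad A_{\text{hollow}} = (90\,\mu\text{m})^2 - (70\,\mu\text{m})^2 = 3200\,\mu\text{m}^2

\frac{C_{\text{hollow}}}{C_{\text{solid}}} \approx \frac{3200}{8100} \approx 0.40

Under these assumptions, retaining only a 10 µm-wide perimeter rim would cut that layer's area-proportional capacitance by roughly 60%, which is consistent with the stated goal of reducing the effective area of the lower metal layers while the remaining rim still supports wire bonding and edge via interconnects.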
[0028] FIG. 4 is a top down view of a top metal layer 20 formed over a hollow-shaped lower metal layer 10 of a pad 100 according to an embodiment. [0029] FIG. 5 is an exploded, perspective view of an embodiment of a pad 100 including a top metal layer 20 (e.g., metal layer M7) formed over the lower metal layers 10 (e.g., M1 to M6). The top metal layer 20 is a plate metal, for example, to facilitate wire bonding to the pad 100. Thus, the parasitic capacitance of the pad 100 can be reduced by removing a portion of one or more of the lower metal layers 10 (e.g., metal layers M1 - M6) up to the next-to-the-top (e.g., second from the top) metal layer. For example, in the embodiment shown in FIG. 5, a center or central portion of each of the lower metal layers 10 (e.g., metal layers M1 - M6) is removed to reduce the effective area of the lower metal layers 10 of the pad 100. By maintaining the edge or hollow-shaped portion of the lower metal layers 10, the novel pad 100 permits wire bonding at any location around the perimeter of the pad 100. [0030] One of ordinary skill in the art will recognize that less than all of the lower metal layers 10 can have portions removed. Also, the amount of metal removed from each of the lower metal layers 10 can be different from layer to layer, or removed from different locations from layer to layer. [0031] FIG. 6 illustrates an embodiment of a pad 100 including a top metal layer 20 (e.g., metal layer M7) and an aluminum (Al) layer 30, which are formed over the lower metal layers 10 (e.g., M1 to M6). The top metal layer 20 provides connectivity to the 
In designing the pad 100, the thickness t of one or more of the lower metal layers 10 can be selected to reduce the capacitance of the pad 100 while also minimizing the resistance. The exemplary embodiment can reduce or eliminate signal distortion and/or the occurrence of signal extinguishing, particularly for short pulse signals or high frequency signals. [0036] While the foregoing disclosure shows illustrative embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the embodiments of the invention described herein need not be performed in any particular CA 02723830 2010-11-08 WO 2009/142931 PCT/US2009/043346 071328 7 order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. |
Embodiments disclosed herein include electronic packages and methods of forming such electronic packages. In an embodiment, the electronic package comprises a base substrate. The base substrate may have a plurality of through substrate vias. In an embodiment, a first die is over the base substrate. In an embodiment, a first cavity is disposed into the base substrate. In an embodiment, the first cavity is at least partially within a footprint of the first die. In an embodiment, a first component is in the first cavity. |
An electronic package, comprising:a base substrate, the base substrate having a plurality of through substrate vias;a first die over the base substrate;a first cavity into the base substrate, wherein the first cavity is at least partially within a footprint of the first die; anda first component in the first cavity.The electronic package of claim 1, wherein the first component is a second die.The electronic package of claim 2, wherein the second die comprises through substrate vias.The electronic package of claim 2 or 3, wherein an active surface of the second die faces an active surface of the first die.The electronic package of claim 2 or 3, wherein an active surface of the second die faces away from an active surface of the first die.The electronic package of claim 1, wherein the first component is a passive electrical component.The electronic package of claim 1, wherein the first component is a thermoelectric cooling (TEC) module.The electronic package of claim 1, 2, 3, 4, 5, 6 or 7, wherein the first cavity is entirely within the footprint of the first die.The electronic package of claim 1, 6, 7 or 8, further comprising:a second die over the base substrate.The electronic package of claim 9, wherein the first cavity is at least partially within a footprint of the second die.A method of forming an electronic package, comprising:forming through substrate vias (TSVs) partially through a base substrate;thinning the base substrate, wherein the TSVs are not exposed;attaching a carrier to the base substrate;forming a cavity into the base substrate, wherein the cavity exposes a plurality of pads;attaching a component to the plurality of pads;embedding the component within a mold layer;planarizing the base substrate, wherein the planarizing exposes the TSVs;removing the carrier; andattaching a die to the base substrate.The method of claim 11, wherein an active surface of the component faces an active surface of the die.The method of claim 11 or 12, wherein the cavity is at least partially within a footprint of the die.The method of claim 11, 12 or 13, wherein the component is a second die.The method of claim 14, wherein the second die comprises through substrate vias. |
TECHNICAL FIELD Embodiments of the present disclosure relate to electronic packaging, and more particularly, to multi-chip packaging architectures with one or more dies attached to a base substrate and one or more components embedded in cavities in the base substrate. BACKGROUND The demand for increased performance and reduced form factor is driving packaging architectures towards multi-chip integration architectures. Multi-chip integration allows for dies manufactured at different process nodes to be implemented into a single electronic package. However, current multi-chip architectures result in larger form factors that are not suitable for some use cases, or are not otherwise desirable to end users. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1A is a cross-sectional illustration of an electronic package with a base substrate having a first die and a first component embedded in a cavity in the base substrate below the first die, in accordance with an embodiment. Figure 1B is a cross-sectional illustration of an electronic package with a base substrate having a first die, a second die, and a component embedded in a cavity in the base substrate below the first die and the second die, in accordance with an embodiment. Figure 1C is a cross-sectional illustration of an electronic package with a base substrate having a first die, a second die, and a component embedded in a cavity in the base substrate below the first die, in accordance with an embodiment. Figure 1D is a cross-sectional illustration of an electronic package with a base substrate having a first die, a second die, a first component embedded in a first cavity in the base substrate, and a second component embedded in a second cavity in the base substrate, in accordance with an embodiment. Figure 1E is a cross-sectional illustration of an electronic package with a base substrate having a first die, a second die, a first component with a face-to-face configuration with the first die and the second die, and a second component with a back-to-face configuration with the first die and the second die, in accordance with an embodiment. Figure 1F is a cross-sectional illustration of an electronic package with a base substrate having a first die, a second die, a first component without through substrate vias, and a second component with through substrate vias, in accordance with an embodiment. Figure 1G is a cross-sectional illustration of an electronic package with a base substrate having a first die, a second die, a first component, and a second component, in accordance with an embodiment. Figure 1H is a cross-sectional illustration of an electronic package with a base substrate that comprises a stack of dies, in accordance with an embodiment. Figure 1I is a plan view illustration of an electronic package that includes a plurality of bridges in a base substrate that connect a first die to a second die, in accordance with an embodiment. Figure 1J is a plan view illustration of an electronic package that includes a plurality of bridges in a base substrate that connect a first die to a second die, and the first die to a third die, in accordance with an embodiment. Figure 1K is a plan view illustration of an electronic package that includes a plurality of bridges in a base die that connect a first die to a second die, and a plurality of dies embedded in the base die below the first die and the second die, in accordance with an embodiment. Figure 2A is a cross-sectional illustration of a base substrate with through substrate vias (TSVs) into the base substrate, in 
accordance with an embodiment. Figure 2B is a cross-sectional illustration of the base substrate after the base substrate is thinned, in accordance with an embodiment. Figure 2C is a cross-sectional illustration of the base substrate after a carrier is attached, in accordance with an embodiment. Figure 2D is a cross-sectional illustration after a cavity is formed into the base substrate, in accordance with an embodiment. Figure 2E is a cross-sectional illustration after a component is attached to pads exposed by the cavity, in accordance with an embodiment. Figure 2F is a cross-sectional illustration after the component is embedded in a mold layer, in accordance with an embodiment. Figure 2G is a cross-sectional illustration after the base substrate is planarized to expose the TSVs, in accordance with an embodiment. Figure 2H is a cross-sectional illustration after package side bumps (PSBs) are attached to the TSVs, in accordance with an embodiment. Figure 2I is a cross-sectional illustration after the carrier is removed, in accordance with an embodiment. Figure 2J is a cross-sectional illustration after a die is attached to the base substrate and overmolded, in accordance with an embodiment. Figure 3A is a cross-sectional illustration of a base substrate without TSVs, in accordance with an embodiment. Figure 3B is a cross-sectional illustration after a carrier is attached to the base substrate, in accordance with an embodiment. Figure 3C is a cross-sectional illustration after TSVs are formed in the base substrate, in accordance with an embodiment. Figure 3D is a cross-sectional illustration after a cavity is formed into the base substrate, in accordance with an embodiment. Figure 4A is a cross-sectional illustration of a base substrate with TSVs that are still fully embedded, in accordance with an embodiment. Figure 4B is a cross-sectional illustration of a cavity formed into the base substrate to expose pads, in accordance with an embodiment. Figure 4C is a cross-sectional illustration of a component attached to the pads in the cavity, in accordance with an embodiment. Figure 4D is a cross-sectional illustration after the cavity is filled with a mold layer and the base substrate is planarized to expose the TSVs, in accordance with an embodiment. Figure 5A is a cross-sectional illustration of a base substrate without TSVs, in accordance with an embodiment. Figure 5B is a cross-sectional illustration after via openings are formed into the base substrate, in accordance with an embodiment. Figure 5C is a cross-sectional illustration after TSVs are disposed in the via openings, in accordance with an embodiment. Figure 5D is a cross-sectional illustration after a cavity is formed into the base substrate and a component is attached to pads in the cavity, in accordance with an embodiment. Figure 6 is a cross-sectional illustration of an electronic system that comprises a multi-chip package, in accordance with an embodiment. Figure 7 is a schematic of a computing device built in accordance with an embodiment. EMBODIMENTS OF THE PRESENT DISCLOSURE Described herein are multi-chip packaging architectures with one or more dies attached to a base substrate and one or more components embedded in cavities in the base substrate and methods of forming such electronic packages, in accordance with various embodiments. 
In the following description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that the present invention may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to one skilled in the art that the present invention may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations.Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the present invention, however, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.As noted above, the trends in electronic packaging architectures are driving towards the use of multi-chip architectures. However, form factors are not currently at desired levels. Accordingly, embodiments disclosed herein include multi-chip package architectures with improved form factor. Particularly, embodiments disclosed herein allow for homogenous or heterogeneous integrations over a base substrate. Furthermore, the base substrate may comprise one or more cavities that allow for additional components to be located below (and at least partially within the footprint of) the dies. Accordingly, the form factor is improved by reducing the overall footprint in the X-Y plane, as well as reducing the Z-height. Positioning the additional components within the footprint of the one or more dies also reduces the length of signal paths between dies and the additional components. As such, signal integrity is optimized.Referring now to Figure 1A , a cross-sectional illustration of an electronic package 100 is shown, in accordance with an embodiment. In an embodiment, the electronic package 100 may comprise a base substrate 105. The base substrate 105 may be a silicon substrate in some embodiments. The base substrate 105 may comprise signaling traces, pads, and the like (not shown) proximate to surface 106 of the base substrate. The surface 106 may be referred to herein as a redistribution layer (or layers), a back end of line (BEOL) stack, or the like. In an embodiment, the base substrate 105 is a passive substrate. That is, only passive components (e.g., pads, traces, vias, etc.) are fabricated on the base substrate 105. In other embodiments, the base substrate 105 is an active substrate. That is, active circuitry (e.g., transistors, etc.) may be fabricated on the base substrate.In an embodiment, a plurality of through substrate vias (TSVs) 107 (also referred to as through silicon vias when the base substrate is a silicon substrate) may pass through a thickness of the base substrate 105. TSVs 107 may provide electrical connections between surfaces of the base substrate 105. For example, package side bumps (PSBs) 114 may be electrically coupled to features in the surface 106 of the base substrate 105.In an embodiment, a die 130 may be attached to the base substrate 105. 
For example, first level interconnects (FLIs) 116 may electrically couple the die 130 to the surface 106 of the base substrate 105. In an embodiment, the die 130 may have an active surface 131 (i.e., the surface proximate to where active circuitry is fabricated). The active surface 131 may be oriented to face the surface 106 of the base substrate 105. In an embodiment, the die 130 is embedded in a mold layer 112. In some embodiments, a backside surface of the die 130 opposite from the active surface 131 may be exposed. In other embodiments, the backside surface of the die 130 is covered by the mold layer 112.In an embodiment, a cavity 115 is formed into the base substrate 105. The cavity 115 may pass through a thickness of the base substrate 105 and end at the surface 106 of the base substrate 105. In an embodiment, the cavity 115 may be at least partially within a footprint of the die 130. As used herein, "within a footprint" refers to being positioned within an outer perimeter of a given feature. For example, the cavity 115 is within the outer perimeter of the die 130 in Figure 1A .In an embodiment, a component 120 may be positioned in the cavity 115. The component 120 may be any of a variety of different component types, such as a die or die stack (e.g., a processor die, a memory die, a power die, a communication die, etc.), a passive component (e.g., a bridge, a capacitor, an inductor, etc.), a cooling module (e.g., a thermoelectric cooling (TEC) module), or the like. In embodiments where the component 120 is a die or a die stack, the component 120 may be fabricated at a first process node and the die 130 may be fabricated at a second process node. In some embodiments, the first process node may be different than the second process node.In an embodiment, the component 120 may have an active surface 121. The active surface 121 may be electrically coupled to the backside surface with one or more TSVs 127. In an embodiment, the active surface 121 may be oriented in a face-to-face configuration with the die 130. That is, the active surface 121 of the component 120 may face the active surface 131 of the die 130. In an embodiment, the component 120 may be coupled to the surface 106 of the base substrate 105 with FLIs 118.In an embodiment, the component 120 may be embedded in a mold layer 126. The mold layer 126 may substantially fill the remaining portion of the cavity 115 that is not occupied by the component 120, the FLIs 118, and any underfill material (not shown) surrounding the FLIs 118. In an embodiment, a backside surface of the component 120 may be exposed (i.e., not covered by the mold layer 126). In other embodiments, the mold layer 126 may cover the backside surface of the component 120.Referring now to Figure 1B , a cross-sectional illustration of an electronic package 100 with a first die 130 and a second die 140 is shown, in accordance with an embodiment. In an embodiment, the electronic package 100 in Figure 1B may be substantially similar to the electronic package 100 in Figure 1A , with the exception that a second die is added and the position of the cavity 115 is moved.As shown in Figure 1B , a second die 140 may be positioned over the surface 106 of the base substrate 105. That is, the second die 140 may be laterally adjacent to the first die 130. In an embodiment, the second die 140 has an active surface 141 that faces the surface 106 of the base substrate 105. In an embodiment, the first die 130 is different than the second die 140. 
For example, the first die 130 may be fabricated at a first process node and the second die 140 may be fabricated at a second (different) process node. In other embodiments, the first die 130 may be substantially similar to the second die 140. For example, the first die 130 and the second die 140 may be processor dies that are electrically coupled together by a bridge (or any other interconnect) in order to function as a monolithic die.In an embodiment, the cavity 115 may be positioned at least partially within a footprint of the first die 130 and at least partially within a footprint of the second die 140. That is, the cavity 115 may span a gap separating the first die 130 from the second die 140. Such an embodiment may be particularly beneficial when the component 120 is coupled to both the first die 130 and the second die 140. For example, the component 120 may be a bridge (e.g., an embedded multi-die interconnect bridge (EMIB)) that electrically couples the first die 130 to the second die 140. Alternative embodiments may include a component 120 that is a memory device (or any other component) that is accessible by both the first die 130 and the second die 140.Referring now to Figure 1C , a cross-sectional illustration of an electronic package 100 with a first die 130 and a second die 140 is shown, in accordance with an additional embodiment. The electronic package 100 in Figure 1C is substantially similar to the electronic package 100 in Figure 1B , with the exception of the location of the cavity 115. As shown, the cavity 115 is entirely within a footprint of the first die 130. Such an embodiment may be particularly beneficial when the component 120 is only accessed by a single one of the dies (e.g., the first die 130).In an embodiment, the electronic package 100 in Figure 1C may also differ from the electronic package 100 in Figure 1B in that traces 152 are fabricated in the surface 106 of the base substrate 105 to provide a connection between the first die 130 and the second die 140. In embodiments where the base substrate 105 is a silicon substrate, traces with fine line spacing (FLS) may be patterned directly onto the base substrate 105 and there may not be a need for a dedicated bridge die to couple the first die 130 to the second die 140.Referring now to Figure 1D , a cross-sectional illustration of an electronic package 100 with a first component 120 and a second component 160 is shown, in accordance with an embodiment. The electronic package 100 in Figure 1D may be substantially similar to the electronic package 100 in Figure 1B , with the exception that a second cavity 115B and a second component 160 are positioned in the base substrate 105. In an embodiment, the first cavity 115A and the first component 120 may be at least partially within a footprint of the first die 130 and at least partially within a footprint of the second die 140, and the second cavity 115B and the second component 160 may be entirely within a footprint of the first die 130.In an embodiment, the second component 160 may be any of a variety of different component types, such as a die or die stack (e.g., a processor die, a memory die, a power die, a communication die, etc.), a passive component (e.g., a bridge, a capacitor, an inductor, etc.), a cooling module (e.g., a TEC module), or the like. In embodiments where the second component 160 is a die or a die stack, the second component 160 may be fabricated at a first process node and the die 130 may be fabricated at a second process node. 
In some embodiments, the first process node may be different than the second process node. In an embodiment, the first component 120 and the second component 160 may be the same component. In other embodiments, the first component 120 and the second component 160 may be different components. In an embodiment, the second component 160 may comprise an active surface 161. The active surface 161 may be oriented in a face-to-face configuration with the first die 130. The second component 160 may be electrically coupled to the surface 106 of the base substrate 105 with FLIs 118. In an embodiment, TSVs 167 may pass through the second component 160 to provide electrical connections from a backside surface of the second component 160 to the active surface 161 of the second component 160. In an embodiment, the second component 160 may be embedded in a mold layer 166. As shown in Figure 1D, the mold layer 166 does not cover the backside surface of the second component 160. Other embodiments may include the mold layer 166 covering the backside surface of the second component 160. Referring now to Figure 1E, a cross-sectional illustration of an electronic package 100 with a first component 120 and a second component 160 is shown, in accordance with an embodiment. The electronic package 100 in Figure 1E is substantially similar to the electronic package 100 in Figure 1D, with the exception that the second component 160 is oriented in a different direction. As shown, the second component 160 is oriented with the active surface 161 facing away from the active surface 131 of the first die 130 (i.e., a face-to-back configuration). As such, the first component 120 and the second component 160 are oriented in opposite directions. However, it is to be appreciated that in some embodiments, both the first component 120 and the second component 160 may be oriented in a face-to-back configuration with the first die 130 and the second die 140. Referring now to Figure 1F, a cross-sectional illustration of an electronic package 100 with a first component 120 and a second component 160 is shown, in accordance with an embodiment. The electronic package 100 in Figure 1F may be substantially similar to the electronic package 100 in Figure 1D, with the exception that the first component 120 does not include TSVs. In an embodiment, dummy PSBs 114' may be positioned on the backside surface of the first component 120 in order to provide structural robustness. "Dummy PSBs" 114' refer to PSBs that are not electrically coupled to other circuitry of the electronic package 100. While the second component 160 is shown as having TSVs 167, it is to be appreciated that in some embodiments, the second component 160 may also omit TSVs 167. Referring now to Figure 1G, a cross-sectional illustration of an electronic package 100 with a first component 120 and a second component 160 is shown, in accordance with an embodiment. The electronic package 100 in Figure 1G is substantially similar to the electronic package 100 in Figure 1F, with the exception that there are no dummy PSBs 114' below the first component 120. Additionally, the mold layer 126 completely embeds the first component 120 (i.e., the backside surface of the first component 120 is covered by the mold layer 126). Referring now to Figure 1H, a cross-sectional illustration of an electronic package 100 is shown, in accordance with an additional embodiment. 
In an embodiment, the electronic package 100 may be substantially similar to the electronic package 100 in Figure 1C, with the exception that a plurality of components 120A-C are included in the cavity 115. For example, the plurality of components 120A-C may comprise a stack of dies (e.g., a memory die stack). Referring now to Figure 1I, a plan view illustration of an electronic package 100 is shown, in accordance with an embodiment. In an embodiment, the electronic package 100 may comprise a first die 130 and a second die 140 placed over a base substrate 105. The first die 130 may be electrically coupled to the second die 140 by a plurality of components 120 (e.g., bridges). In an embodiment, the plurality of components 120 may be disposed in a single cavity 115. In other embodiments, each component 120 may be disposed in separate cavities 115. As shown in Figure 1I, additional components 120 and cavities 115 may be formed entirely under one of the first die 130 and/or the second die 140. Referring now to Figure 1J, a plan view illustration of an electronic package 100 is shown, in accordance with an additional embodiment. In an embodiment, a first die 130, a second die 140A, and a third die 140B may be placed over the base substrate 105. In an embodiment, components 120 embedded in cavities 115 in the base substrate 105 may electrically couple the first die 130 to the second die 140A, and/or electrically couple the first die 130 to the third die 140B. Referring now to Figure 1K, a plan view illustration of an electronic package 100 is shown, in accordance with an additional embodiment. The electronic package 100 in Figure 1K may be substantially similar to the electronic package 100 in Figure 1I, with the exception that a cavity 115 below the first die houses a pair of components 120, and a pair of cavities 115 are positioned below the second die 140. Each of the cavities 115 may comprise one or more components 120. While Figures 1A-1G illustrate electronic packages 100 with one, two, or three dies and one or more components embedded in cavities in the base substrate, it is to be appreciated that embodiments are not limited to such configurations. For example, electronic packages may include a plurality of dies (e.g., two or more dies) and/or a plurality of components (e.g., two or more components). Furthermore, each cavity in the base substrate may house one or more components. Referring now to Figures 2A-2J, a series of cross-sectional illustrations depicting a process for fabricating an electronic package in accordance with an embodiment is shown. In Figures 2A-2J only a single cavity, component, and die are shown for simplicity. However, it is to be appreciated that additional cavities and components and/or dies may also be included in the electronic package using similar processing operations to those described. Referring now to Figure 2A, a cross-sectional illustration of a base substrate 205 is shown, in accordance with an embodiment. In an embodiment, the base substrate 205 may be a silicon substrate. The base substrate 205 may have a thickness T1. For example, the thickness T1 may be a standard wafer thickness (e.g., 800µm). In an embodiment, a surface 206 of the base substrate 205 may comprise conductive features (e.g., traces, pads, etc.). In some embodiments, the base substrate 205 is a passive substrate. Other embodiments include an active base substrate 205. For example, the base substrate 205 may comprise transistors or the like. 
In an embodiment, a plurality of TSVs 207 may be positioned in the base substrate 205. As shown in Figure 2A , the plurality of TSVs 207 may not extend entirely through the base substrate 205. The TSVs 207 may be omitted from regions where a cavity is desired. For example, there are no TSVs 207 in a central region of the base substrate 205 shown in Figure 2A .Referring now to Figure 2B , a cross-sectional illustration of the base substrate 205 after the base substrate is thinned is shown, in accordance with an embodiment. For example, the base substrate 205 may be thinned to have a thickness T2 that is approximately 100µm or less. The base substrate 205 may be thinned with a grinding or polishing process. As shown, thinned base substrate 205 may still have the TSVs 207 fully embedded. That is, the TSVs 207 do not pass completely through the base substrate 205 at this point.Referring now to Figure 2C , a cross-sectional illustration of the base substrate 205 after a carrier 280 is attached is shown, in accordance with an embodiment. In an embodiment, the carrier 280 may be secured to the surface 206 of the base substrate 205 by an adhesive film 282.Referring now to Figure 2D , a cross-sectional illustration of the base substrate 205 after a cavity 215 is formed is shown, in accordance with an embodiment. In an embodiment, the cavity 215 may be formed with an etching process that removes a portion of the base substrate 205. The etching process may be a wet or dry etching process that utilizes a photoresist (not shown) over the base substrate 205 in order to define the boundary of the cavity 215. The cavity 215 may extend through the base substrate 205 and end at the surface 206. In an embodiment, a plurality of pads 229 may be exposed by the cavity 215. The pads 229 may have been fabricated as part of the surface 206 prior to the formation of the cavity 215.Referring now to Figure 2E , a cross-sectional illustration of the base substrate 205 after a component 220 is mounted in the cavity 215 is shown, in accordance with an embodiment. The component 220 may be attached to the pads 229 exposed by the cavity 215 by FLIs 218. In an embodiment, the attachment may be a thermocompression bonding (TCB) attachment process. In some embodiments a flux (e.g., epoxy flux) may be used during the attachment process. The FLIs 218 may comprise solder that is reflown between pads. In other embodiments, the FLIs 218 may comprise a copper to copper attachment. After attachment of the component 220 to the pads 229, an underfill material 225 may be dispensed around the FLIs 218.In an embodiment, the component 220 may comprise any of a variety of different component types, such as a die or die stack (e.g., a processor die, a memory die, a power die, a communication die, etc.), a passive component (e.g., a bridge, a capacitor, an inductor, etc.), a cooling module (e.g., a TEC module), or the like. In an embodiment, the component 220 may comprise an active surface 221. The active surface 221 may be oriented to face the surface 206. However, in other embodiments, the active surface 221 may face away from surface 206 (e.g., similar to the component 160 shown in Figure 1E ). In some embodiments, the backside surface of the component 220 may be electrically coupled to the active surface 221 by one or more TSVs 227. In other embodiments, the TSVs 227 may be omitted (e.g., similar to the component 120 shown in Figure 1F ).The component 220 may sit completely in the cavity 215. 
The component 220 may sit completely in the cavity 215. That is, the depth of the cavity 215 may be greater than a combined thickness of the component 220 and the FLIs 218. Accordingly, a backside surface of the component 220 may be recessed below a backside surface of the base substrate 205.

Referring now to Figure 2F, a cross-sectional illustration of the base substrate 205 after the cavity 215 is filled with a mold layer 226 is shown, in accordance with an embodiment. The mold layer 226 may substantially fill the remainder of the cavity 215. In an embodiment, the mold layer 226 may be an epoxy or the like. In some embodiments, the mold layer 226 may also surround the FLIs 218, in which case the underfill material 225 may be omitted. The mold layer 226 may also embed the component 220. For example, the mold layer 226 may cover sidewalls and a backside surface of the component 220.

Referring now to Figure 2G, a cross-sectional illustration of the base substrate 205 after it has been planarized to expose the TSVs 207 and TSVs 227 is shown, in accordance with an embodiment. In an embodiment, the base substrate 205 may be planarized with a polishing process (e.g., chemical mechanical polishing (CMP) or the like). The polishing process may also recess the mold layer 226 to expose the backside surface of the component 220 and the TSVs 227 (when present).

Referring now to Figure 2H, a cross-sectional illustration of the base substrate 205 after PSBs 214 are disposed over the TSVs 207 and 227 is shown, in accordance with an embodiment. In an embodiment, the PSBs 214 may comprise a pad or bump (e.g., a copper bump) and/or a solder ball. In an embodiment, the PSBs 214 over the TSVs 207 may be substantially similar to the PSBs 214 over the TSVs 227 of the component 220.

Referring now to Figure 2I, a cross-sectional illustration after the carrier 280 is removed is shown, in accordance with an embodiment. In an embodiment, the carrier 280 may be removed by mechanically separating the carrier 280 from the base substrate 205. In an embodiment, any residual portion of the adhesive film 282 on the base substrate 205 may be cleaned with suitable cleaning processes.

Referring now to Figure 2J, a cross-sectional illustration after a die 230 is attached to the base substrate 205 is shown, in accordance with an embodiment. In an embodiment, the die 230 may be attached to the base substrate 205 with FLIs 216. For example, the attachment process may be a TCB process or the like. A mold layer 212 may then be formed over the die 230 with suitable processes (e.g., a molded underfill (MUF) process). In an embodiment, the mold layer 212 may cover sidewall surfaces of the die 230, and a backside surface of the die 230 may remain exposed. In other embodiments, the backside surface of the die 230 may be covered by the mold layer 212.

In an embodiment, the die 230 may have an active surface 231. The active surface 231 may be oriented to face the surface 206 of the base substrate 205. Accordingly, the die 230 may be referred to as having a face-to-face configuration with the base substrate 205 and with the component 220. In embodiments where the component 220 is oriented with the active surface 221 facing away from the surface 206, the die 230 and the component 220 may be referred to as having a face-to-back orientation.
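The geometric conditions stated for Figures 2E-2G reduce to two simple checks: the component plus its FLIs must be shorter than the cavity is deep (so the component backside is recessed), and backside planarization must remove enough material to expose the blind TSVs. The sketch below only encodes those stated relationships; the function names and the numeric values in the example are assumptions, not dimensions from the disclosure.

```python
def component_fits_in_cavity(cavity_depth_um: float,
                             component_thickness_um: float,
                             fli_height_um: float) -> bool:
    """True when the component backside is recessed below the substrate
    backside, i.e. the cavity is deeper than the component + FLI stack."""
    return cavity_depth_um > component_thickness_um + fli_height_um


def backside_removal_to_expose_tsvs(substrate_thickness_um: float,
                                    tsv_depth_um: float) -> float:
    """Approximate backside removal (grind/CMP) needed so blind TSVs formed
    from the front surface become exposed at the back side."""
    if tsv_depth_um >= substrate_thickness_um:
        return 0.0  # vias already reach through the substrate
    return substrate_thickness_um - tsv_depth_um


# Purely illustrative numbers (assumed, not taken from the embodiments):
print(component_fits_in_cavity(cavity_depth_um=80.0,
                               component_thickness_um=50.0,
                               fli_height_um=20.0))            # True
print(backside_removal_to_expose_tsvs(substrate_thickness_um=100.0,
                                      tsv_depth_um=90.0))      # 10.0
```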
Referring now to Figures 3A-3D, a series of cross-sectional illustrations depicting a process for forming an electronic package with a via-last process flow is shown, in accordance with an embodiment.

Referring now to Figure 3A, a cross-sectional illustration of a base substrate 305 is shown, in accordance with an embodiment. The base substrate 305 may be a silicon substrate in some embodiments. In an embodiment, the base substrate 305 may comprise a surface 306. The surface 306 may comprise conductive features (e.g., pads, traces, etc.). In some embodiments where the base substrate 305 is an active substrate, the surface 306 may also comprise active circuitry (e.g., transistors or the like). In an embodiment, the base substrate 305 may have a thickness T1. For example, the thickness T1 may be approximately 100µm or less. It is to be appreciated that the reduced thickness T1 (compared to a typical silicon wafer thickness of 800µm) may be provided by grinding the base substrate 305 down to a desired thickness. In contrast to the base substrate 205 illustrated in Figure 2B, the base substrate 305 does not have TSVs at this point in the process flow.

Referring now to Figure 3B, a cross-sectional illustration of the base substrate 305 after a carrier 380 is attached is shown, in accordance with an embodiment. In an embodiment, the carrier 380 may be secured to the surface 306 of the base substrate 305 by an adhesive film 382.

Referring now to Figure 3C, a cross-sectional illustration of the base substrate 305 after TSVs 307 are formed is shown, in accordance with an embodiment. In an embodiment, the TSVs 307 may be formed by creating openings through the base substrate 305 and filling the openings with a conductive material. The openings may be formed with an etching process using a photoresist (not shown) as a mask. The TSVs 307 may have a surface exposed at the backside surface of the base substrate 305.

Referring now to Figure 3D, a cross-sectional illustration of the base substrate 305 after a cavity 315 is formed is shown, in accordance with an embodiment. In an embodiment, the cavity 315 may be formed with an etching process that removes a portion of the base substrate 305. The etching process may be a wet or dry etching process that utilizes a photoresist (not shown) over the base substrate 305 in order to define the boundary of the cavity 315. The cavity 315 may extend through the base substrate 305 and end at the surface 306. In an embodiment, a plurality of pads 329 may be exposed by the cavity 315. The pads 329 may have been fabricated as part of the surface 306 prior to the formation of the cavity 315.

After formation of the cavity 315, the processing may continue with substantially the same processing operations detailed with respect to Figures 2E-2J in order to provide an electronic package in accordance with an embodiment.
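The flow of Figures 3A-3D differs from that of Figures 2A-2J mainly in where TSV formation occurs in the sequence. As a hedged summary, the sketch below lists the two operation orders as plain data; the step wording is paraphrased from the description above and the variable names are arbitrary.

```python
# Paraphrased operation order for the two flows described above.
# "Via-first": blind TSVs are formed before thinning (Figures 2A-2B);
# "via-last": the substrate is thinned first and TSVs are formed from
# the back side afterwards (Figures 3A-3C).

VIA_FIRST_FLOW = [
    "form blind TSVs partially through base substrate",
    "thin base substrate (TSVs remain embedded)",
    "attach carrier to front surface",
    "etch cavity from back side, exposing pads",
    "attach component in cavity (FLIs / underfill)",
    "fill cavity with mold layer",
    "planarize back side to expose TSVs",
    "form PSBs, remove carrier, attach die(s)",
]

VIA_LAST_FLOW = [
    "thin base substrate",
    "attach carrier to front surface",
    "form TSV openings from back side and fill with conductor",
    "etch cavity from back side, exposing pads",
    # remaining operations proceed substantially as in the via-first flow
    *VIA_FIRST_FLOW[4:],
]

for step in VIA_LAST_FLOW:
    print("-", step)
```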
Referring now to Figures 4A-4D, a series of cross-sectional illustrations depicting a process for forming a cavity and disposing a component in the cavity is shown in greater detail, in accordance with an embodiment.

Referring now to Figure 4A, a cross-sectional illustration of a base substrate 405 on a carrier 480 is shown, in accordance with an embodiment. The base substrate 405 may be attached to the carrier 480 with an adhesive film 482. The adhesive film 482 may cover the surface 406 of the base substrate 405 and any pads 453 over the surface 406. In some embodiments, the surface 406 may comprise conductive features 452, such as traces, pads, vias, and the like, that will provide interconnections to components and dies of the electronic package.

In an embodiment, a plurality of pads 429 may be formed along the surface 406 and embedded in the body of the base substrate 405. The pads 429 are located where the component will be attached in a subsequent processing operation. In some embodiments, the pads 429 may be separated from the surface 406 by an insulative liner (e.g., SiN or the like). In an embodiment, the base substrate 405 may also comprise TSVs 407 that are over pads 409. The TSVs 407 may not extend entirely through the base substrate 405 at this point in the process flow.

Referring now to Figure 4B, a cross-sectional illustration of the base substrate 405 after a cavity 415 is formed is shown, in accordance with an embodiment. In an embodiment, the cavity 415 may be formed into the base substrate 405 through the backside surface. The cavity 415 may be positioned between TSVs 407 and expose the pads 429. In some embodiments, the cavity 415 may be lined with a liner (not shown), such as a nitride. The exposed pads 429 may also be plated with a conductive barrier layer or the like.

Referring now to Figure 4C, a cross-sectional illustration after the component 420 is attached to the pads 429 is shown, in accordance with an embodiment. In an embodiment, the component 420 may comprise pads 433 over an active surface 421. The pads 433 may be coupled to the pads 429 with FLIs 418. The FLIs 418 may comprise solder. In other embodiments, the FLIs 418 may comprise a copper-to-copper interconnection between the pads 433 and the pads 429.

Referring now to Figure 4D, a cross-sectional illustration after a mold layer 426 is disposed into the cavity 415 is shown, in accordance with an embodiment. In an embodiment, the mold layer 426 may be an epoxy or the like. In the illustrated embodiment, the mold layer 426 may also function as an underfill material that surrounds the FLIs 418. However, other embodiments may include a dedicated underfill material that surrounds the FLIs 418 and that is distinct from the mold layer (e.g., similar to what is shown in Figure 2F). After the mold layer 426 fills the cavity 415, the base substrate 405 (and the mold layer 426) may be planarized in order to expose the TSVs 407 at the backside surface of the base substrate 405. While not illustrated in Figure 4D, it is to be appreciated that the planarizing process may also expose TSVs in the component 420 when they are present.
Referring now to Figures 5A-5D, a series of cross-sectional illustrations depicting a process for forming a cavity and disposing a component in the cavity with a via-last process is shown in greater detail, in accordance with an embodiment.

Referring now to Figure 5A, a cross-sectional illustration of a base substrate 505 on a carrier 580 is shown, in accordance with an embodiment. The base substrate 505 may be attached to the carrier 580 with an adhesive film 582. The adhesive film 582 may cover the surface 506 of the base substrate 505 and any pads 553 over the surface 506. In some embodiments, the surface 506 may comprise conductive features 552, such as traces, pads, vias, and the like, that will provide interconnections to components and dies of the electronic package.

In an embodiment, a plurality of pads 529 may be formed along the surface 506 and embedded in the body of the base substrate 505. The pads 529 are located where the component will be attached in a subsequent processing operation. In some embodiments, the pads 529 may be separated from the surface 506 by an insulative liner (e.g., SiN or the like). In contrast to the embodiment shown in Figure 4A, the base substrate 505 may omit TSVs at this point in the process flow.

Referring now to Figure 5B, a cross-sectional illustration after via openings 504 are formed into the base substrate 505 is shown, in accordance with an embodiment. In an embodiment, the openings 504 may be formed with an etching process that utilizes a photoresist mask (not shown) to define the openings. In some embodiments, the openings 504 may be lined with an insulating liner (e.g., SiN or the like). The openings 504 may expose portions of pads 509 embedded in the base substrate 505.

Referring now to Figure 5C, a cross-sectional illustration after TSVs 507 are disposed in the openings 504 is shown, in accordance with an embodiment. In an embodiment, the TSVs 507 may be plated with any suitable process, such as electroless plating or the like.

Referring now to Figure 5D, a cross-sectional illustration of the base substrate 505 after a cavity 515 is formed and a component 520 is disposed in the cavity 515 is shown, in accordance with an embodiment. In an embodiment, the cavity 515 may be formed into the base substrate 505 through the backside surface. The cavity 515 may be positioned between TSVs 507 and expose the pads 529. In some embodiments, the cavity 515 may be lined with a liner (not shown), such as SiN. The exposed pads 529 may also be plated with a conductive barrier layer or the like.

In an embodiment, the component 520 may comprise pads 533 over an active surface 521. The pads 533 may be coupled to the pads 529 with FLIs 518. The FLIs 518 may comprise solder. In other embodiments, the FLIs 518 may comprise a copper-to-copper interconnection between the pads 533 and the pads 529. Subsequent to the attachment of the component 520 to the pads 529, the processing flow may continue in substantially the same manner described above with respect to Figure 4D.
Referring now to Figure 6, a cross-sectional illustration of an electronic system 680 is shown, in accordance with an embodiment. In an embodiment, the electronic system 680 may comprise an electronic package 600 that is attached to a board 690, such as a printed circuit board (PCB) or the like. In an embodiment, the electronic package 600 may be coupled to the board 690 with PSBs 614 or any other suitable interconnect architecture.

In an embodiment, the electronic package 600 may be any package such as those described above in greater detail. For example, the electronic package 600 may comprise a base substrate 605. In an embodiment, the base substrate 605 may be an active or passive substrate. The base substrate 605 may comprise a surface 606 that includes conductive routing or other conductive features (not shown). The base substrate 605 may comprise silicon. In an embodiment, a plurality of dies (e.g., dies 630 and 640) may be coupled to the base substrate 605. For example, active surfaces 631 and 641 of the dies 630 and 640 may be attached to the surface 606 with FLIs 616. In an embodiment, the base substrate 605 may comprise TSVs 607. In an embodiment, the plurality of dies 630, 640 may be embedded in a mold layer 612.

In an embodiment, the base substrate 605 may comprise a plurality of cavities (e.g., cavities 615A and 615B). In an embodiment, one or more of the cavities 615 are entirely within a footprint of one of the dies 630, 640. In other embodiments, one or more of the cavities 615 are at least partially within a footprint of a first die 630 and at least partially within a footprint of a second die 640.

In an embodiment, each of the cavities 615 may be filled with a component (e.g., component 620 or component 660). The components 620, 660 may be any of a variety of different component types, such as a die or die stack (e.g., a processor die, a memory die, a power die, a communication die, etc.), a passive component (e.g., a bridge, a capacitor, an inductor, etc.), a cooling module (e.g., a TEC module), or the like. In embodiments where the component 620 and/or 660 is a die or a die stack, the components 620, 660 may be fabricated at a first process node and one or both of the dies 630, 640 may be fabricated at a second process node. In some embodiments, the first process node may be different than the second process node. In an embodiment, the components 620, 660 may comprise active surfaces 621, 661. The active surfaces 621, 661 may be oriented in a face-to-face configuration or back-to-face configuration with the dies 630, 640. In an embodiment, one or both of the components 620, 660 may comprise TSVs 627, 667. The components 620, 660 may be electrically coupled to the surface 606 of the base substrate 605 with interconnects 618.
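The footprint relationships described for the cavities 615A and 615B (a cavity entirely within one die's footprint, or straddling the footprints of two dies) amount to rectangle containment and overlap tests in plan view. The sketch below is illustrative only; the axis-aligned rectangle representation and the coordinates in the example are invented for the illustration and are not taken from Figure 6.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned footprint in plan view, corners (x0, y0) .. (x1, y1), in mm."""
    x0: float
    y0: float
    x1: float
    y1: float

def contains(outer: Rect, inner: Rect) -> bool:
    """True if `inner` lies entirely within `outer` (e.g., a cavity fully under one die)."""
    return (outer.x0 <= inner.x0 and outer.y0 <= inner.y0 and
            outer.x1 >= inner.x1 and outer.y1 >= inner.y1)

def overlaps(a: Rect, b: Rect) -> bool:
    """True if the two footprints share any area (at least partial overlap)."""
    return a.x0 < b.x1 and b.x0 < a.x1 and a.y0 < b.y1 and b.y0 < a.y1

# Invented coordinates: die 630 and die 640 side by side, cavity 615A
# straddling both footprints, cavity 615B entirely under die 640.
die_630 = Rect(0.0, 0.0, 10.0, 10.0)
die_640 = Rect(11.0, 0.0, 21.0, 10.0)
cavity_615A = Rect(8.0, 3.0, 13.0, 7.0)
cavity_615B = Rect(14.0, 3.0, 18.0, 7.0)

print(overlaps(cavity_615A, die_630), overlaps(cavity_615A, die_640))  # True True
print(contains(die_640, cavity_615B))                                  # True
```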
Figure 7 illustrates a computing device 700 in accordance with one implementation of the invention. The computing device 700 houses a board 702. The board 702 may include a number of components, including but not limited to a processor 704 and at least one communication chip 706. The processor 704 is physically and electrically coupled to the board 702. In some implementations the at least one communication chip 706 is also physically and electrically coupled to the board 702. In further implementations, the communication chip 706 is part of the processor 704.

Depending on its applications, the computing device 700 may include other components that may or may not be physically and electrically coupled to the board 702. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

The communication chip 706 enables wireless communications for the transfer of data to and from the computing device 700. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.

The communication chip 706 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 700 may include a plurality of communication chips 706. For instance, a first communication chip 706 may be dedicated to shorter-range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 706 may be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 704 of the computing device 700 includes an integrated circuit die packaged within the processor 704. In some implementations of the invention, the integrated circuit die of the processor may be packaged in an electronic system that comprises a multi-chip package with a base substrate that comprises a cavity that houses a component, in accordance with embodiments described herein. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

The communication chip 706 also includes an integrated circuit die packaged within the communication chip 706. In accordance with another implementation of the invention, the integrated circuit die of the communication chip may be packaged in an electronic system that comprises a multi-chip package with a base substrate that comprises a cavity that houses a component, in accordance with embodiments described herein.

The above description of illustrated implementations of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific implementations of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.

These modifications may be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific implementations disclosed in the specification and the claims.
Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.

Example 1: an electronic package, comprising: a base substrate, the base substrate having a plurality of through substrate vias; a first die over the base substrate; a first cavity into the base substrate, wherein the first cavity is at least partially within a footprint of the first die; and a first component in the first cavity.

Example 2: the electronic package of Example 1, wherein the first component is a second die.

Example 3: the electronic package of Example 1 or Example 2, wherein the second die comprises through substrate vias.

Example 4: the electronic package of Examples 1-3, wherein an active surface of the second die faces an active surface of the first die.

Example 5: the electronic package of Examples 1-4, wherein an active surface of the second die faces away from an active surface of the first die.

Example 6: the electronic package of Examples 1-5, wherein the first component is a passive electrical component.

Example 7: the electronic package of Examples 1-6, wherein the first component is a thermoelectric cooling (TEC) module.

Example 8: the electronic package of Examples 1-7, wherein the first cavity is entirely within the footprint of the first die.

Example 9: the electronic package of Examples 1-8, further comprising: a second die over the base substrate.

Example 10: the electronic package of Examples 1-9, wherein the first cavity is at least partially within a footprint of the second die.

Example 11: the electronic package of Examples 1-10, wherein the first component electrically couples the first die to the second die.

Example 12: the electronic package of Examples 1-11, further comprising: a second cavity into the base substrate, wherein the second cavity is entirely within the footprint of the first die.

Example 13: the electronic package of Examples 1-12, further comprising: a second component in the second cavity.

Example 14: the electronic package of Examples 1-13, wherein the first die is electrically coupled to the second die by one or more traces on the base substrate.

Example 15: the electronic package of Examples 1-14, wherein the base substrate is a passive substrate.

Example 16: the electronic package of Examples 1-15, wherein the base substrate is an active substrate.

Example 17: a method of forming an electronic package, comprising: forming through substrate vias (TSVs) partially through a base substrate; thinning the base substrate, wherein the TSVs are not exposed; attaching a carrier to the base substrate; forming a cavity into the base substrate, wherein the cavity exposes a plurality of pads; attaching a component to the plurality of pads; embedding the component within a mold layer; planarizing the base substrate, wherein the planarizing exposes the TSVs; removing the carrier; and attaching a die to the base substrate.

Example 18: the method of Example 17, wherein an active surface of the component faces an active surface of the die.

Example 19: the method of Example 17 or Example 18, wherein the cavity is at least partially within a footprint of the die.

Example 20: the method of Examples 17-19, wherein the component is a second die.

Example 21: the method of Examples 17-20, wherein the second die comprises through substrate vias.
Example 22: an electronic system, comprising: a board; an electronic package coupled to the board, wherein the electronic package comprises: a base substrate, wherein the base substrate comprises through substrate vias (TSVs), and wherein the base substrate comprises silicon; a first die over the base substrate; a second die over the base substrate; a first cavity into the base substrate, wherein the first cavity is at least partially within a footprint of the first die and at least partially within a footprint of the second die; a first component in the first cavity; a second cavity into the base substrate; and a second component in the second cavity.

Example 23: the electronic system of Example 22, wherein the first component electrically couples the first die to the second die.

Example 24: the electronic system of Example 22 or Example 23, wherein at least one of the first component and the second component comprises through substrate vias.

Example 25: the electronic system of Examples 22-24, wherein an active surface of the first die faces an active surface of the first component.